
VERITAS Foundation Suite 3.5 for Solaris: Administration and Troubleshooting

Participant Guide

100-001821-A

COURSE DEVELOPER Jade Arrington

Disclaimer: The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this guide, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Copyright: Copyright 2002 VERITAS Software Corporation. All rights reserved. No part of the contents of this training material may be reproduced in any form or by any means or be used for the purposes of training or education without the written permission of VERITAS Software Corporation.

LEAD SUBJECT MATTER EXPERTS Robert Lucas Dave Rogers Stephen Williams

TECHNICAL CONTRIBUTORS AND REVIEWERS Gail Adey Billie Bachra Shawn Bagheri Margy Cassidy Bilge Gerrits Bill Havey Russell Henmi Gene Henriksen Allison Hope Gerald Jackson Scott Kaiser Ganessan Kalayanasunduram Bill Lehman Joe Maionchi Gil Mayol Steve Pate Subbiah Sundaram Peter Vajgel

Trademark Notice: VERITAS, the VERITAS logo, and VERITAS FirstWatch, VERITAS Cluster Server, VERITAS File System, VERITAS Volume Manager, VERITAS NetBackup, and VERITAS HSM are registered trademarks of VERITAS Software Corporation. Other product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

Printed in the USA, September 2002.
VERITAS Foundation Suite 3.5 for Solaris: Administration and Troubleshooting Participant Guide 1.0
SKU: TK-FOS-0003
VERITAS Software Corporation
350 Ellis Street
Mountain View, CA 94043
Phone: 650-527-8000
Fax: 650-527-8050
www.veritas.com

Contents
Volume I
Course Introduction
What Is Storage Virtualization? I-2

Storage Management Issues I-2 Defining Storage Virtualization I-3 How Is Storage Virtualization Used in Your Environment? I-4 Storage-Based Storage Virtualization I-5 Host-Based Storage Virtualization I-6 Network-Based Storage Virtualization I-7
Introducing VERITAS Foundation Suite I-8

What Is VERITAS Volume Manager? I-9 What Is VERITAS File System? I-10 Benefits of VERITAS Foundation Suite I-11
Course Overview I-13

Course Objectives I-13
Additional Course Resources I-14

Lesson 1: Virtual Objects


Introduction 1-2
Physical Data Storage 1-4

Physical Storage Objects 1-4 Physical Disk Structure 1-4 Physical Disk Naming 1-5 Disk Arrays 1-6 Multipathed Disk Arrays 1-6
Virtual Data Storage 1-7

Virtual Storage Management 1-7 What Is a Volume? 1-7 How Do You Access a Volume? 1-7 Why Use Volume Manager? 1-8 Volume Manager-Controlled Disks 1-9
Volume Manager Storage Objects 1-11

Virtual Objects 1-11 Disk Groups 1-12 Volume Manager Disks 1-13 Notes on VxVM Disk Naming 1-13 Subdisks 1-14 Plexes 1-15 Volumes 1-17


Volume Manager Storage Layouts 1-19

Volume Layouts 1-19 Concatenated 1-20 Striped 1-20 Mirrored 1-20 RAID-5 1-20 Layered 1-20
Summary 1-21

Lesson 2: Installing VERITAS Foundation Suite


Introduction 2-2
Installation Prerequisites 2-4

OS Version Compatibility 2-4 Compatibility with Other VERITAS Products 2-5 Version Release Differences 2-7
VxVM and VxFS Software Packages 2-9

VERITAS Storage Solutions Products and Suites 2-9 VxVM Standard Packages 2-11 VERITAS Enterprise Administrator Packages 2-11 VxVM Package Space Requirements 2-12 VxFS Standard Packages 2-13 Package Space Requirements 2-13 Other Options Included with Foundation Suite 2-14 Other Options Available for Foundation Suite 2-15 Licenses Required for Optional Features 2-17
Adding License Keys 2-18

License Keys 2-18 Frequently Asked Questions About the A5x00-VxVM Bundle 2-19 Obtaining a License Key 2-20 Generating License Keys with vLicense 2-21 Adding a License Key 2-22 Viewing Installed License Keys 2-23 Managing Multiple Licensing Utilities 2-25
Adding Foundation Suite Packages 2-26

Methods for Adding Foundation Suite Packages 2-26 Adding Packages with the Installer 2-26 Adding Packages Manually with pkgadd 2-29 Verifying Package Installation 2-31 Listing Installed VERITAS Packages 2-31 Listing Detailed Package Information 2-32
Planning VxVM Setup 2-33

Planning a First-Time VxVM Setup 2-33 Example: Typical Initial VxVM Setup 2-41


Installing VxVM for the First Time 2-42

The vxinstall Program 2-42 The vxinstall Process 2-43
Summary 2-53

Lesson 3: VERITAS Volume Manager Interfaces


Introduction 3-2
VxVM User Interfaces 3-4

Volume Manager User Interfaces 3-4
Using the VEA Interface 3-5

VERITAS Enterprise Administrator 3-5 The VEA Main Window 3-6 Other Views in VEA 3-7 Accessing Tasks Through VEA 3-8 Setting VEA Preferences 3-9 Viewing Tasks Through VEA 3-10 Displaying VEA Help Information 3-14
Using the Command Line Interface 3-15

Command Line Interface 3-15 Examples of CLI Commands 3-15 Accessing Manual Pages for CLI Commands 3-17
Using the vxdiskadm Interface 3-19

The vxdiskadm Interface 3-19
Installing the VEA Software 3-21

The VEA Software Packages 3-21 Installing the VEA Server and Client on Solaris 3-22 Installing the VEA Client on Windows 3-23
Starting the VEA Server and Client 3-24

Starting the VEA Server 3-24 Manually Starting the VEA Server 3-24 Starting the VEA Client 3-25 Connecting Automatically at VEA Client Startup 3-26
Managing the VEA Server 3-28

Confirming VEA Server Startup 3-28 Stopping the VEA Server 3-28 Displaying the VEA Version 3-28 Monitoring VEA Event and Task Logs 3-29
Customizing VEA Security 3-30

Controlling User Access to VEA 3-30 Modifying Group Access 3-31


Summary 3-33


Lesson 4: Managing Disks


Introduction 4-2
Naming Disk Devices 4-4

Device Naming Schemes 4-4 Traditional Device Naming 4-4 Enclosure-Based Naming 4-5 Benefits of Enclosure-Based Naming 4-6 Selecting a Naming Scheme 4-7 Administering Enclosure-Based Naming 4-8 Changing the Disk-Naming Scheme 4-9
VxVM Disk Configuration Stages 4-10

Placing a Disk Under Volume Manager Control 4-10 Before Configuring a Disk for Use by VxVM 4-10 Stages of Disk Configuration 4-10 Stage One: Initialize Disk 4-11 Stage Two: Assign a Disk to a Disk Group 4-12 Stage Three: Assign Disk Space to Volumes 4-13
Adding a Disk to a Disk Group 4-14

Before You Add a Disk 4-14 Adding Disks 4-15 Disk Naming 4-15 Default Disk Naming 4-15 Notes on Disk Naming 4-16 Adding a Disk: Methods 4-17 Adding a Disk: VEA 4-18 Adding a Disk: vxdiskadm 4-20 Adding a Disk: CLI 4-22
Viewing Disk Information 4-25

Keeping Track of Your Disks 4-25 Viewing Disk Information: Methods 4-25 Displaying Disk Information: VEA 4-26 Displaying Disk Information: CLI 4-29 Displaying Disk Information: vxdiskadm 4-37
Removing a Disk from a Disk Group 4-38

Removing Disks 4-38 Before You Remove a Disk 4-38 Evacuating a Disk 4-39 Removing a Disk: Methods 4-41 Removing a Disk: VEA 4-42 Removing a Disk: vxdiskadm 4-43 Removing a Disk: CLI 4-44
Renaming a Disk 4-46

Changing the Disk Media Name 4-46 Before You Rename a Disk 4-46 Renaming a Disk: VEA 4-46 Renaming a Disk: CLI 4-47


Moving a Disk 4-48

Moving an Empty Disk from One Disk Group to Another 4-48 Moving a Disk: VEA 4-48 Moving a Disk: vxdiskadm 4-48 Moving a Disk: CLI 4-49
Summary 4-50

Lesson 5: Managing Disk Groups


Introduction 5-2
Purposes of Disk Groups 5-4

What Is a Disk Group? 5-4 Why Are Disk Groups Needed? 5-4 Disk Management 5-5 Disk Management: Example 5-6 The rootdg Disk Group 5-7 Example: Disk Groups and High Availability 5-8
Creating a Disk Group 5-9

Creating a Disk Group 5-9 Creating a Disk Group: Methods 5-9 Creating a Disk Group: VEA 5-10 Creating a Disk Group: vxdiskadm 5-11 Creating a Disk Group: CLI 5-13
Creating Spare Disks for a Disk Group 5-14

Designating a Disk As a Hot-Relocation Spare 5-14 Setting Up a Disk As a Spare: VEA 5-14 Setting Up a Disk As a Spare: vxdiskadm 5-15 Setting Up a Disk As a Spare: CLI 5-15
Deporting a Disk Group 5-16

Making a Disk Group Unavailable 5-16 Specifying a New Host 5-16 Deporting and Renaming 5-16 Before You Deport a Disk Group 5-17 Deporting a Disk Group: Methods 5-17 Deporting a Disk Group: VEA 5-18 Deporting a Disk Group: vxdiskadm 5-19 Deporting a Disk Group: CLI 5-20
Importing a Disk Group 5-21

Importing a Deported Disk Group 5-21 Importing and Renaming 5-21 Clearing Host Locks 5-21 Importing As Temporary 5-22 Forcing an Import 5-22 Importing a Disk Group: Methods 5-23

Importing a Disk Group: VEA 5-24 Importing a Disk Group: vxdiskadm 5-25 Importing a Disk Group: CLI 5-26
Moving Disk Groups Between Systems 5-28

Moving a Disk Group: VEA 5-28 Moving a Disk Group: vxdiskadm 5-29 Moving a Disk Group: CLI 5-29
Renaming a Disk Group 5-30

Renaming a Disk Group: VEA 5-30 Renaming a Disk Group: CLI 5-31
Destroying a Disk Group 5-32

Destroying a Disk Group: VEA 5-33 Destroying a Disk Group: CLI 5-34
Viewing Disk Group Information 5-35

Viewing Disk Group Information: Methods 5-35 Viewing Disk Group Properties: VEA 5-36 Viewing Disk Group Properties: CLI 5-37
Upgrading a Disk Group 5-40

Disk Group Versioning 5-40 Upgrading a Disk Group: VEA 5-43 Upgrading a Disk Group: CLI 5-44
Summary 5-46

Lesson 6: Creating a Volume


Introduction 6-2
Selecting a Volume Layout 6-4

What Is Volume Layout? 6-4 Spanning 6-5 Redundancy 6-5 Resilience 6-5 RAID 6-6 VxVM-Supported RAID Levels 6-7 VxVM Volume Layout Types 6-8 Concatenated Layout 6-9 Concatenation: Advantages 6-9 Concatenation: Disadvantages 6-9 Striped Layout 6-10 Striping: Advantages 6-11 Striping: Disadvantages 6-11 Mirrored Layout 6-12 Mirroring: Advantages 6-13 Mirroring: Disadvantages 6-13 RAID-5 6-14 RAID-5: Advantages 6-15

RAID-5: Disadvantages 6-15


Creating a Volume 6-16

Creating a Volume 6-16 Before You Create a Volume 6-16 Creating a Volume: Methods 6-17 Creating a Volume: VEA 6-18 Creating a Volume: CLI 6-25
Displaying Volume Layout Information 6-35

Displaying Volume Information: Methods 6-35 Displaying Volume Information: VEA 6-36 Displaying Volume Layout Information: CLI 6-42 Displaying Information for All Volumes 6-44
Removing a Volume 6-46

Removing a Volume: VEA 6-47 Removing a Volume: CLI 6-48


Summary 6-49

Lesson 7: Configuring Volumes


Introduction 7-2
Administering Mirrors 7-4

Adding a Mirror 7-4 Adding a Mirror: VEA 7-5 Adding a Mirror: CLI 7-6 Mirroring All Volumes 7-6 Setting a Default Mirror on Volume Creation 7-7 Removing a Mirror 7-8 Removing a Mirror: VEA 7-9 Removing a Mirror: CLI 7-10
Adding a Log to a Volume 7-11

Logging in VxVM 7-11 Dirty Region Logging 7-11 RAID-5 Logging 7-12 Adding a Log: VEA 7-13 Removing a Log: VEA 7-13 Adding a Log: CLI 7-14 Removing a Log: CLI 7-15
Changing the Volume Read Policy 7-16

Volume Read Policies with Mirroring 7-16 Changing the Volume Read Policy: VEA 7-17 Changing the Volume Read Policy: CLI 7-18
Adding a File System to a Volume 7-19

Adding a File System to a Volume: Methods 7-19 Adding a File System to a Volume: VEA 7-20 Mounting a File System: VEA 7-21

Unmounting a File System: VEA 7-21 Adding a File System to a Volume: CLI 7-22 Mounting a File System at Boot: CLI 7-23
Allocating Storage for Volumes 7-24

Specifying Storage Attributes for Volumes 7-24 Specifying Storage Attributes: VEA 7-25 Specifying Storage Attributes: CLI 7-26 Specifying Ordered Allocation of Storage for Volumes 7-29 Specifying Ordered Allocation: VEA 7-30 Specifying Ordered Allocation: CLI 7-30 Specifying SAN Storage Groups 7-37
What Is a Layered Volume? 7-38
Methods Used to Mirror Data 7-38 Comparing Regular Mirroring with Enhanced Mirroring 7-39 How Do Layered Volumes Work? 7-41 Layered Volumes: Advantages 7-42 Layered Volumes: Disadvantages 7-42 Layered Volume Layouts 7-43 mirror-concat 7-44 mirror-stripe 7-45 concat-mirror 7-46 stripe-mirror 7-47
Creating a Layered Volume 7-48

Creating a Layered Volume: VEA 7-48 Creating a Layered Volume: CLI 7-49 Controlling VxVM Mirroring 7-50 Default Mirroring Behavior 7-51 Creating Layered Volumes: Examples 7-52 Viewing a Layered Volume: VEA 7-53 Viewing a Layered Volume: CLI 7-53
Summary 7-56

Lesson 8: Volume Maintenance


Introduction 8-2
Resizing a Volume 8-4

Resizing a Volume 8-4 Resizing a Volume: Methods 8-6 Resizing a Volume: VEA 8-7 Resizing a Volume: CLI 8-8 Resizing Volumes with vxassist 8-9 Resizing Volumes with vxresize 8-11
Creating a Volume Snapshot 8-14

Creating a Snapshot Copy of a Volume 8-14 Creating a Volume Snapshot: Phases 8-15 Creating a Volume Snapshot: Methods 8-16

Creating a Volume Snapshot: VEA 8-17 Removing a Snapshot Volume: VEA 8-19 Reassociating a Snapshot Volume (Snapback): VEA 8-19 Dissociating a Snapshot Volume (Snapclear): VEA 8-20 Creating a Snapshot Volume: CLI 8-21 Removing a Snapshot Volume: CLI 8-24 Reassociating a Snapshot Volume (Snapback): CLI 8-24 Dissociating a Snapshot Volume (Snapclear): CLI 8-25
Changing the Volume Layout 8-26

What Is Online Relayout? 8-26 Supported Transformations 8-27 How Does Online Relayout Work? 8-28 Notes on Online Relayout 8-30 Changing the Volume Layout: Methods 8-31 Changing the Volume Layout: VEA 8-32 Changing the Volume Layout: CLI 8-34 The vxassist relayout Command 8-35 Changing to a Striped Layout: CLI 8-36 Changing Column and Stripe Characteristics: CLI 8-36 Changing to a RAID-5 Layout: CLI 8-37 Converting to a Layered Volume: CLI 8-38
Managing Volume Tasks 8-39

Monitoring and Controlling Online Relayout 8-39 Managing Volume Tasks: Methods 8-39 Managing Volume Tasks: VEA 8-40 Managing Volume Tasks: CLI 8-41 Controlling the Task Progress Rate 8-51
Summary 8-53

Volume II
Lesson 9: Setting Up a File System
Introduction 9-2
File System Types 9-4

Types of File Systems 9-4 Type-Independent File Systems 9-4 Type-Dependent File Systems 9-4 Data Flow Through File Systems 9-5
Using VERITAS File System Commands 9-6

Using VxFS As an Alternate to UFS 9-6 Location of VxFS Commands 9-6 General File System Command Syntax 9-7 Using VxFS Commands by Default 9-7 VxFS Commands 9-8 Administering a File System Using VEA 9-9

Creating a New File System 9-10

The mkfs Command 9-10 Steps to Create a New File System 9-11 Example: Creating a File System on a VxVM Volume 9-11
Setting File System Properties 9-12

Using mkfs Command Options 9-12 Checking VxFS Structure 9-13 Enabling Large File Support 9-14 Specifying a File System Layout Version 9-16 Setting Block Size 9-17 Default Block Size 9-17 Considerations for Setting Block Size 9-17 Setting Log Size 9-18 Selecting an Appropriate Log Size 9-19
Mounting a File System 9-20

The mount Command 9-20 Displaying Mounted File Systems 9-21 Mounting All File Systems 9-21
Mounting a File System Automatically 9-22

The vfstab File 9-22 Adding an Entry to the vfstab File 9-23
Unmounting a File System 9-24

The umount Command 9-24 Unmounting All File Systems 9-24 Forcing an Unmount 9-25

Identifying File System Type 9-26

The fstyp Command 9-26 Example: Displaying File System Type 9-26 Example: Verbose Mode 9-27
Identifying Free Space 9-28

The df Command 9-28 Syntax for the df Command 9-28 Generic Options 9-29 Example: Displaying Free Space 9-29
Maintaining File System Consistency 9-30

The fsck Command 9-30 Example: Checking VxFS Consistency 9-30
Summary 9-31

Lesson 10: Online File System Administration


Introduction 10-2
Resizing a File System 10-4

File System Size 10-4 Traditional File System Resizing 10-4



Resizing a VERITAS File System 10-5 The fsadm Command 10-6 Example: Expanding a VERITAS File System Using fsadm 10-7 Example: Shrinking a VERITAS File System Using fsadm 10-7 The vxresize Command 10-8 Example: Expanding a Volume and File System Using vxresize 10-9 Example: Shrinking a Volume and File System Using vxresize 10-9 Troubleshooting Tips: Resizing a File System 10-10
Backing Up a File System 10-11

VxFS Backup and Restore Utilities 10-11 The vxdump Command 10-12 The vxdump Options 10-13 Example: Dumping to a File 10-14 Example: Dumping to a Tape 10-14
Restoring a File System 10-15

The vxrestore Command 10-15 The vxrestore Options 10-16 Example: Restoring from a File 10-16 Example: Restoring from a Tape 10-16 Troubleshooting Tips: Using vxdump and vxrestore 10-17
Creating a Snapshot File System 10-18

Backing Up a VERITAS File System 10-18 Traditional File System Backups 10-18 What Is a Snapshot File System? 10-19 What Does a Snapshot File System Contain? 10-19 How Is a Snapshot File System Used in a Backup? 10-19 Snapshot File System Disk Structure 10-20 Mounting a Snapshot File System 10-20 Data Copied to Snapshot 10-21 Reading a Snapshot 10-22 Creating a Snapshot File System 10-23 Example: Creating a Snapshot File System 10-23 Using a Snapshot File System for Backup 10-24 Backing Up a Snapshot File System 10-25 Restoring from a Snapshot File System Backup 10-25
Managing Snapshot File Systems 10-27

Selecting Snapshot File System Size 10-27 Multiple Snapshots of One File System 10-28 Performance of Snapshot File Systems 10-28 Troubleshooting Tips: Snapshot File Systems 10-29
Summary 10-30


Lesson 11: Defragmenting a File System


Introduction 11-2
Extent-Based Allocation 11-4

Comparing VxFS with Traditional UNIX Allocation Policies 11-4 UFS Block-Based Allocation 11-5 VxFS Extent-Based Allocation 11-6 Benefits of Extent-Based Allocation 11-8
VxFS File System Layout Options 11-9
File System Layout 11-9 VERITAS File System Layout Versions 11-9
Upgrading the File System Layout 11-10

Upgrading the Layout 11-10 Performing Online Upgrades 11-10 The vxupgrade Command 11-11 Using the vxupgrade Command 11-11 Displaying the File System Layout Version 11-11 How Does vxupgrade Work? 11-12
File System Structure 11-13
UFS Structure 11-13 VxFS Structural Components 11-14 Allocation Units 11-14 Structural Files 11-14
Converting UFS to VxFS 11-16

What Block Sizes Can Be Converted? 11-16 How Much Free Space Is Required? 11-16 How Long Does the Conversion Take? 11-17 The vxfsconvert Command 11-18 UFS to VxFS Conversion Process 11-19 What Is Converted? 11-21 What Is Not Converted? 11-21 What If the Conversion Fails? 11-21 How Does the Conversion Process Work? 11-22
Fragmentation 11-23

What Is Fragmentation? 11-23 Controlling Fragmentation 11-23 Types of Fragmentation 11-24


Monitoring Fragmentation 11-25

Running Fragmentation Reports 11-25 Running the Directory Fragmentation Report 11-26 Example: Reporting on Directory Fragmentation 11-26 Interpreting the Report 11-27 Running the Extent Fragmentation Report 11-28 Example: Reporting on Extent Fragmentation 11-29 Interpreting the Report 11-29


Guidelines for Interpreting Fragmentation Data 11-30 Example: Fragmented File System 11-31
Defragmenting a File System 11-32
VxFS Defragmentation 11-32 The fsadm Command 11-32 Notes on fsadm Options 11-33 Defragmenting Extents 11-34 Duration of Defragmentation 11-34 Example: Defragmenting Extents 11-34 Defragmenting Directories 11-37 Example: Defragmenting Directories 11-37 Scheduling Defragmentation 11-40 Scheduling Defragmentation as a cron Job 11-41 Defragmenting a File System: VEA 11-41
Summary 11-42

Lesson 12: Intent Logging


Introduction 12-2
Role of the Intent Log 12-4

What Is Intent Logging? 12-4 Traditional File System Recovery 12-4 VxFS Intent Log Replay 12-5 What Does the Intent Log Contain? 12-6 Preventing Intent Log Changes from Being Overwritten 12-6
Maintaining File System Consistency 12-7

The fsck Command 12-7
Generic Options 12-8 VxFS-Specific Options 12-8 VxFS fsck Example: Using the Intent Log 12-9 VxFS fsck Example: Without Using the Intent Log 12-9 VxFS fsck Example: Parallel Log Replay 12-9 Checking Consistency Using VEA 12-9 Output of the fsck Command 12-10 Notes on Running fsck 12-10
Selecting an Intent Log Size 12-11

Default Intent Log Size 12-11 Guidelines for Selecting an Intent Log Size 12-11
Controlling Logging Behavior 12-12

Selecting mount Options for Logging 12-12 Logging mount Options 12-12
Improving Performance Through Logging Options 12-14

Logging and VxFS Performance 12-14 Guidelines for Selecting mount Options 12-14 Specifying an I/O Size for Logging 12-16

Summary 12-17

Lesson 13: Architecture


Introduction 13-2
VxVM Component Design 13-4
Monitoring the VxVM Configuration Database 13-6

VxVM Configuration Database 13-6 Displaying Disk Group Configuration Data 13-7 Displaying Disk Configuration Data 13-9
Controlling the Configuration Daemon 13-12

VxVM Configuration Daemon: vxconfigd 13-12 How Does vxconfigd Work? 13-12 The vxdctl Utility 13-14 Displaying vxconfigd Status 13-14 Enabling vxconfigd 13-14 Starting vxconfigd 13-15 Stopping vxconfigd 13-15 Disabling vxconfigd 13-15
Checking Licensing Information 13-16

Displaying Supported VxVM Object Versions 13-16
Managing the volboot File 13-18

The volboot File 13-18 Viewing the Contents of volboot 13-18 Changing the Host ID 13-19 Re-Creating the volboot File 13-19

Summary 13-20

Lesson 14: Introduction to Recovery


Introduction 14-2
Maintaining Data Consistency 14-4

What Is Resynchronization? 14-4 Resynchronization Processes 14-5 Minimizing the Impact of Resynchronization 14-7 Dirty Region Logging 14-8 RAID-5 Logging 14-10 SmartSync Recovery Accelerator 14-11
Hot Relocation 14-13
Disk Failure 14-13 Impact of Disk Failure 14-13 What Is Hot Relocation? 14-14 How Does Hot Relocation Work? 14-15 How Is Space Selected for Relocation? 14-16


Managing Spare Disks 14-17

Managing Spare Disks: VEA 14-17 Managing Spare Disks: vxdiskadm 14-19 Managing Spare Disks: CLI 14-21
Replacing a Disk 14-23

Disk Replacement Tasks 14-23 Adding a New Disk 14-24 Disk Replacement Methods 14-25 Replacing a Disk: VEA 14-26 Replacing a Failed Disk: vxdiskadm Replacing a Disk: CLI 14-28
Unrelocating a Disk 14-29 The vxunreloc Utility

14-27

14-29 Unrelocating a Disk: VEA 14-30 Unrelocating a Disk: vxdiskadm 14-31 Unrelocating a Disk: CLI 14-32 Viewing Relocated Subdisks: CLI 14-32
14-33

Recovering a Volume

Recovering a Volume: VEA 14-33 Recovering a Volume: CLI 14-34 Recovering Volumes: vxdiskadm 14-36
Protecting the VxVM Configuration 14-37

Precautionary Tasks 14-37 The vxprint Command 14-37 Saving the Configuration Database 14-38 Displaying a Saved Configuration 14-38 Recovering a Lost Volume 14-38 Saving the /etc/system File 14-39
Summary 14-40

Lesson 15: Disk Problems and Solutions


Introduction 15-2
Identifying I/O Failure 15-4

Disk Failure 15-4 Disk Failure Handling 15-4 FAILING vs. FAILED Disks 15-5 Identifying Failure: Console Messages 15-6 Identifying Failure: Disk Records 15-7 Identifying Failure: Volume States 15-9 Example: Degraded Plex of a RAID-5 Volume 15-11
Disk Failure Types 15-13

Three Disk Failure Types 15-13
Resolving Permanent Disk Failure 15-14

Volume States After Permanent Disk Failure 15-14

Permanent Disk Failure 15-15 Resolving Permanent Disk Failure: Process 15-15 Volume States After Attaching the Disk Media 15-17 Volume States After Recovering Redundant Volumes 15-18
Resolving Temporary Disk Failure 15-19

Temporary Disk Failure 15-19 Resolving Temporary Disk Failure: Process 15-19 Volume States After Reattaching the Disk 15-21 Volume States After Recovery 15-22
Resolving Intermittent Disk Failure 15-23

Intermittent Disk Failure 15-23 Removing a Failing Drive 15-24 Forced Removal 15-25 The failing Flag 15-26
Summary 15-27

Lesson 16: Plex Problems and Solutions


Introduction 16-2
Displaying State Information for VxVM Objects 16-4

How Volumes Are Created 16-4 Initializing a Volume's Plexes 16-5 Identifying Plex Problems 16-6 Displaying State Information 16-6
Interpreting Plex States 16-8

Plex States 16-8 Condition Flags 16-11


Interpreting Volume States 16-13

Volume States 16-13


Interpreting Kernel States 16-15

Kernel States 16-15


Resolving Plex Problems 16-16

The vxrecover Command 16-17 The vxvol start Command 16-19 The vxmend Command 16-20
Fixing Layered Volumes 16-27
Analyzing Plex Problems 16-28

If the Good Plex Is Known 16-28 If the Good Plex Is Known: Example 16-29 If the Good Plex Is Not Known 16-30 If the Good Plex Is Not Known: Example 16-31
Summary 16-32


Lesson 17: Encapsulation and Boot Disk Mirroring


Introduction 17-2
What Is Disk Encapsulation? 17-4

Disk Encapsulation 17-4 What Is Root Encapsulation? 17-5 Why Encapsulate Root? 17-6 When Not to Encapsulate Root 17-7 Limitations of Root Disk Encapsulation 17-7 File System Requirements for Root Volumes 17-8 Encapsulation Requirements 17-10
Encapsulating the Root Disk 17-11

Encapsulating Root: VEA 17-11 Encapsulating Root: vxdiskadm 17-12


Viewing Encapsulated Disks 17-13

Review: Viewing Disk Information 17-13 VTOC: Before and After Encapsulating Root Disk 17-13 VTOC: Before and After Data Disk Encapsulation 17-14 /etc/system 17-15 /etc/vfstab: Before Root Encapsulation 17-16 /etc/vfstab: After Root Encapsulation 17-17
Creating an Alternate Boot Disk 17-18

Mirroring the Root Disk 17-18 Requirements for Mirroring the Root Disk 17-18 Why Create an Alternate Boot Disk? 17-19 Possible Boot Disk Errors 17-20 Booting from Alternate Mirror 17-21 Creating an Alternate Boot Disk: VEA 17-23 Creating an Alternate Boot Disk: vxdiskadm 17-24 Creating an Alternate Boot Disk: CLI 17-25 Which Root Disk Is Booting? 17-26
Unencapsulating a Root Disk 17-28

The vxunroot Command 17-28
Upgrading to a New VxVM Version 17-30

General Notes on Upgrades 17-30 Other Notes on Upgrades 17-31 Scripts Used in VxVM Upgrades 17-32 What Does the upgrade_start Script Do? 17-32 What Does the upgrade_finish Script Do? 17-33 Upgrading Volume Manager Only 17-34 Upgrading VxVM from SUNWvxvm 17-37 Upgrading Solaris Only 17-38 Upgrading VxVM and Solaris 17-40 After Upgrading 17-42


Upgrading to a New VxFS Version 17-43

Upgrading the VxFS Version 17-43 Before You Upgrade 17-43 Upgrading VxFS Only 17-44 Upgrading VxFS and Solaris 17-45 Upgrading Solaris Only 17-45
Summary 17-46

Lesson 18: VxVM, Boot Disk, and rootdg Recovery


Introduction 18-2
Solaris Boot Process 18-4

Solaris Boot Process Overview 18-4 Phase 1: Boot PROM Phase 18-5 Phase 2: Boot Program Phase 18-6 Phase 3: Kernel Initialization Phase 18-7 Phase 4: The /sbin/init Phase 18-8 VxVM Startup: Single-User Scripts 18-9 VxVM Startup: Multiuser Scripts 18-15
Troubleshooting the Boot Process 18-16

Files Used in the Boot Process 18-16 Troubleshooting: The Boot Device Cannot Be Opened 18-17 Troubleshooting: Invalid UNIX Partition 18-19 Troubleshooting: VxVM Startup Scripts Exit Without Initialization 18-20 Troubleshooting: Invalid or Missing /etc/system File 18-21 Troubleshooting: Unable to Boot from Unusable or Stale Plexes 18-25 Troubleshooting: Conflicting Host ID in the volboot File 18-27 Troubleshooting: File System Corruption 18-29 Troubleshooting: Root File System Mounted As Read-Only 18-30 Troubleshooting: Corrupted, Missing, or Expired License Keys 18-31 Troubleshooting: Missing or Misnamed /var/vxvm/tempdb 18-33 Troubleshooting: Debugging with vxconfigd 18-34
Root Disk Encapsulation 18-36

Root Disk Encapsulation: Purpose 18-36 Before Encapsulating the Root Disk 18-36 Initializing VxVM: Normal Process 18-36 Encapsulation Example: Root Disk with Space at the End of the Drive 18-37 Encapsulation Example: Root Disk with No Free Space on the Disk 18-39 Initializing VxVM: Recovery Process 18-41 Ensuring Consistent Layouts 18-41
Creating an Emergency Boot Disk 18-43

Why Create an Emergency Boot Disk? 18-43 Emergency Boot Disk Creation 18-44 Booting from an Emergency Boot Disk 18-46


Recovering rootdg 18-47

Temporarily Importing rootdg 18-47 VxVM rootdg Failure and Recovery Scenarios 18-49 VxVM rootdg Failure and Recovery Solutions 18-57
Summary 18-63

Lesson 19: Administering DMP (Self Study)


Introduction 19-2
Discovering Disk Devices 19-4

What Is Device Discovery? 19-4 Discovering and Configuring Disk Devices 19-4 Adding Support for a New Disk Array 19-5 Scanning for Disks 19-5 Removing Support for a Disk Array 19-6
Administering the Device Discovery Layer 19-7

Listing Supported Disk Arrays 19-7 Excluding Support for a Disk Array 19-7 Reincluding Support for an Excluded Disk Array 19-8 Listing Excluded Disk Arrays 19-8 Listing Supported JBODs 19-8 Adding Support for JBODs 19-8 Removing Support for JBODs 19-8
Dynamic Multipathing 19-9
What Is Dynamic Multipathing? 19-9 Benefits of DMP 19-9 Enabling DMP 19-10 Identifying DMP-Supported Arrays 19-10 What Is a Multiported Disk Array? 19-11 Active/Active Disk Arrays 19-11 Active/Passive Disk Arrays 19-12
Preventing Multipathing for a Device 19-13

Excluding Devices from Multipathing 19-14 Including Devices for Multipathing 19-15
Managing DMP 19-16

Listing Controllers on a System 19-18 Displaying the Paths Controlled by DMP Node 19-20 Displaying the DMP Node That Controls a Path 19-22 Enabling or Disabling I/O to a Controller 19-23 Listing Information About Enclosures 19-25 Renaming an Enclosure 19-25
Controlling Automatic Restore Processes 19-27

DMP Restore Daemon 19-27 Starting the DMP Restore Daemon 19-27 Checking the Status of the Restore Daemon 19-28 Stopping the DMP Restore Daemon 19-28 Example: Changing Restore Daemon Properties 19-28
Summary 19-29

Lesson 20: Controlling Users (Self Study)


Introduction 20-2
Who Uses Quotas? 20-4

Benefits of Quotas 20-4 Examples: Organizations That Use Quotas 20-4
Quota Limits 20-5

Types of Quota Limits 20-5 Effect of Quota Limits 20-6


Quota Commands 20-7

The Quota Files 20-7 Internal vs. External Quota Files 20-7 API for Manipulating Disk Quotas 20-8 VxFS Quota Commands 20-9 Quota mount Option 20-9
Setting Quotas 20-10

Overview: How to Set User and Group Quotas 20-10 Step 1: Create the quotas and quotas.grp Files 20-10 Step 2: Turn On Quotas 20-11 Step 3: Invoke the Quota Editor 20-11 Step 4: Modify Quota Limits 20-12 Step 5: Edit the Time Limit 20-12 Step 6: Confirm Quota Changes 20-14 Turning Off Quotas 20-14
Controlling User Access 20-15

What Are ACLs? 20-15 Example: Using ACLs 20-15


Setting ACLs 20-16

The setfacl Command 20-16 Examples: Setting ACLs 20-17
Viewing ACLs 20-18

The getfacl Command 20-18 Example: Viewing ACLs 20-18 Example: Setting the Same ACL on Two Files 20-18

Summary 20-19


Volume III
Appendix A: Lab Exercises
Lab 1: Virtual Objects A-2
Lab 2: Installing VERITAS Foundation Suite A-9
Lab 3: VxVM Interfaces A-11
Lab 4: Managing Disks A-14
Lab 5: Managing Disk Groups A-15
Lab 6: Creating a Volume A-22
Lab 7: Configuring Volumes A-25
Lab 8: Volume Maintenance A-27
Lab 9: Setting Up a File System A-30
Lab 10: Online File System Administration A-33
Lab 11: Defragmenting a File System A-36
Lab 12: Intent Logging A-38
Lab 13: Architecture A-42
Lab 14: Introduction to Recovery A-44
Lab 15: Disk Problems and Solutions A-52
Lab 16: Plex Problems and Solutions A-59
Lab 17: Encapsulation and Root Disk Mirroring A-66
Lab 18: VxVM, Boot Disk, and rootdg Recovery A-69
Lab 19: Administering DMP (Optional) A-74
Lab 20: Controlling Users (Optional) A-79

Appendix B: Lab Solutions


Lab 1 Solutions: Virtual Objects B-2
Lab 2 Solutions: Installing VERITAS Foundation Suite B-9
Lab 3 Solutions: VxVM Interfaces B-13
Lab 4 Solutions: Managing Disks B-19
Lab 5 Solutions: Managing Disk Groups B-24
Lab 6 Solutions: Creating a Volume B-33
Lab 7 Solutions: Configuring Volumes B-41
Lab 8 Solutions: Volume Maintenance B-47
Lab 9 Solutions: Setting Up a File System B-55
Lab 10 Solutions: Online File System Administration B-59
Lab 11 Solutions: Defragmenting a File System B-64
Lab 12 Solutions: Intent Logging B-68
Lab 13 Solutions: Architecture B-72
Lab 14 Solutions: Introduction to Recovery B-74
Lab 15 Solutions: Disk Problems and Solutions B-85
Lab 16 Solutions: Plex Problems and Solutions B-98
Lab 17 Solutions: Encapsulation and Root Disk Mirroring B-110
Lab 18 Solutions: VxVM, Boot Disk, and rootdg Recovery B-113
Lab 19 Solutions: Administering DMP (Optional) B-124
Lab 20 Solutions: Controlling Users (Optional) B-131

Appendix C: VxVM Command Quick Reference


Locations of VERITAS Volume Manager Commands C-2
VxVM Command Quick Reference C-4

Disk Operations C-4 Disk Group Operations C-4 Subdisk Operations C-5 Plex Operations C-5 Volume Operations C-5 DMP, DDL, and Task Management C-7
Using VxVM Commands: Examples C-8

Appendix D: VxFS Command Quick Reference


Locations of VERITAS File System Commands D-2
Installing VERITAS File System D-4
Setting Up a File System D-4
Online Administration D-5
Benchmarking D-6
Defragmenting a File System D-6
Managing Extents D-7
Intent Logging D-7
I/O Types and Cache Advisories D-8
File System Tuning D-9
Controlling Users D-10
QuickLog D-11
Quick I/O D-12

Appendix E: VERITAS Enterprise Administrator Quick Reference


General VEA Administration and Use E-2
Disk Operations E-2
Disk Group Operations E-3
Volume Operations E-3
Viewing Objects and Properties E-4
Managing Tasks E-4
Subdisk Operations E-5
File System Operations E-5

Appendix F: VMSA Reference


Volume Manager Storage Administrator F-2
Using the VMSA Main Window F-3
Object Tree F-4
Grid F-5
Menu Bar F-6
Toolbar F-7
Status Area F-8
Command Launcher F-9
Other Views in VMSA F-10
Object View Window Components F-12
VMSA Properties File F-17

Appendix G: Volume Manager Tunable Parameters


Tunables for the VxVM System I/O Driver G-2
Viewing Tunable Parameters G-2
Setting Tunable Parameters G-2
Tunables for the VxVM DMP Driver G-7

Appendix H: Troubleshooting Quick Reference


Disk Failures and Solutions H-3 Volume and Plex State Problems and Solutions H-4

Appendix I: Volume Manager Start-Up Scripts


The /etc/rcS.d Directory I-3 The /etc/rc2.d Directory I-4

Appendix J: Operating VxVM and VxFS in a Linux Environment


Introduction J-2
VERITAS Solutions for Enterprise Linux J-4
Comparing Linux and Solaris J-5

Linux: Open Source Operating System J-5 Linux Device Naming J-6
VxVM and VxFS for Linux: Supported Features J-7

Supported Features: VxVM J-7 Supported Features: VxFS J-9


VxVM and VxFS for Linux: Installation Prerequisites J-10

Product Versions and Supported Kernels J-10 Obtaining RedHat Kernels J-11 Installing Linux Patches J-11 Staying Informed J-11 Confirming Sufficient Memory and Space J-12 Checking the Disks on Your System J-13 Obtaining a License Key J-13
Installing VxVM and VxFS on Linux J-14

The rpm Command J-14

VxVM and VxFS Packages J-14 Adding VxVM and VxFS for Linux Packages J-14 Verifying Package Installation J-16 Running vxinstall J-16
Operating VxVM on Linux J-17

Location of VxVM Commands J-17 Location of VxVM Manual Pages J-17 Administering the Device Discovery Layer J-17 Changing Tunable Parameters J-18

Operating VxFS on Linux J-19

Location of VxFS Commands J-19 Location of VxFS Manual Pages J-19 Running VxFS-Specific Commands J-19 Mounting a File System Automatically J-20 Unsupported Command Options J-20 Other Administrative Notes J-20
Summary J-21

Additional Resources J-21
Glossary
Index


VERITAS Education Solutions


Welcome to: VERITAS Foundation Suite: Administration and Troubleshooting

This is the first course in a two-part learning path designed to help you make the most of VERITAS Foundation Suite, a combination of VERITAS Volume Manager and VERITAS File System, by implementing a high-performance foundation for your data protection, high availability, and disaster recovery solutions.

VERITAS Foundation Suite Learning Path

VERITAS Foundation Suite: Administration and Troubleshooting
This course covers: installation, configuration, online administration, recovery, and troubleshooting.

VERITAS Foundation Suite (Advanced): Performance and Tuning
This course covers: performance tuning, file system tuning, QuickLog and Quick I/O, storage checkpointing, and off-host processing.

After completing the Foundation Suite learning path, to continue building your storage management skills, VERITAS recommends:

VERITAS Cluster Server Suite
Learn how to create a highly available environment through clustering with VERITAS Cluster Server.

VERITAS SANPoint Foundation Suite HA
Learn how to extend VERITAS File System and VERITAS Volume Manager so that multiple servers can share access to SAN storage.

VERITAS Volume Replicator
Learn how to implement data replication in your disaster recovery strategy with VERITAS Volume Replicator.

For the most up-to-date information on VERITAS Education Solutions offerings, visit http://www.veritas.com.


Course Introduction

Storage Management Issues


(Slide: Human Resource Database 10% full, E-mail Server 50% full, Customer Order Database 90% full. Problem: The customer order database cannot access unutilized storage. Common solution: Add more storage. Other issues: multiple-vendor hardware, explosive data growth, different application needs, multiple operating systems, rapid change, budgetary constraints.)

What Is Storage Virtualization?


Storage Management Issues

Storage management is becoming increasingly complex due to:
- Multiple operating systems
- Unprecedented data growth
- Storage hardware from multiple vendors
- Dissimilar applications with different storage resource needs
- Management pressure to increase efficiency
- Budgetary and cost-control constraints
- Rapidly changing business climates

To create a truly efficient environment, administrators must have the tools to skillfully manage large, complex, and heterogeneous environments. Storage virtualization helps businesses to simplify the complex IT storage environment and gain control of capital and operating costs by providing consistent and automated management of storage.


What Is Storage Virtualization?

(Slide: Virtualization is the logical representation of physical storage across the entire enterprise. Consumers place application requirements on storage, such as growth potential, failure resistance, throughput, responsiveness, and recovery time. These map to the physical aspects of storage resources: capacity (disk size, number of disks per path), performance (disk seek time, cache hit rate), and availability (MTBF, path redundancy).)

Defining Storage Virtualization

Storage virtualization is the process of taking multiple physical storage devices and combining them into logical (virtual) storage devices that are presented to the operating system, applications, and users. Storage virtualization builds a layer of abstraction above the physical storage, so that data is not restricted to specific hardware devices, creating a flexible storage environment. Storage virtualization simplifies the management of storage and potentially reduces cost through improved hardware utilization and consolidation.

With storage virtualization, the physical aspects of storage are masked from users. Administrators can concentrate less on the physical aspects of storage and more on delivering access to necessary data. Benefits of storage virtualization include:
- Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
- Increased application return on investment through improved throughput and increased uptime
- Lower hardware costs through the optimized use of hardware resources


Storage Virtualization: Types

(Slide: Three types of storage virtualization: storage-based (servers attached directly to storage), host-based (a server attached to storage through a switch), and network-based (servers attached to storage through a SAN). Most companies use a combination of these three types of storage virtualization to support their chosen architectures and application requirements.)

How Is Storage Virtualization Used in Your Environment?

The way in which you use storage virtualization, and the benefits derived from it, depend on the nature of your IT infrastructure and your specific application requirements. The three main types of storage virtualization used today are:
- Storage-based
- Host-based
- Network-based

Most companies use a combination of these three types of storage virtualization to support their chosen architecture and application needs. The type of storage virtualization that you use depends on factors such as:
- Heterogeneity of deployed enterprise storage arrays
- Need for applications to access data contained in multiple storage devices
- Importance of uptime when replacing or upgrading storage
- Need for multiple hosts to access data within a single storage device
- Value of the maturity of technology
- Investments in a SAN architecture
- Level of security required
- Level of scalability needed


Storage-Based Storage Virtualization


(Slide: Servers on a LAN attach to a Brand A disk array and a Brand B disk array. Disks within an individual array are presented virtually to multiple servers.)

Storage-Based Storage Virtualization

Storage-based storage virtualization refers to disks within an individual array that are presented virtually to multiple servers. Storage is virtualized by the array itself. For example, RAID arrays virtualize the individual disks contained within the array into logical LUNs, which host operating systems access using the same method of addressing as a directly attached physical disk. This type of storage virtualization is useful under these conditions:
- You need to have data in an array accessible to servers running different operating systems.
- All of a server's data needs are met by storage contained in the physical box.
- You are not concerned about disruption to data access when replacing or upgrading the storage.

The main limitation of this type of storage virtualization is that data cannot be shared between arrays, creating islands of storage that must be managed.


Host-Based Storage Virtualization


(Slide: A host server on a LAN attaches through a switch to a Brand A disk array, a Brand B disk array, and a JBOD. Disks within multiple arrays and from multiple vendors are presented virtually to a single host server.)

Host-Based Storage Virtualization

Host-based storage virtualization refers to disks within multiple arrays and from multiple vendors that are presented virtually to a single host server. For example, software-based solutions, such as VERITAS Foundation Suite, provide host-based storage virtualization. Using VERITAS Foundation Suite to administer host-based storage virtualization is the focus of this training. Host-based storage virtualization is useful under these conditions:
- A server needs to access data stored in multiple storage devices.
- You need the flexibility to access data stored in arrays from different vendors.
- Additional servers do not need to access the data assigned to a particular host.
- Maturity of technology is a highly important factor in your IT decisions.

Note: By combining VERITAS Foundation Suite with clustering technologies, such as the Cluster Volume Manager available with SANPoint Foundation Suite HA, storage can be virtualized to multiple hosts of the same operating system.
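To make this concrete, here is a minimal sketch, assuming a hypothetical disk group named datadg. The vxdisk list command displays every disk device known to VxVM on the host, whichever vendor's array it resides in, and vxprint displays the virtual objects built from those disks; both utilities are covered in detail in later lessons.

# vxdisk list
# vxprint -g datadg -ht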


Network-Based Storage Virtualization


(Slide: Application servers on a LAN and a SAN attach through an appliance and switch to a Brand A disk array, a Brand B disk array, and a JBOD. Disks from multiple arrays and vendors are presented virtually to multiple servers.)

Network-Based Storage Virtualization

Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers. For example, VERITAS ServPoint and SANPoint solutions build on the storage virtualization platform provided by VERITAS Foundation Suite to enable network-based storage virtualization. Network-based storage virtualization is useful under these conditions:
- You need to have data accessible across heterogeneous servers and storage devices.
- You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
- You want to ensure that replacing or upgrading storage does not disrupt data access.
- You want to virtualize storage to provide block services to applications.

Configuring and administering network-based storage virtualization is beyond the scope of this training. For more information on this type of storage virtualization, look for these VERITAS Education offerings:
- VERITAS Storage Area Network (SAN) Fundamentals
- VERITAS ServPoint SAN for Solaris
- VERITAS ServPoint NAS for Solaris
- VERITAS SANPoint Control


VERITAS Foundation Suite


(Slide: VERITAS Foundation Suite provides host-based storage virtualization for performance, availability, and manageability benefits in enterprise computing environments. The solution stack, from top to bottom: company business process; high availability: VERITAS Cluster Server/Replication; application solutions: VERITAS Editions; data protection: VERITAS NetBackup/Backup Exec; Volume Manager and File System: VERITAS Foundation Suite; hardware and operating system.)

Introducing VERITAS Foundation Suite


VERITAS storage management solutions address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments. At the heart of these solutions is VERITAS Foundation Suite, which includes VERITAS Volume Manager (VxVM), VERITAS File System (VxFS), and other value-added products. Independently, these components provide key benefits. When used together as an integrated solution, VxVM and VxFS deliver the highest possible levels of performance, availability, and manageability for heterogeneous storage environments.


What Is VERITAS Volume Manager?


(Slide: Physical disks are virtualized into volumes, which are presented to users, databases, and applications.)

What Is VERITAS Volume Manager?

VERITAS Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage.

VxVM reduces total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability. VxVM provides a logical volume management layer that overcomes the physical restrictions of hardware disk devices by spanning volumes across multiple spindles. Through its support of RAID redundancy techniques, VxVM protects against disk and hardware failures, while providing the flexibility to extend the capabilities of existing hardware.

Working in conjunction with VERITAS File System, VERITAS Volume Manager creates a foundation for other value-added technologies such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.
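For example, a single vxassist command (introduced in the volume creation lessons) can build a mirrored, striped volume that spans several physical disks. This is a minimal sketch; the disk group and volume names, datadg and datavol, are hypothetical:

# vxassist -g datadg make datavol 2g layout=mirror-stripe

VxVM selects the disks, builds the underlying subdisks and plexes, and presents the result as a single virtual device under /dev/vx/dsk/datadg/datavol.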


What Is VERITAS File System?


(Slide: A file system provides shared data access, structured data access, controlled data access, a common interface, and manageability of data storage, balancing integrity against performance.)

What Is VERITAS File System?

A file system is a collection of directories organized into a structure that enables you to locate and store files. All information processed is eventually stored in a file system. The main purposes of a file system are to:
- Provide shared access to data storage
- Provide structured access to data
- Control access to data
- Provide a common, portable application interface
- Enable the manageability of data storage

The value of a file system depends on its integrity and performance. Integrity: Information sent to the file system must be exactly the same when it is retrieved from the file system. Performance: A file system must not impose undue overhead when responding to I/O requests from applications. In practice, the requirements to provide integrity and performance conflict; therefore, a file system must strike a balance between the two.

VERITAS File System is a powerful, quick-recovery journaling file system that provides the high performance and easy online manageability required by mission-critical applications. VERITAS File System augments UNIX file management with continuous availability and optimized performance. It provides scalable, optimized performance and the capacity to meet the increasing demands of user loads in client/server environments.
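As a brief illustration, a VxFS file system is created and mounted with the standard Solaris commands and the vxfs file system type. This sketch assumes the hypothetical datadg/datavol volume from the previous example and an existing /data mount point:

# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data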


VERITAS Foundation Suite: Benefits


(Slide: Manageability: manage storage and file systems from one interface; configure storage online; VxVM and VxFS are consistent across Solaris, HP-UX, AIX, and Linux. Availability: RAID and hot relocation protect against data loss; online operations eliminate planned downtime. Performance: I/O throughput can be maximized using volume layouts; performance bottlenecks can be located and eliminated using analysis tools. Scalability: VxVM and VxFS run on 32-bit and 64-bit operating systems; storage can be deported to larger enterprise platforms.)

Benefits of VERITAS Foundation Suite

Commercial system availability now requires continuous uptime in many implementations: systems must be available 24 hours a day, 7 days a week, 365 days a year. VERITAS Foundation Suite reduces the cost of ownership by providing scalable manageability, availability, and performance enhancements for these enterprise computing environments.

Manageability
- Management of storage and the file system is performed online in real time, eliminating the need for planned downtime.
- Online volume and file system management can be performed through an intuitive, easy-to-use graphical user interface that is integrated with the VERITAS Volume Manager (VxVM) product.
- VxVM provides consistent management across Solaris, HP-UX, AIX, Linux, and Windows 2000 platforms. VxFS command operations are consistent across Solaris, HP-UX, AIX, and Linux platforms.

Availability
- Integrity of storage is maintained by true mirroring across all write operations.
- Through RAID techniques, storage remains available in the event of hardware failure.
- Data redundancy is maintained by hot relocation, which protects against multiple simultaneous disk failures.

- Recovery time is minimized with logging and background mirror resynchronization.
- Logging of file system changes enables fast file system recovery.
- A snapshot of a file system provides an internally consistent, read-only image for backup.

Performance
- I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online.
- Performance bottlenecks can be located and eliminated using VxVM analysis tools.
- Extent-based allocation of space for files minimizes file-level access time.
- Read-ahead buffering dynamically tunes itself to the pattern of file access.
- Aggressive caching of writes greatly reduces the number of disk accesses.
- Direct I/O performs file I/O directly into and out of user buffers.

Scalability
- VxVM runs on both 32-bit and 64-bit operating systems.
- Storage can be deported to larger enterprise-class platforms.
- Storage devices can be spanned.
- VxVM is fully integrated with VERITAS File System (VxFS).
- With VxFS, several add-on products are available for maximizing performance in a database environment.
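As one example of this online manageability, a volume and the VxFS file system on it can be grown in a single step while the file system remains mounted, using the vxresize utility introduced in the volume maintenance lesson. The names below are hypothetical; on Solaris, vxresize is installed in /etc/vx/bin:

# /etc/vx/bin/vxresize -g datadg datavol +1g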


Course Overview
(Slide: Course map. Introduction: Virtual Objects, Installation, Interfaces. Disk and Volume Administration: Managing Disks, Managing Disk Groups, Creating Volumes, Configuring Volumes, Volume Maintenance. File System Administration: File System Setup, VxFS Administration, Defragmentation, Intent Logging. Recovery and Troubleshooting: Architecture, Recovery, Disk Problems, Plex Problems, Boot Disk Mirroring, Boot Disk Recovery.)

Course Overview
This training provides comprehensive instruction on using the file and disk management foundation products: VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). In this course, you learn how to combine file system and disk management technology to ensure easy management of all storage and maximum availability of essential data.

Course Objectives

After completing this course, you will be able to:
- Install and configure VxVM and VxFS.
- Perform online administration using administration tools.
- Manage disks, disk groups, and volumes.
- Create and manage file system snapshots.
- Monitor file system fragmentation and defragment a file system.
- Control file system logging behavior.
- Monitor file system structure, upgrade the file system layout, and convert a UFS file system to VxFS.
- Perform recovery management techniques, such as backing up and restoring the VxVM configuration, restarting volumes and mirrors, and resolving disk failures.
- Interpret plex, volume, and kernel states.


Course Resources
- Administering DMP (Self-Study Lesson)
- Controlling Users (Self-Study Lesson)
- Lab Exercises (Appendix A)
- Lab Solutions (Appendix B)
- VxVM Command Reference (Appendix C)
- VxFS Command Reference (Appendix D)
- VEA Quick Reference (Appendix E)
- VMSA Quick Reference (Appendix F)
- VxVM Tunable Parameters (Appendix G)
- Troubleshooting Quick Reference (Appendix H)
- VxVM Start-Up Scripts (Appendix I)
- Operating VxVM and VxFS in a Linux Environment (Appendix J)
- Glossary

Additional Course Resources


Self-Study Lessons
Self-study lessons provide additional learning material that you can work through on your own. Self-study lessons included in this training are:
- Administering DMP (Self-Study): This lesson describes how to administer the device discovery layer (DDL) and dynamic multipathing (DMP) features of VxVM.
- Controlling Users (Self-Study): This lesson describes how to implement quotas and access control lists (ACLs) in a VERITAS file system.
Self-study material may be covered during class time at the discretion of the instructor and based on the needs of class attendees. If you have a particular interest in any part of the self-study material, talk to your instructor.

Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons.

Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.

Appendix C: VxVM Command Reference
This section contains a quick reference guide to common VERITAS Volume Manager commands.

Appendix D: VxFS Command Reference
This section contains a listing of all VERITAS File System commands with directory locations and descriptions. This section also contains a quick reference guide to VERITAS File System commands used in performing common tasks.

Appendix E: VERITAS Enterprise Administrator Quick Reference
This section contains a quick reference guide to performing tasks using the VERITAS Enterprise Administrator (VEA) graphical user interface tool.

Appendix F: VMSA Reference
This section contains an overview of using the Volume Manager Storage Administrator (VMSA) graphical user interface tool. The VMSA interface is used with versions of VxVM earlier than version 3.5.

Appendix G: Volume Manager Tunable Parameters
This section contains a summary of kernel parameters that define the behavior of VxVM's I/O drivers.

Appendix H: Troubleshooting Quick Reference
This section contains a summary of possible disk failures and problems with plex and volume states, and solutions for resolving the problems.

Appendix I: Volume Manager Start-Up Scripts
This section contains a summary of the scripts involved in VxVM startup.

Appendix J: Operating VxVM and VxFS in a Linux Environment
This section contains an overview of VxVM and VxFS on the Linux platform and the key differences between VxVM and VxFS on Solaris and on Linux.

Glossary
For your reference, this course includes a glossary of terms related to VERITAS Foundation Suite.

Lesson 1: Virtual Objects

Introduction
Overview
This lesson describes the virtual storage objects that VERITAS Volume Manager (VxVM) uses to manage physical disk storage. This lesson introduces common virtual storage layouts, illustrates how virtual storage objects relate to physical storage objects, and describes the benefits of virtual data storage.

Importance
Before you install and set up VERITAS Foundation Suite, you should be familiar with the virtual objects that VxVM uses to manage physical disk storage. A conceptual understanding of virtual objects helps you to interpret and manage the virtual objects represented in VxVM interfaces, tools, and reports.

Outline of Topics
- Physical Data Storage
- Virtual Data Storage
- Volume Manager Storage Objects
- Volume Manager Storage Layouts

Objectives
After completing this lesson, you will be able to:
- Identify the structural characteristics of a disk that are affected by placing a disk under Volume Manager control.
- Describe the structural characteristics of a disk after it is placed under Volume Manager control.
- Identify the virtual objects that are created by Volume Manager to manage data storage, including disk groups, Volume Manager disks, subdisks, plexes, and volumes.
- Define volume layout and identify virtual storage layout types used by Volume Manager to remap address space.

Physical Data Storage


Physical Storage Objects
The basic physical storage device that ultimately stores your data is the hard disk. When you install Solaris, hard disks are formatted as part of the installation program. Formatting is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk. A formatted disk has a prearranged storage pattern that is designed for the storage and retrieval of data.

Physical Disk Structure
A physical Solaris disk is made up of the following parts:
- VTOC: A Solaris disk has an area called the volume table of contents (VTOC) that stores information about the disk structure and organization. The VTOC is also called the disk label. On a Solaris disk, the VTOC is typically less than 200 bytes and resides on the first sector of the disk. A sector is 512 bytes on most systems. On the boot disk, the boot block resides within the first 16 sectors (8K). The boot block contains instructions that point to where the second stage of the boot process is located.
- Partitions: After the VTOC, the remainder of a Solaris disk is divided into units called partitions. A partition is simply a group of cylinders set aside for a particular use. Information about the size, location, and use of partitions is stored in the VTOC in the partition table.

Another term for a partition is a slice. The terms partition and slice are used interchangeably, although some Solaris utilities, such as the format utility, use only the term partition. By convention, partition 2 refers to the entire disk, including the VTOC. This partition is also referred to as the backup slice.

Physical Disk Naming
You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format c#t#d#, where:
- c# is the controller number.
- t# is the target ID.
- d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, then you also specify the partition number in the device name:
- s# is the partition (slice) number.
For example, the device name c0t0d0s1 refers to controller number 0 in the system, target ID 0, physical disk number 0, and partition number 1 on that disk.
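For example, you can list the device nodes for a disk and display its VTOC with standard Solaris commands (the device name shown is illustrative; substitute one from your own system):

  # ls /dev/dsk/c0t0d0s*
  # prtvtoc /dev/rdsk/c0t0d0s2

The prtvtoc command prints the partition table from the disk label; slice 2 is specified because, by convention, it maps the entire disk.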

Disk Arrays
Performing I/O to physical disks can be a relatively slow process, because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read and write operations are done to individual disks, one at a time, the read-write time can become unmanageable. A disk array is a collection of physical disks. Performing I/O operations on multiple disks in a disk array can improve I/O speed and throughput.

Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports, coupled with the host bus adapter (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. This type of disk array is called a multipathed disk array. You can connect multipathed disk arrays to host systems in many different configurations, such as:
- Connecting multiple ports to different controllers on a single host
- Chaining ports through a single controller on a host
- Connecting ports to different hosts simultaneously

Virtual Data Storage


Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a volume.

What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A volume is made up of space from one or more physical disks on which the data is physically stored.

How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and applications that interact with volumes work in the same way as with physical disks. All users and applications access volumes as contiguous address space using special device files, in a manner similar to accessing a disk partition. Volumes have block and character device nodes in the /dev tree, under /dev/vx/dsk and /dev/vx/rdsk, respectively. You can supply the path to a volume in your commands and programs, in your file system and database configuration files, and in any other context where you would otherwise use the path to a physical disk partition.
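For example, a volume device path can be used wherever a partition device path would be used, such as when creating and mounting a file system (the disk group, volume, and mount point names here are illustrative):

  # mkfs -F vxfs /dev/vx/rdsk/acctdg/expvol
  # mount -F vxfs /dev/vx/dsk/acctdg/expvol /expenses

The character (raw) device under /dev/vx/rdsk is used by mkfs, and the block device under /dev/vx/dsk is used by mount, just as with physical disk partitions.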

Why Use Volume Manager?
Benefits of using Volume Manager for virtual storage management include:
- Disk spanning: By using volumes and other virtual objects, Volume Manager enables you to span data over multiple physical disks. The process of logically combining physical devices to enable data to be stored across multiple devices is called spanning.
- Load balancing: Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks improves I/O performance by increasing data transfer speed and overall throughput for the array.
- Complex multidisk configurations: Volume Manager virtual objects enable you to create complex disk configurations in multidisk systems that enhance performance and reliability. Multidisk configurations, such as striping, mirroring, and RAID-5 configurations, can provide data redundancy, performance improvements, and high availability.
- Online administration: Volume Manager uses virtual objects to perform administrative tasks on disks without interrupting service to applications and users.
- High availability: Volume Manager includes automatic failover and recovery features that ensure continuous access to critical data. Volume Manager can move collections of disks between hosts (disk group import and deport), automatically relocate data in case of disk failure (hot relocation), and automatically detect and use multipathed disk arrays (dynamic multipathing, or DMP).
Volume Manager-Controlled Disks
With Volume Manager, you enable virtual data storage by bringing a disk under Volume Manager control. To bring a disk under Volume Manager control means that Volume Manager creates virtual objects and establishes logical connections between those objects and the underlying physical objects, or disks. When a disk is brought under Volume Manager control:

1. Volume Manager removes all of the partition table entries from the VTOC, except for partition table entry 2 (the backup slice). Partition table entry 2 covers the entire disk, including the VTOC, and is used to determine the size of the disk.
Note: The boot disk is a special case and is discussed in a later lesson.

2. Volume Manager then rewrites the VTOC and creates two partitions on the physical disk. One partition contains the private region, and the other contains the public region.
- Private region: The private region stores information, such as disk headers, configuration copies, and kernel logs, that Volume Manager uses to manage virtual objects. The private region represents a small management overhead. The default size of the private region is 2048 blocks (sectors), and the maximum size is 524288 blocks (sectors). With 512-byte blocks, the default size is 1048576 bytes (1 MB), and the maximum size is 268435456 bytes (256 MB).

- Public region: The public region consists of the remainder of the space on the disk. The public region represents the available space that Volume Manager can use to assign to volumes and is where an application stores data. Volume Manager never overwrites this area unless specifically instructed to do so.

By convention, the public region of a Volume Manager-controlled disk is referred to as a Volume Manager disk, or VxVM disk; strictly speaking, a VxVM disk is the entire Volume Manager-controlled disk.

Partition Tags
VxVM sets the partition tags, the numeric values that describe the intended use of a partition, for the public and private regions:
- Tag 14 is always used for the public region of the disk.
- Tag 15 is always used for the private region of the disk.
If no existing partitions on the disk are being placed under Volume Manager control, then Volume Manager creates the private region first, and the public region second, on the disk.

3. Once the disk is under Volume Manager control, VxVM updates the VTOC with information about the removal of the existing partitions and the addition of the two new partitions, which correspond to the public and private regions.
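After initialization, the two regions are visible in the disk's VTOC. The output below is illustrative (sector counts depend on the disk and on the private region size); note the tag values 15 and 14 on the private and public region partitions:

  # prtvtoc /dev/rdsk/c1t0d0s2
  * Partition  Tag  Flags  First Sector  Sector Count  Last Sector
         2      5    01           0        17682084      17682083
         3     15    01           0            2048          2047
         4     14    01        2048        17680036      17682083

Tag 5 on partition 2 is the standard Solaris tag for the backup slice.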

Volume Manager Storage Objects


Virtual Objects
A volume is a virtual object that records and retrieves data from one or more physical disks. Volumes are one of a variety of virtual object types used by Volume Manager for storage management. Volume Manager virtual objects include:
- Disk groups
- Volume Manager disks
- Subdisks
- Plexes
- Volumes

Disk Groups
A disk group is a collection of VxVM disks. You group disks into disk groups for management purposes, such as to hold the data for a specific application or set of applications. For example, data for accounting applications can be organized in a disk group called acctdg. A configuration database is a set of records with detailed information about all of the Volume Manager objects in a disk group, including object attributes and their connections.

Disk groups are configured by the system administrator and represent management and configuration boundaries. Volume Manager objects cannot span disk groups: a volume's subdisks, plexes, and disks must be derived from the same disk group as the volume. You can create additional disk groups as necessary to organize disks into logical collections.

Disk groups enable high availability, because a disk group and its components can be moved as a unit from one host machine to another. Disk drives can be shared by two or more hosts, but can be accessed by only one host at a time. If one host crashes, the other host can take over the failed host's disk drives, as well as its disk groups.
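For example, from the command line you can initialize a disk for VxVM use with the vxdisksetup utility (in /etc/vx/bin), create a disk group containing it, and list the disk groups on the system. The device and object names below are illustrative:

  # /etc/vx/bin/vxdisksetup -i c1t0d0
  # vxdg init acctdg acctdg01=c1t0d0
  # vxdg list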

Volume Manager Disks
A Volume Manager (VxVM) disk is created from the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. The disk media name is a logical name used for Volume Manager administrative purposes; Volume Manager uses the disk media name when assigning space to volumes.

A VxVM disk is given a disk media name when it is added to a disk group. You can supply the disk media name or allow Volume Manager to assign a default name, which typically takes the form diskgroup##, where diskgroup is the name of the disk group. The disk media name is stored with a unique disk ID to avoid name collision. Once a VxVM disk is assigned a disk media name, the disk is no longer referred to by its physical address of c#t#d#; the physical address becomes known as the disk access record.

Notes on VxVM Disk Naming
The rootdg disk group is a special disk group that follows a different set of naming conventions. For disks in the rootdg disk group, the default VxVM disk names are disk01, disk02, and so on. If you use the command line utilities to administer Volume Manager, then the device nodes are used for the disk names, unless you specify the disk media name in the command.
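The vxdisk list command shows how disk access records map to disk media names and disk groups. Output along these lines (names illustrative) is typical:

  # vxdisk list
  DEVICE       TYPE      DISK         GROUP        STATUS
  c0t0d0s2     sliced    acctdg01     acctdg       online
  c1t0d0s2     sliced    acctdg02     acctdg       online
  c2t0d0s2     sliced    -            -            online

A disk that is under VxVM control but has not yet been added to a disk group shows a dash in the DISK and GROUP columns.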
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of contiguous disk blocks that represents a specific portion of a VxVM disk; it is a subsection of a disk's public region. A subdisk is the smallest unit of storage in Volume Manager, and subdisks are the building blocks for Volume Manager objects.

A subdisk is defined by an offset and a length in sectors on a VxVM disk. The default name for a subdisk takes the form DMname-##: the VxVM disk media name, a hyphen, and a two-digit number. A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VxVM disk. Any VxVM disk space that is not reserved or that is not part of a subdisk is free space, which you can use to create new subdisks.

Conceptually, a subdisk is similar to a partition: both divide a disk into pieces defined by an offset address and a length, and each of those pieces represents contiguous space on the physical disk. However, there is one important distinction: a disk can have at most eight partitions, whereas there is no theoretical limit to the number of subdisks that can be attached to a single plex.

Note: The number of subdisks per plex is limited by default to a value of 4096. If required, you can change this default by using the vol_subdisk_num tunable parameter. For more information on tunable parameters, see the VERITAS Volume Manager System Administrator's Guide.
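You can display the subdisk records in a disk group with vxprint; the -st options select subdisks. The output below is illustrative:

  # vxprint -g acctdg -st
  SD NAME          PLEX        DISK      DISKOFFS  LENGTH   [COL/]OFF  DEVICE   MODE
  sd acctdg01-01   expvol-01   acctdg01  0         409600   0          c0t0d0   ENA

Here acctdg01-01 is a 409600-sector (200 MB) subdisk at offset 0 of the VxVM disk acctdg01, associated with the plex expvol-01.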
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a structured or ordered collection of subdisks that represents one copy of the data in a volume. A plex consists of one or more subdisks located on one or more physical disks.

A plex is also called a mirror. The terms plex and mirror can be used interchangeably, even though a plex is only one copy of the data; the terms mirrored or mirroring imply two or more copies of data.

The length of a plex is determined by the last block that can be read or written on the last subdisk in the plex. Plex length may not equal volume length to the exact sector, because the plex is aligned to a cylinder boundary.

The default naming convention for plexes in a volume is volumename-##: the volume name, a hyphen, and a two-digit number.

Plex Types
Plexes can be categorized into three types:
- Complete plex: A complete plex holds a complete copy of a volume and therefore maps the entire address space of the volume. A volume must have at least one complete plex. Most plexes in VxVM are complete plexes. For example, if a volume is 1 MB in length, then the complete plex must also be at least 1 MB in length, and the 1 MB of address space must be mapped to one or more subdisks whose combined length adds up to 1 MB with no gaps in the address space.
- Sparse plex: A sparse plex is a plex that has a length that is less than the length of the volume or that maps to only part of the address space of a volume. Sparse plexes are not commonly used in newer VxVM versions. In older VxVM versions, sparse plexes were used for performance improvement. For example, a RAM disk uses a sparse plex to map to a hot spot within a volume to improve read performance; the RAM disk must be of sufficient size and offset to cover the hot spot and does not need to map to the whole volume.
- Log plex: A log plex is a plex that is dedicated to logging. A log plex is used to speed up data consistency checks and repairs after a system failure. RAID-5 and mirrored volumes typically use a log plex.
A volume must have at least one complete plex that has a complete copy of the data in the volume with at least one associated subdisk. Other plexes in the volume can be complete, sparse, or log plexes. A volume can have up to 32 plexes; however, you should never use more than 31 plexes in a single volume, because Volume Manager requires one plex for automatic or temporary online operations.
Volumes
A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. Due to its virtual nature, a volume is not restricted by the physical size constraints that apply to a physical disk. A volume is composed of one or more plexes and can span multiple disks; the data in a volume is stored on subdisks of the spanned disks. A volume must be configured from VxVM disks and subdisks within the same disk group.

Volume Manager uses the default naming convention vol## for volumes, where ## represents a two-digit number. You can assign meaningful volume names that reflect the nature or use of the data in the volumes. For example, two volumes in acctdg can be expvol, a volume that contains expense data, and payvol, a volume that contains payroll data.
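For example, the vxassist command can create a volume from available space in a disk group, and vxprint can display the resulting object hierarchy. The names and size here are illustrative:

  # vxassist -g acctdg make expvol 100m
  # vxprint -g acctdg -ht expvol

vxassist automatically builds the subdisks and plex that make up the volume, so you rarely need to create those objects individually.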


Example: Writing to a Volume


VxVM Disks datadg01
datadg01-01 20 MB datadg01-02 datadg1-03

Volume vol01
datadg01-02 datadg02-03

datadg02
datadg02-01 datadg02-02

0 20 MB 70 MB

vol01-01

50 MB datadg02-03 Disk Group: datadg


FOS35_Sol_R1.0_20020930 1-16

Copyright 2002 VERITAS

Writing to a Volume
In the example, the plex named vol01-01 consists of two subdisks, each from a different VxVM disk:
- datadg01-02, which is 20 MB in size, comes from the VxVM disk datadg01.
- datadg02-03, which is 50 MB in size, comes from the VxVM disk datadg02.
The length of a plex is defined by the last accessible byte in the plex, so in the example, the plex length is 70 MB. The first subdisk occupies the first 20 MB of address space in the plex, and the second subdisk occupies the next 50 MB of address space, from 20 MB to 70 MB.

If an application writes to the bytes located at 10 MB into the plex, then the data is written to subdisk datadg01-02, starting at 10 MB from the beginning of the subdisk, which also happens to be 10 MB from the beginning of the plex.

If an application writes to the bytes located at 60 MB into the plex, then the data is written to subdisk datadg02-03, starting at 40 MB from the beginning of that subdisk: the subdisk begins 20 MB into the plex, and 20 MB plus 40 MB equals 60 MB.

Volume Manager Storage Layouts


Volume Layouts
A volume's layout refers to the organization of plexes in a volume: the way plexes are configured to remap the volume address space through which I/O is redirected at run time. Volume layouts are based on the concept of disk spanning, the ability to logically combine physical disks in order to store data across multiple disks.

A variety of volume layouts is available, and each layout has different advantages and disadvantages. The layouts that you choose depend on the levels of performance and reliability required by your system. With Volume Manager, you can change the volume layout without disrupting applications or file systems that are using the volume: a volume layout can be configured, reconfigured, resized, and tuned while the volume remains accessible.

Supported volume layouts include:
- Concatenated
- Striped
- Mirrored
- RAID-5
- Layered

Concatenated
In a concatenated volume, subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk.

Striped
In a striped volume, data is spread evenly across multiple disks. Stripes are equally sized fragments that are allocated alternately and evenly to the subdisks of a single plex. There must be at least two subdisks in a striped plex, each of which must exist on a different disk. Throughput increases with the number of disks across which a plex is striped. Striping helps to balance I/O load in cases where high traffic areas exist on certain subdisks.

Mirrored
A mirrored volume uses multiple plexes to duplicate the information contained in a volume. Although a volume can have a single plex, at least two are required for true mirroring (redundancy of data). Each of these plexes should contain disk space from different disks for the redundancy to be useful.

RAID-5
A RAID-5 volume uses striping to spread data and parity evenly across multiple disks in an array. Each stripe contains a parity stripe unit and data stripe units. Parity can be used to reconstruct data if one of the disks fails. In comparison to the performance of striped volumes, write throughput of RAID-5 volumes decreases, because parity information needs to be updated each time data is written. However, in comparison to mirroring, the use of parity reduces the amount of space required.

Layered
A layered volume is a virtual Volume Manager object that nests other virtual objects inside of itself. Layered volumes provide better redundancy by mirroring data at a more granular level.
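The layout is selected when a volume is created. With vxassist, for example, the layout and its parameters are given as attributes (the disk group, volume names, and sizes here are illustrative):

  # vxassist -g acctdg make stripevol 1g layout=stripe ncol=3
  # vxassist -g acctdg make mirrvol 500m layout=mirror nmirror=2
  # vxassist -g acctdg make raidvol 2g layout=raid5

Later lessons cover these layouts, and the commands that create them, in detail.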

Summary
This lesson described the virtual storage objects that VERITAS Volume Manager uses to manage physical disk storage. This lesson introduced common virtual storage layouts, illustrated how virtual storage objects relate to physical storage objects, and described the benefits of virtual data storage.

Next Steps
You are now familiar with Volume Manager objects and how virtual objects relate to physical disks when a disk is controlled by Volume Manager. In the next lesson, you will install and set up VERITAS Foundation Suite.

Additional Resources
VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.

Lab 1: Virtual Objects


Goal
In this theoretical exercise, you explore the relationship between Volume Manager objects and physical disks by determining how data in a volume maps to a physical disk.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.

Lesson 2: Installing VERITAS Foundation Suite

Introduction
Overview
This lesson describes guidelines and considerations for planning a first-time installation of VERITAS Foundation Suite. This lesson includes procedures for adding license keys, adding the VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS) software packages, and running the VxVM installation program.

Importance
Before you install VERITAS Foundation Suite, you need to be aware of the contents of your physical disks and decide how you want Volume Manager to handle those disks. By following these installation guidelines, you can ensure that you set up VERITAS Foundation Suite in a way that meets the needs of your environment.

Outline of Topics
- Installation Prerequisites
- VxVM and VxFS Software Packages
- Adding License Keys
- Adding Foundation Suite Packages
- Planning VxVM Setup
- Installing VxVM for the First Time

Objectives
After completing this lesson, you will be able to:
- Identify operating system compatibility for VxVM and VxFS installation and resources for locating up-to-date product release and patch information.
- Describe the VxVM, VxFS, and VEA software packages, package space requirements, and optionally licensable feature sets.
- Obtain license keys through the vLicense Web site, add license keys by using the vxlicinst command, and view license keys by using the vxlicrep command.
- Add the VxVM and VxFS software packages interactively, by using the Installer utility, and manually, by using the pkgadd command.
- Plan an initial setup of VxVM by determining the desired characteristics of a first-time installation, such as whether to encapsulate the boot disk, whether to place other disks into the rootdg disk group, and whether to use enclosure-based naming.
- Configure a first-time installation of VxVM by using the vxinstall program.

Installation Prerequisites
OS Version Compatibility
Before performing installation procedures, ensure that the version of VxVM and VxFS that you are installing is compatible with the version of the Solaris operating system that you are running. VxVM release 3.5 operates on Solaris 2.6, 7, 8, and 9 (32-bit and 64-bit). If you are running Solaris 2.5.1 or an earlier version, you need to upgrade your Solaris operating system before you install VxVM 3.5. The following table shows the compatibility of VxVM versions with Solaris versions:

VxVM Version    Supported Solaris Versions
3.5             2.6, 7, 8, and 9
3.2             2.6, 7, and 8
3.1.1           2.6, 7, and 8
3.1             2.6, 7, and 8
3.0.4           2.5.1, 2.6, 7, and 8
3.0.3           2.5.1, 2.6, 7, and 8
3.0.2           2.5.1, 2.6, and 7
3.0.1           2.5.1, 2.6, and 7
3.0             2.5.1 and 2.6

VERITAS File System release 3.5 operates on Solaris 2.6, 7, 8, and 9 in 32-bit and 64-bit mode. VERITAS recommends upgrading any previously installed VERITAS File System to VxFS 3.5. The following table shows the compatibility of VxFS versions with Solaris versions:
VxFS Version    Supported Solaris Versions
3.5             2.6, 7, 8, and 9
3.4             2.6, 7, and 8
3.3.3           2.5.1, 2.6, 7, and 8
3.3.2           2.5.1, 2.6, and 7
3.3.1           2.5.1 and 2.6
3.3             2.5.1 and 2.6

Compatibility with Other VERITAS Products
Many people use VERITAS File System in conjunction with other VERITAS products. If you use features such as VERITAS Quick I/O or VERITAS QuickLog, then you should also verify the compatibility of those features with the Solaris version.

The following table shows Solaris version compatibility with recent releases of features, such as VERITAS Quick I/O and VERITAS QuickLog, and other VERITAS products, such as VERITAS Storage Migrator (formerly VERITAS HSM), and NetBackup (NBU).
VxFS    Solaris                 Quick I/O   QuickLog   Storage Migrator (HSM)   NBU
3.5     2.6, 7, 8, and 9        3.5         3.5        4.5                      4.5
3.4     2.6, 7, and 8           3.4         3.4        3.4                      3.4
3.3.3   2.5.1, 2.6, 7, and 8    3.3.3       1.2        3.4                      3.2/3.4
3.3.2   2.5.1, 2.6, and 7       3.3.2       1.1        3.2                      3.2
3.3.1   2.5.1 and 2.6           3.3.1.1     1.0.5      3.1.6                    3.1.1

Version Release Differences
With each new release of VERITAS Foundation Suite, changes are made that may affect the installation or operation of VxVM and VxFS in your environment. By reading the version release notes and installation documentation included with the product, you can stay informed of any changes. For more information about specific releases of VERITAS Foundation Suite, visit the VERITAS Support Web site at:
http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of technical notes, access to product-specific news groups and e-mail notification services, and other information about contacting technical support staff.

VxVM Recent Release Notes
Some of the release notes for VxVM 3.5 include:
- VERITAS no longer supports VxVM 1.x, 2.0.x, 2.1.x, 2.2.x, 2.3.x, 2.4.x, and 2.5.x software.
- VERITAS Volume Manager no longer supports Solaris 2.3, 2.4, 2.5, and 2.5.1 operating systems.
- VERITAS Volume Manager no longer supports the Sun-4c product line.
- Volume Manager Visual Administrator (VxVA) is not compatible with VxVM versions 3.0 and later and is no longer available with VxVM.
- Volume Manager Storage Administrator (VMSA) is not compatible with VxVM versions 3.5 and later and is no longer available with VxVM.

VxFS Recent Release Notes
The following summarizes some of the release notes for recent VxFS versions:
- VxFS 3.5 is the last release to support VxFS version 1 and version 2 file system layouts.
- VxFS 3.4 does not operate on Solaris 2.5.1.
- VxFS 3.3.3 is the last release to support Solaris 2.5.1.
- Versions of VxFS earlier than 3.2.5 are no longer supported.
- VxFS versions 3.0 and 3.1 are specific to HP-UX.
- VxFS 3.3.2 introduced a patch process that is similar to the standard Sun patch process.

VxFS Kernel Issues
VxFS often requires more than the default 8K kernel stack size, so during the installation of VxFS 3.2.x and higher, entries are added to the /etc/system file. These entries increase the kernel thread stack size of the system to 24K. VxFS is a kernel-loadable driver, which means that it may load ahead of or behind other drivers when the system reboots. To avoid the possibility of problems in a failover scenario, you should:
- Maintain the same VxFS product version on all systems.
- Maintain the same product version of each VxFS add-on product.
- Ensure that forceload: fs/vxfs is in the /etc/system file on all systems.
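The resulting /etc/system entries look like the following. This excerpt is illustrative: the two stack-size parameters shown are the ones commonly added by the VxFS installation on Solaris, but verify the exact set against your release's installation guide (0x6000 is 24K):

  * Entries added by the VxFS installation (illustrative):
  set lwp_default_stksize=0x6000
  set rpcmod:svc_default_stksize=0x6000
  forceload: fs/vxfs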

VxVM and VxFS Software Packages


VERITAS Storage Solutions Products and Suites
VERITAS Volume Manager and VERITAS File System are the foundation components of many VERITAS storage solutions. The VERITAS Storage Solutions CD-ROM set contains:
- VERITAS Volume Manager
- VERITAS File System
- VERITAS Foundation Suite (includes VxVM and VxFS)
- VERITAS Volume Replicator
- VERITAS Database Edition for Oracle (includes VxVM and VxFS)
- VERITAS Cluster Server
- VERITAS SANPoint Foundation Suite (includes VxVM and VxFS)
- VERITAS Cluster Server QuickStart
- VERITAS Database Edition/Advanced Cluster for Oracle9i (includes VxVM and VxFS)
- VERITAS Cluster Server Traffic Director
In addition to VERITAS Foundation Suite, VxVM and VxFS are included in many of these product suites, as indicated. The packages that you install depend on the products and licenses that you have purchased. When you install a product suite, the component product packages are automatically installed. When installing any of the products, suites, or editions, always follow the instructions in the product release notes and installation guides.

Foundation Product Suites
VERITAS Foundation Suite is available in many forms:
- VERITAS Foundation Suite: Includes VxVM, VxFS, QuickLog, and SANPoint Control QuickStart.
- VERITAS Foundation Suite QuickStart: Includes limited-function feature sets for entry-level servers.
- VERITAS Foundation Suite HA: Adds VERITAS Cluster Server to VERITAS Foundation Suite.
- VERITAS SANPoint Foundation Suite: Provides simple and reliable data sharing in an enterprise-level SAN environment by implementing a clustered file system that leverages the strengths of file and volume management technologies, combined with sophisticated failover and clustering logic, to ensure consistency and availability. SANPoint Foundation Suite includes VERITAS Volume Manager, VERITAS Cluster Volume Manager, VERITAS File System, and VERITAS Cluster File System.
- VERITAS SANPoint Foundation Suite HA: Adds VERITAS Cluster Server to VERITAS SANPoint Foundation Suite for faster application failover.

Edition Products
As the foundation components for other value-added technologies, VERITAS Volume Manager and VERITAS File System are also included in the following Edition products:
- VERITAS Database Edition for DB2
- VERITAS Database Edition/HA for DB2
- VERITAS Database Edition for Oracle
- VERITAS Database Edition/HA for Oracle
- VERITAS Database Edition for Sybase
- VERITAS Database Edition/HA for Sybase
Note: HA = High Availability

VxVM Standard Packages
VERITAS Volume Manager consists of the following software packages:
- VRTSvxvm: This package contains the VxVM software drivers, daemons, and utilities.
- VRTSvlic: This package contains the VERITAS licensing utilities and must be installed to activate all VxVM and VxFS licensable features. Note: This package is new with VxVM 3.5 and later and can coexist on the same system with previous licensing packages.
- VRTSvmdoc: This package contains online copies of VxVM documentation.
- VRTSvmman: This package contains the VxVM manual pages.

VERITAS Enterprise Administrator Packages
The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Foundation Suite and other VERITAS products. You can use VEA to administer disks, volumes, and file systems on local or remote machines. Packages include:
- VRTSob: This package contains the VEA service.
- VRTSobgui: This package contains the VEA graphical user interface.
- VRTSvmpro: This package contains the VERITAS Virtual Disk Management provider, which populates the VEA GUI with volume management functions.
- VRTSfspro: This package contains the VERITAS File System provider, which populates the VEA GUI with file system functions.
When installing VEA, you should also use the installation administration file, VRTSobadmin, to ensure successful installation of the VRTSob package.
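When adding the packages manually with pkgadd, the licensing package is installed first, and the VEA packages can be added with the VRTSobadmin administration file. The following commands are illustrative; run them from the directory on the CD-ROM that contains the packages and the administration file:

  # pkgadd -d . VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman
  # pkgadd -a VRTSobadmin -d . VRTSob VRTSobgui VRTSvmpro VRTSfspro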
VxVM Package Space Requirements
Before you install any of the packages, confirm that your system has enough free disk space to accommodate the installation. VxVM programs and files are installed in the /, /usr, and /opt file systems. The following table shows the approximate minimum space requirements for each package and for each file system:

Package      Contents                           Size     File System
VRTSvxvm     Driver and utilities               59 MB    27 MB in /, 32 MB in /usr
VRTSvlic     Licensing utilities                2 MB     1 MB in /usr, 1 MB in /opt
VRTSvmman    Manual pages                       1 MB     /opt
VRTSvmdoc    Documentation                      30 MB    /opt
VRTSob       VERITAS Enterprise Administrator   33 MB    /opt
VRTSobgui    VEA GUI                            5 MB     /opt
VRTSvmpro    VxVM provider for VEA              8.5 MB   /opt
VRTSfspro    VxFS provider for VEA              --       /opt

Total minimum space requirements: 27 MB in /, 33 MB in /usr, and 78.5 MB in /opt.



VxFS Standard Packages
VERITAS File System consists of the following software packages:
- VRTSvxfs: This package contains the VERITAS File System software and manual pages.
- VRTSfsdoc: This package contains the VERITAS File System documentation in PDF format. If you do not want these documents online, then do not install this package.

Package Space Requirements
VxFS programs and files are installed in the /, /usr, and /opt file systems. Approximate minimum space requirements for each directory are:
Directory   Size      Contents
/           1.5 MB    Binaries
/usr        2.25 MB   Libraries
/opt        3.5 MB    Commands, manual pages
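After adding the packages, you can confirm that they are installed and check their versions with the standard Solaris pkginfo command, for example:

  # pkginfo -l VRTSvxfs VRTSfsdoc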

Other Options Included with Foundation Suite
In addition to VERITAS Volume Manager and VERITAS File System, VERITAS Foundation Suite includes these optional products and features:
- VERITAS QuickLog: VERITAS QuickLog is part of the VRTSvxfs package and is a feature designed to enhance file system performance. Although QuickLog can improve file system performance, VxFS does not require QuickLog to operate effectively. The VERITAS QuickLog license is included with VERITAS Foundation Suite and VERITAS Foundation Suite HA.
- VERITAS SANPoint Control QuickStart: VERITAS SANPoint Control is a separate software tool that you can use in a storage area network (SAN) environment to provide comprehensive resource management and end-to-end data path management from host to storage. With SANPoint Control, you have a single, centralized, consistent storage management interface that simplifies the complex tasks involved in deploying, managing, and growing a multivendor networked storage environment. The QuickStart version is a limited-feature version of this tool that consists of the following packages on your VERITAS CD-ROM:
  - VRTSspc: The VERITAS SANPoint Control console
  - VRTSspcq: The VERITAS SANPoint Control QuickStart software
Installing and operating VERITAS SANPoint Control is beyond the scope of this course. For detailed training, attend the VERITAS SANPoint Control course.

Other Options Available for Foundation Suite
The VxVM and VxFS packages contain the full functionality of additional optional feature sets that you can enable with separate licenses.

VxVM Optional Features
Optional features that you can enable with additional licenses include:
- VERITAS FlashSnap: The VRTSvxvm package contains a set of optional features called VERITAS FlashSnap. FlashSnap is an integral part of the Volume Manager software but requires a separate license key for use. FlashSnap facilitates point-in-time copies of data, while enabling applications to maintain optimal performance, by enabling features such as FastResync and disk group split and join functionality. FlashSnap provides an efficient method to perform offline and off-host processing tasks, such as backup and decision support.
- VERITAS FastResync: The FastResync option can be purchased separately or as part of the VERITAS FlashSnap option. FastResync speeds mirror synchronization by writing only changed data blocks when split mirrors are rejoined, minimizing the effect of mirroring operations.
- VERITAS Volume Replicator: The VRTSvxvm package also contains the VERITAS Volume Replicator (VVR) software. VVR is an integral part of the Volume Manager software but requires a separate license key to activate the functionality. Volume Replicator augments Volume Manager functionality to enable you to mirror data to remote locations over any IP network. Replicated copies of data can be used for disaster recovery, off-host processing, off-host backup, and application migration. Volume Replicator ensures maximum business continuity by delivering true disaster recovery and flexible off-host processing. VVR-related packages include:
  - VRTSvrdoc: This package contains online copies of VERITAS Volume Replicator documentation.
  - VRTSvrw: This package contains the VVR Web Console, the Web-based graphical user interface for administering VVR configurations using a Web browser.
  - VRTSweb: This package contains the VERITAS Web GUI Engine, which is used by all VERITAS products with Web GUIs, such as VVR, VERITAS Global Cluster Manager, and VERITAS Cluster Server QuickStart.
- Cluster Functionality: VxVM includes optional cluster functionality that enables VxVM to be used in a cluster environment. Cluster functionality is an integral part of the Volume Manager software but requires a separate license key to activate the features. A cluster is a set of hosts that share a set of disks; each host is referred to as a node in the cluster. The cluster functionality of VxVM allows up to 16 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control. The same logical view of disk configuration and any configuration changes are available on all of the nodes.
When the cluster functionality is enabled, all of the nodes in the cluster can share VxVM objects. Disk groups can be simultaneously imported on up to 16 hosts, and Cluster File System (an option to VERITAS File System) is used to ensure that only one host can write to a disk group during write operations. The main benefits of cluster configurations are high availability and off-host processing.

VxFS Optional Features
The VRTSvxfs package also contains these optionally licensable features:
- VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems. Quick I/O is a separately licensable feature available only with VERITAS Editions products.
Note: In previous VxFS distributions, the QuickLog and Quick I/O features were supplied in separate packages (VRTSqlog and VRTSqio, respectively).
- VERITAS Cluster File System: VERITAS Cluster File System (CFS) is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file. CFS is a separately licensable feature that is only available with VERITAS SANPoint Foundation Suite and VERITAS SANPoint Foundation Suite HA, because it requires an integrated set of VERITAS products to function. To configure a cluster and to provide failover support, CFS requires:
  - VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership services, and monitors the heartbeat links between systems to ensure that they are active.
  - VERITAS Volume Manager (VxVM): CFS requires the Cluster Volume Manager (CVM) feature of VxVM to create the cluster volumes necessary for mounting cluster file systems.
All of these products are included in the VERITAS SANPoint Foundation Suite product bundles.


Licenses Required for Optional Features

The following table describes the products and licenses required to enable optional volume management and file system features:

Feature                      Built into the         Licenses needed to
                             software package of:   enable the feature:
Disk group split and join    VxVM                   FlashSnap
Volume replication           VxVM                   Volume Replicator
Cluster Volume Manager       VxVM                   SANPoint Foundation Suite
FastResync                   VxVM                   FlashSnap or FastResync
Storage Checkpoints          VxFS                   FlashSnap or Database Edition
QuickLog                     VxFS                   Foundation Suite
Quick I/O                    VxFS                   Database Edition
Cluster File System          VxFS                   SANPoint Foundation Suite


VxVM and VxFS Licensing


VxVM and VxFS require license keys for use.
- Licensing utilities are contained in the VRTSvlic package, which is common to all VERITAS products. This package can coexist with previous licensing packages, such as VRTSlic.
- A new license key is not needed for an upgrade.
- Evaluation license keys must be replaced with permanent license keys.
- If a Sun StorEdge array is attached to a Sun system, the VxVM license is generated automatically. Sun bundles a version of VxVM with A5x00 storage arrays.

Adding License Keys


License Keys

VxVM and VxFS are licensed products that require valid license keys for use. You must have your license keys before you begin installation, because you are prompted for the license key during the installation process. VERITAS licensing utilities are contained in the VRTSvlic package on your software CD-ROM. Prior to VxVM and VxFS version 3.5, licensing utilities were contained in the VRTSlic package. These packages can coexist on your system.

Licensing for Upgrades

A new license key is not necessary if you are upgrading VERITAS software from a previously licensed version of the product.

Licensing for Evaluation

If you have an evaluation license key, you must obtain a permanent license key when you purchase the product. The VERITAS licensing mechanism checks the system date to verify that it has not been set back. If the system date has been reset, the evaluation license key becomes invalid.

Licensing for Sun StorEdge

If a Sun StorEdge array is attached to a Sun system, the VxVM license is generated automatically. The license is valid only while the StorEdge array is attached to the system. If the StorEdge array fails, the license remains valid for an additional 14 days.


Frequently Asked Questions About the A5x00-VxVM Bundle

What is the Sun A5x00 bundle?
Sun bundles a version of VERITAS Volume Manager with A5x00 storage arrays.

Do you need a license from Sun or VERITAS to use this version of VxVM?
When you attach the A5x00 array to your system, you can begin using VxVM. No license is required to use Volume Manager with an A5x00 array.

Is the bundled Volume Manager a fully featured version?
The version of Volume Manager that is shipped with the A5x00 is a fully featured version of Volume Manager. All of the Volume Manager features are available if used in conjunction with the A5x00 storage units.

What functionality exists for a non-A5x00 disk array within the storage subsystem?
On non-A5x00 disks, Volume Manager functionality is limited to:
- Basic Volume Manager functionality, with access to the GUI and command line utilities
- Concatenation
- Limited mirroring capabilities (a maximum of two mirrors on non-A5x00 disks)
- Spanning
The license key that is provided with the A5x00 bundle can identify non-A5x00 arrays that are connected to a storage pool and limits the functionality available to those non-A5x00 devices.

How can you take advantage of the fully featured version of Volume Manager within a mixed A5x00 and non-A5x00 environment?
To receive the additional functionality for non-A5x00 devices, you must upgrade from the lite version of Volume Manager that is bundled with an A5x00 to a full license of Volume Manager.

How can I obtain the upgrade?
Contact VERITAS or your local VERITAS account manager to order an upgrade.


Obtaining a License Key


To obtain a license key, either:
- Complete a License Key Request form and fax it to VERITAS customer support, or
- Create a vLicense account and retrieve license keys online. vLicense is a Web site that you can use to retrieve and manage your license keys.

To generate a license key, you must provide your:
- Customer number (located on your License Key Request form)
- Order number (located on your License Key Request form)
- Host ID: # hostid
- Machine type: # uname -i

Obtaining a License Key

When you purchase Foundation Suite, you receive a License Key Request form issued by VERITAS customer support. By using this form, you can obtain license keys by one of two methods:
- Complete the License Key Request form and fax it to VERITAS customer support. A license key is generated and returned to you by fax or e-mail.
- Create a vLicense account. vLicense is VERITAS Software's online license key retrieval Web site that you can use to retrieve and manage your license keys. This is the fastest way to receive a license key.

License keys are uniquely generated based on your system host ID number. To generate a new license key, you must provide the following information:
- Customer number (located on your License Key Request form)
- Order number (located on your License Key Request form)
- Host ID: To obtain the host ID of your system, use the command:
# hostid
- Host machine type: To obtain the host machine type, use the command:
# uname -i
The host type is listed in the first line of output that follows a blank line. For example, the host type of a Sun Fire 280R is:
SUNW,Sun-Fire-280R
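For example, on a Sun Fire 280R, collecting both values might look like the following. The host ID value shown here is a placeholder, not a real ID; your system returns its own value:

# hostid
80a1b2c3
# uname -i
SUNW,Sun-Fire-280R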


Generating License Keys


The vLicense online license management system enables you to:
- Access automatic license key generation and delivery.
- Manage and track license key inventory and usage.
- Locate and reissue lost license keys.
- Report, track, and resolve license key issues online.
- Consolidate and share license key information with other accounts.

http://vlicense.veritas.com


Generating License Keys with vLicense

VERITAS vLicense (vlicense.veritas.com) is a self-service online license management system. By setting up an account through vLicense, you can:
- Access automatic license key generation and delivery services. License key requests are fulfilled in minutes.
- Manage and track license key inventory and usage. Your complete license key inventory is stored online with detailed history and usage information.
- Locate and reissue lost license keys. Key history information provides you with an audit trail that can be used to resolve lost license key issues.
- Report, track, and resolve license key issues online. The online customer service feature within the license management system enables you to create and track license key service requests.
- Consolidate and share license key information with other accounts. For example, an account with Company A can share key information with its parent Company B, depending on the details of their licensing agreements.

Notes on vLicense
- vLicense currently supports production license keys only. Temporary, evaluation, or demonstration keys must be obtained through your VERITAS sales representative.
- Host ID changes must be processed manually and cannot be processed through the vLicense system. Contact VERITAS customer support for more details.


Adding License Keys


To add a license key:
# vxlicinst
VERITAS License Manager vxlicinst utility version 3.00.004
Copyright (C) VERITAS Software Corp 2002. All Rights reserved.
Enter your license key: IAAA-ZBBB-N222-9999-HHHH-PPPP-PCCC
License key successfully installed for VERITAS Volume Manager.

License keys are installed in /etc/vx/licenses/lic.



Adding a License Key

You can add license keys for VxVM and VxFS when you run the installation program or, if the VRTSvlic package is already installed, by using the vxlicinst command. To add a new license key after the VRTSvlic package is installed:
1. At the command line, type vxlicinst.
2. When prompted, type your license key number.
3. After you enter a valid key, the system verifies that the key is successfully installed. The key is installed in /etc/vx/licenses/lic.
4. To license additional features, rerun the vxlicinst command and enter a valid license key for each feature.

Adding a License Key: Example
# vxlicinst
VERITAS License Manager vxlicinst utility version 3.00.004
Copyright (C) VERITAS Software Corp 2002. All Rights reserved.
Enter your license key: IAAA-ZBBB-N222-9999-HHHH-PPPP-PCCC
License key successfully installed for VERITAS Volume Manager


Viewing License Keys


To view installed license key information:
# vxlicrep
Creating a report on all VERITAS products installed on this system
--------------***********************----------
License Key   = IAAA-ZBBB-N222-9999-HHHH-PPPP-PCCC
Product Name  = VERITAS Volume Manager
Serial Number = 888
License Type  = PERMANENT
OEM ID        = 111
Site License  = YES
Point Product = YES
Features :=
DMP      = Enabled
SSA_DMP  = Enabled
EMC_DMP  = Enabled
DGC_DMP  = Enabled
...

Viewing Installed License Keys

If you are not sure whether license keys have been installed, you can view installed license key information by using the vxlicrep command. To view currently installed licenses:
1. At the command line, type vxlicrep.
2. Information about installed license keys is displayed. This information includes:
- License key number
- Name of the VERITAS product that the key enables
- Type of license
- Features enabled by the key


Viewing Installed License Keys: Example


# vxlicrep
VERITAS License Manager vxlicrep utility version 3.00.004
Copyright (C) VERITAS Software Corp 2002. All Rights reserved.
Creating a report on all VERITAS products installed on this system
-----------------***********************-----------------
License Key   = IAAA-ZBBB-N222-9999-HHHH-PPPP-PCCC-CC
Product Name  = VERITAS Volume Manager
Serial Number = 888
License Type  = PERMANENT
OEM ID        = 111
Site License  = YES
Point Product = YES
Features :=
DMP          = Enabled
SSA_DMP      = Enabled
EMC_DMP      = Enabled
DGC_DMP      = Enabled
HITACHI_DMP  = Enabled
SEAGATE_DMP  = Enabled
...
-----------------***********************-----------------
License Key   = FFSS-ZBBB-N222-9999-HHHH-PPPP-PCCC-CC
Product Name  = VERITAS File System
Serial Number = 234
License Type  = PERMANENT
OEM ID        = 1111
Features :=
HP_OnLineJFS  = Enabled
HP_DMAPI      = Enabled
LINUX_LITE    = Enabled
VXFS          = Enabled
...

Note: The vxlicrep command reports all currently installed licenses for both VRTSvlic and the previous licensing package, VRTSlic.


Comparing Licensing Utilities


Description                              VRTSvlic               VRTSlic
Adding a license key                     vxlicinst              vxlicense -c
Viewing license keys                     vxlicrep               vxlicense -p
Path of installed license information    /etc/vx/licenses/lic   /etc/vx/elm

Managing Multiple Licensing Utilities

The current licensing utilities of the VRTSvlic package can coexist on your system with previous licensing utilities, such as those contained in the VRTSlic package. You should retain the VRTSlic package only if you have older products that rely on the previous licensing technology. Otherwise, you can remove the VRTSlic package. When you remove the VRTSlic package, existing license key files are not deleted and can be accessed by the VRTSvlic utilities. The following table compares functions of VRTSvlic and VRTSlic:
Description                              VRTSvlic                    VRTSlic
Adding a license key                     vxlicinst                   vxlicense -c
Viewing installed license keys           vxlicrep                    vxlicense -p
Path of installed license information    /etc/vx/licenses/lic        /etc/vx/elm
License key file naming scheme           key_string.vxlic            feature_number.lic
                                         (Example: ABCD-EFGH-IJKL-   (Example: 95.lic)
                                         MNOP-QRST-UVWX-YZ.vxlic)
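For example, if no installed products still rely on the older licensing technology, you could remove the previous licensing package with the standard Solaris pkgrm command. This is a sketch only; confirm first that no older VERITAS product on your system requires VRTSlic:

# pkgrm VRTSlic

Because the key files under /etc/vx/elm are not deleted by the package removal, the VRTSvlic utilities can still read the existing licenses afterward.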


Adding Packages: Installer


1. Log on as superuser.
2. Mount the VERITAS CD-ROM.
3. Run the Installer script:
# /cdrom/CD_name/installer
4. The Installer utility attempts to locate the licensing package, VRTSvlic. Respond as appropriate to install the licensing package.
5. The product status page is displayed:
VERITAS Product      Currently Installed   Licensed
+====================+====================+=====
Volume Manager       Not Installed         No
File System          Not Installed         No
Foundation Suite     Not Installed         No
Volume Replicator    Not Installed         No
FlashSnap            Not Installed         No
...
1=Add License Key 2=Installation Menu 3=Refresh
h=Help p=Product Descriptions q=quit

Adding Foundation Suite Packages


Methods for Adding Foundation Suite Packages

You can add the Foundation Suite packages by using one of two methods:
- By invoking the Installer utility available on the VERITAS Storage Solutions CD-ROM
- By manually adding the software packages from the command line by using the pkgadd command

Adding Packages with the Installer

The Installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Foundation Suite or VERITAS Database Edition. When you add Foundation Suite packages by using the Installer utility, all VxVM, VxFS, and VEA packages are installed. If you want to add a specific package only, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.

Note: The VERITAS Storage Solutions CD-ROM contains an Installer guide (installer.pdf) that describes how to use the Installer utility. You should also read all product installation guides and release notes even if you are using the Installer utility.


To add the Foundation Suite packages using the Installer:
1. Log on as superuser.
2. Mount the VERITAS CD-ROM. If Solaris volume management software is running on your system, then the CD is automatically mounted as /cdrom/CD_name. If Solaris volume management software is not available, then you can mount the CD manually by using the mount command. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /mnt
c0t6d0s2 is the default address for the CD, and /mnt is the mount point.
3. Run the Installer script:
# /cdrom/CD_name/installer
4. The Installer utility attempts to locate the licensing package, VRTSvlic. This package must be installed for the installation to continue.
Looking for package VRTSvlic
Currently installed: 0
Minimum Version: 3.00.000
For Release Train installation to continue, VRTSvlic must be installed or upgraded.
Do you want to install it now? [y,n]:
Type y to install VRTSvlic and respond to the prompts as appropriate to continue. The license utilities are installed in /etc/vx/licenses.
5. After the licensing utilities are installed, the VERITAS product status page is displayed. This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product.
VERITAS Product                     Currently Installed   Licensed
+=============================+====================+=====
Volume Manager                      Not Installed         No
File System                         Not Installed         No
Foundation Suite                    Not Installed         No
Volume Replicator                   Not Installed         No
FlashSnap                           Not Installed         No
Database Edition for Oracle         Not Installed         No
Advanced Cluster for Oracle9i/RAC   Not Installed         No
Cluster Server QuickStart           Not Installed         No
Cluster Server                      Not Installed         No
Cluster Server Traffic Director     Not Installed         No
SanPoint Foundation Suite           Not Installed         No
1=Add License Key 2=Installation Menu 3=Refresh
h=Help p=Product Descriptions q=quit
Enter [1,2,3,h,p,q]:


Adding Packages: Installer


6. Type 1 to enter a license key. After entering a license key, you return to the product status page.
7. Type 2 to install Foundation Suite packages.
8. A list of available products is displayed.
Available products:
1) Volume Manager
2) File System
3) Foundation Suite (QuickStart, HA & FlashSnap)
4) Volume Replicator
5) Database Edition for Oracle
...
Enter the number of the product to install [1-11,q,h]:
Type 3 to install the Foundation Suite packages.
9. When the installation is complete, you return to the installation menu. You can install additional VERITAS products or type q to exit from the menu system.

6. To add a license key, type 1 and press Return. You are prompted to enter a license key, and then you are returned to the product status page.
7. From the product status page, to install the Foundation Suite packages, type 2 and press Return.
8. A list of available products is displayed.
Available products:
1) Volume Manager
2) File System
3) Foundation Suite (QuickStart, HA & FlashSnap)
4) Volume Replicator
5) Database Edition for Oracle (Enterprise, Standard, HA, Storage Mapping & FlashSnap)
6) Cluster Server
7) SanPoint Foundation Suite (HA & FlashSnap)
8) Cluster Server QuickStart (Custom & Oracle)
9) Database Edition/Advanced Cluster for Oracle 9i
10) Cluster Server Traffic Director
q) Return to main menu
h) Installation help
Enter the number of the product to install [1-11,q,h]: 3
Type 3 and press Return to begin installing the Foundation Suite packages.
9. When the installation is complete, you return to the installation menu. You can install additional VERITAS products or type q to exit from the menu system.


Adding Packages Manually


1. Log on as superuser.
2. Mount the VERITAS CD-ROM.
3. Add the packages:
# pkgadd -d /cdrom/CD_name/product_name/pkgs packages
For example:
# pkgadd -d /cdrom/CD_name/product_name/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSvxfs VRTSfsdoc

Note: You must list the VRTSvlic package first, followed by the VRTSvxvm package, and then the other packages in the command line. The VRTSvxfs package must precede the VRTSfsdoc package.

Adding Packages Manually with pkgadd

To install the Foundation Suite software packages manually, you use the pkgadd command:
1. Log on as superuser.
2. Mount the VERITAS CD-ROM.
3. Add the packages by using the pkgadd command:
# pkgadd -d /cdrom/CD_name/product_name/pkgs packages
Note: You must list the VRTSvlic package first, followed by the VRTSvxvm package, and then the remaining packages in the command line.
For example, to install the licensing utilities, the VxVM software, documentation, and manual pages, and the VxFS software and documentation:
# pkgadd -d /cdrom/storage_solutions_solaris_3.5cd1/volume_manager/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSvxfs VRTSfsdoc
4. When you install the packages, the system displays a series of status messages as the installation progresses. During installation of VxFS, you receive a series of questions. To continue the installation, respond to the questions by typing y.
5. When the installation is complete, reboot the system.


Modifications to /etc/system
The installation procedure modifies the /etc/system file by adding:
* vxfs_START -- do not remove the following lines:
*
* VxFS requires a stack size greater than the default 8K.
* The following values allow the kernel stack size
* for all threads to be increased to 24K.
*
set lwp_default_stksize=0x6000
* vxfs_END

The original /etc/system file is copied to /etc/fs/vxfs/system.preinstall.



Modifications to /etc/system

The VxFS installation procedure modifies the /etc/system file by adding the following lines:
* vxfs_START -- do not remove the following lines:
*
* VxFS requires a stack size greater than the default 8K.
* The following values allow the kernel stack size
* for all threads to be increased to 24K.
*
set lwp_default_stksize=0x6000
* vxfs_END

The original /etc/system file is copied to /etc/fs/vxfs/system.preinstall. The modifications are removed when the VxFS package is removed with pkgrm.
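One quick way to confirm that the modification is in place is to search /etc/system for the stack size setting and, if necessary, compare the current file against the preserved copy. The grep output line shown here is illustrative:

# grep lwp_default_stksize /etc/system
set lwp_default_stksize=0x6000
# diff /etc/fs/vxfs/system.preinstall /etc/system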


Verifying Package Installation


To list all installed packages:
# pkginfo

To list all installed VERITAS packages:
# pkginfo | grep VRTS

To list detailed information about a package:
# pkginfo -l VRTSvxvm


Verifying Package Installation

If you are not sure whether the VxVM or VxFS packages are installed, or if you want to verify which packages are installed on the system, you can use the pkginfo command to view information about installed packages.

Listing Installed VERITAS Packages

To list all installed packages on the system, you type the command pkginfo at the command line:
# pkginfo

To restrict the list to installed VERITAS packages, you type:


# pkginfo | grep VRTS


Example: Listing Installed VERITAS Packages


# pkginfo | grep VRTS
system       VRTSfsdoc   VERITAS File System Documentation
application  VRTSfspro   VERITAS File System Management Services Provider
application  VRTSob      VERITAS Enterprise Administrator Service
application  VRTSobgui   VERITAS Enterprise Administrator
application  VRTSvlic    VERITAS License Utilities
system       VRTSvmdoc   VERITAS Volume Manager (user documentation)
system       VRTSvmman   VERITAS Volume Manager, Manual Pages
application  VRTSvmpro   VERITAS Volume Manager Management Services Provider
system       VRTSvxfs    VERITAS File System
system       VRTSvxvm    VERITAS Volume Manager, Binaries

Listing Detailed Package Information

To display detailed information about a package, you type pkginfo -l followed by the name of the package.

Example: Listing Detailed Package Information
# pkginfo -l VRTSvxvm
   PKGINST:  VRTSvxvm
      NAME:  VERITAS Volume Manager, Binaries
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  3.5,REV=06.21.2002.23.14
   BASEDIR:  /
    VENDOR:  VERITAS Software
      DESC:  Virtual Disk Subsystem
    PSTAMP:  VERITAS-3.5s_PointPatch1.3:26-July-2002
  INSTDATE:  Aug 07 2002 11:28
   HOTLINE:  800-342-0652
     EMAIL:  support@veritas.com
    STATUS:  completely installed
     FILES:  601 installed pathnames
              24 shared pathnames
               9 linked files
              76 directories
             338 executables
          159058 blocks used (approx)
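In addition to pkginfo, you can check the integrity of the installed files with the standard Solaris pkgchk command. This is a general Solaris verification step rather than a VERITAS-specific one; pkgchk produces no output when the package contents match the package database:

# pkgchk VRTSvxvm
# pkgchk VRTSvxfs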


Planning VxVM Setup


Before you run the VxVM installation program, decide:
- Which disks to place under VxVM control?
- Use enclosure-based naming?
- Exclude any disks?
- Prevent multipathing for any disks?
- Place the boot disk under VxVM control?
- Preserve or eliminate disk data?
- Set up all disks in a disk array differently or in the same way?

Planning VxVM Setup


Planning a First-Time VxVM Setup

Before you install and set up Volume Manager for the first time, you should know the contents of your physical disks and decide how you want these disks to be used by Volume Manager. During the installation, you specify how disks are to be handled by Volume Manager. When you run the installation program, you answer these questions:
- Which disks do you want to place under Volume Manager control?
- Do you want to use enclosure-based naming?
- Do you want to exclude disks from Volume Manager control?
- Do you want to suppress dynamic multipathing for any disks?
- When you place disks under Volume Manager control, do you want to preserve or eliminate data in existing file systems and partitions?
- Do you want to place the system boot disk under Volume Manager control?
- Do you want to set up each disk in a disk array differently, or do you want to set up all disks in a disk array in the same way?

At the installation stage, the only disk group that is created is rootdg. You create other disk groups after VxVM installation.


Planning VxVM Setup


Which disks to place under VxVM control?
- Which disk is the boot disk?
- Which disk will be used to mirror the boot disk?
- Which disks will be used for other purposes?

A VxVM disk must have:
- Two free partitions (private region and public region)
- A default of 2048 sectors (1024K) for Volume Manager header and configuration information

Which disks do you want to place under Volume Manager control?

The purpose of running the VxVM installation program is to create the root disk group rootdg. The rootdg disk group is required so that the VxVM configuration daemon (vxconfigd) can start up in enabled mode. Before running the installation program, you should decide which disks to place into the root disk group, rootdg, and which disks to use for other functions.

Important: Use rootdg only for the root file system and its mirrors:
- If you do not plan to bring the operating system (OS) system disk under VxVM control, then you must place a different disk in rootdg.
- Otherwise, the only disks that you should place under VxVM control during installation are the boot disk and disks that you plan to use to mirror the boot disk.
You can add other disks and disk groups after VxVM is installed by using VxVM utilities.

Any disk to be managed by the Volume Manager should have two free partitions (one for the private region and one for the public region) and a small amount of free space at the beginning or end of the disk that does not belong to a partition. This space is used for storing disk group configuration and Volume Manager header information. This space ensures that Volume Manager can identify the disk, even if it is moved to a different address or controller, and also helps to ensure correct recovery in case of disk failure. The private region is 2048 sectors (1024K) in size by default and is rounded up to the nearest cylinder boundary; that is, an integer number of cylinders is always allocated for the private region.


Planning VxVM Setup


Use enclosure-based naming?
- Standard device naming is based on controllers, for example, c1t0d0s2.
- Enclosure-based naming is based on disk enclosures, for example, enc0.
- Enclosure-based naming benefits a SAN environment.

Note: Disk naming is covered in more detail in the lesson "Managing Disks."

Do you want to use enclosure-based naming?

As an alternative to standard disk device naming (for example, c0t0d0), VxVM 3.2 and later versions provide enclosure-based naming. An enclosure, or disk enclosure, is an intelligent disk array, containing a backplane with a built-in Fibre Channel loop, which permits hot-swapping of disks. With VxVM, disk devices can be named for enclosures rather than for the controllers through which they are accessed.

In a storage area network (SAN) that uses Fibre Channel hubs or fabric switches, information about disk location provided by the operating system may not correctly indicate the physical location of the disks. For example, c#t#d#s# naming assigns controller-based device names to disks in separate enclosures that are connected to the same host controller. Enclosure-based naming allows VxVM to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures.

Enclosure-based naming is also useful when managing the dynamic multipathing (DMP) feature of VxVM. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, represented by an enclosure name such as enc0_0, to access the disk.

When you install VxVM, you are prompted as to whether you want to use enclosure-based naming. You can also change the naming scheme at a later time by using VxVM utilities.


Planning VxVM Setup


Exclude any disks?

Are there any disks that you want to exclude from Volume Manager control?

/etc/vx/disks.exclude:
c0t1d0
c2t0d2
c3t2d2

/etc/vx/cntrls.exclude:
c1
c4
c5

/etc/vx/enclr.exclude:
sena0
sena1
sena2

Do you want to exclude any disks, controllers, or enclosures from Volume Manager control?

If there are disks that you want to exclude from Volume Manager control, then you can specify those disks in exclusion files:
- To exclude specific disks from Volume Manager control, create the file /etc/vx/disks.exclude and add those disks to the file. The installation program ignores any disks that you list in this file. The following is an example of the contents of the file:
c0t1d0
- To exclude all disks on an entire controller from Volume Manager control, create the file /etc/vx/cntrls.exclude and add the name of the controller to the file. The installation program ignores any controllers that you list in this file. The following is an example of the contents of the file:
c0
c1
- To exclude all disks in a specific enclosure from Volume Manager control, create the file /etc/vx/enclr.exclude and add the name of the enclosure to the file. The following is an example of the contents of the file:
sena0
sena1

Note: The VxVM installation process scans all SCSI controllers for all attached disks. The ability to exclude disks or controllers is useful if you have a large number of disks that are not to be placed in the rootdg disk group, in order to reduce the time required for the installation process.
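Because the exclusion files are ordinary text files with one entry per line, you can build them from the shell before running the installation program. The device, controller, and enclosure names here are examples only; substitute the ones on your system:

# echo c0t1d0 >> /etc/vx/disks.exclude
# echo c1 >> /etc/vx/cntrls.exclude
# echo sena0 >> /etc/vx/enclr.exclude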

Planning VxVM Setup


Prevent multipathing for any disks?
- For VxVM 3.1.1 and later, the DMP driver must always be present on the system for VxVM to function. However, you can suppress multipathing for specific devices or for all devices.
- You can suppress devices from DMP or from VxVM's view during the installation process or at a later time by using VxVM utilities.

Do you want to suppress dynamic multipathing on any disks?

With VxVM version 3.1.1 and later, the DMP driver must always be present on the system for VxVM to function. However, you can prevent VxVM from multipathing some or all devices without removing the DMP layer. You may want to prevent multipathing for some devices if, for example, the devices are using other multipathing software.

When you install VxVM, you have the opportunity to prevent dynamic multipathing (DMP) on specific devices or all devices connected to the system. You can also suppress disks from VxVM's view during the installation process. You can change your decision about a device at a later time by using VxVM utilities.


Planning VxVM Setup


Preserve or eliminate disk data?

When you bring a disk under Volume Manager control, you must either encapsulate or initialize the disk:
- Encapsulation preserves disk data.
- Initialization eliminates disk data.

When you place disks under Volume Manager control, do you want to preserve or eliminate data in existing file systems and partitions?

When you place a disk under Volume Manager control, you can either preserve the data that exists on the physical disk (encapsulation) or eliminate all of the data on the physical disk (initialization).

Encapsulation: Saving the data on a disk brought under Volume Manager control is called disk encapsulation. Disks to be encapsulated must:
- Contain the required minimum unpartitioned free space of 1024 sectors (512K) (By default, VxVM uses 2048 sectors (1024K).)
- Contain an s2 slice that represents the full disk (The s2 slice cannot contain a file system.)
- Contain two free partition table entries
The partitions are converted to subdisks that are used to create the volumes that replace the Solaris partitions.

Initialization: Eliminating all of the data on a physical disk brought under Volume Manager control is called disk initialization.

Any disks that are encapsulated or initialized during installation are placed in the disk group rootdg. If disks are left alone during installation, they can be placed under Volume Manager control later and assigned to disk groups other than rootdg.

Note: Encapsulation is covered in detail in a later lesson.
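Before deciding whether a disk can be encapsulated, you can inspect its current partition table with the standard Solaris prtvtoc command. This check is not part of the installation program itself, and the device name here is an example; the output lets you confirm that the s2 slice spans the full disk and that at least two partition table entries are unused:

# prtvtoc /dev/rdsk/c1t0d0s2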


Planning VxVM Setup


Place boot disk under VxVM control?
- If you plan to mirror the boot disk, then you must encapsulate it.
- If you do not plan to mirror the boot disk, then do not encapsulate it.
- On an encapsulated boot disk, the /, /usr, /var, and swap partitions are converted to subdisks that are used to create the boot volumes (rootvol, usr, var, and swapvol) that replace the Solaris partitions. A private region is also created on the disk.

Do you want to place the system boot disk under Volume Manager control?

If you plan to mirror the boot disk, then you must place the boot disk under Volume Manager control, which requires encapsulation to preserve the original data on the disk. If you do not plan to mirror the boot disk, then do not place it under Volume Manager control. Encapsulating and mirroring the boot disk is recommended for a high availability environment.

When you encapsulate the boot disk, existing data and boot information is saved on the disk, and partitions are converted into volumes:
- Existing /, /usr, and /var partitions are converted to volumes without removing the partitions.
- Other partitions are converted to volumes, and then the partitions are removed.
- The existing swap area is converted to a volume. If there is insufficient space for the private region on the boot disk, Volume Manager takes sectors from the swap area of the disk, which makes the private region overlap the public region. The swap partition remains the same size, and the swap volume is resized to be smaller than the swap partition.
- The /etc/system and /etc/vfstab files are modified.

Note: Volume Manager preserves a copy of the original VTOC of any disk that is encapsulated in /etc/vx/reconfig.d/disks.d/cxtydz/vtoc, where cxtydz is the SCSI address of the disk.
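For example, after encapsulating a boot disk at the common address c0t0d0, you could review the preserved partition table with a command such as the following (substitute your own boot disk's SCSI address for c0t0d0):

# cat /etc/vx/reconfig.d/disks.d/c0t0d0/vtoc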


Planning VxVM Setup


Set up all disks on a controller differently or in the same way?

For each controller (or disk array), do you want to:
- Encapsulate all disks?
- Initialize all disks?
- Encapsulate some and initialize others?
- Leave all disks alone?

Do you want to set up each disk on a controller differently, or do you want to set up all disks on a controller in the same way?

When you run the installation program, all disks that are attached to controllers are eligible for VxVM encapsulation or initialization. During installation, you can choose to encapsulate or initialize individual disks, all disks on each controller (or disk array), or all disks on the system.


Typical VxVM Initial Setup


In this setup, the boot disk c0t0d0 (on controller c0) is encapsulated as rootdisk, and the data disk c1t0d0 (on controller c1) is initialized as disk01; both are placed in the rootdg disk group. At initial setup, the other four disks are left alone. You can encapsulate or initialize these disks later using VxVM utilities.

Example: Typical Initial VxVM Setup

In the example, the boot disk c0t0d0 is encapsulated, and the data disk c1t0d0 is initialized. disk01 is used for a mirror of the rootdisk. To remove single points of failure, it is recommended that you mirror the boot disk across controllers. Therefore, disk01 is on a different controller from the rootdisk. These disks are placed in the disk group rootdg.

Note: Adding another disk to rootdg during the vxinstall process does not mean that the boot disk is mirrored, only that a disk has been placed in the rootdg disk group that you can use for mirroring the boot disk. Mirroring is a separate operation that you perform later after a successful installation.

The remaining disks are left alone during installation. All of the data on these disks is left intact. These disks are not available for use by Volume Manager until they are either encapsulated or initialized. These disks can be placed under Volume Manager control and can be assigned to disk groups other than rootdg at a later time using VxVM utilities.
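As a sketch of that separate mirroring step, one common approach after installation is the vxrootmir utility shipped with VxVM. The full procedure and its prerequisites are covered in a later lesson; disk01 here is the disk media name from the example above:

# /etc/vx/bin/vxrootmir disk01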


The vxinstall Program


- Use vxinstall to create the rootdg disk group and place disks under VxVM control.
- rootdg must contain at least one disk at all times for VxVM to operate.
- Do not run vxinstall on a system that already has rootdg.
- Set up disks in disk groups other than rootdg later using other VxVM utilities.

Installing VxVM for the First Time


The vxinstall Program

After adding the software packages, you are ready to configure Volume Manager for initial use by using the interactive installation program called vxinstall. The sole purpose of running vxinstall is to create the rootdg disk group. Volume Manager requires that the rootdg disk group exists and that it contains at least one disk. At least one disk must remain in rootdg at all times while VxVM is running.

You should run vxinstall only once per system, except in special troubleshooting situations, for example, if rootdg does not exist. You should never run vxinstall on a system that already has a rootdg disk group.

When using vxinstall, it is only possible to place disks in the rootdg disk group. Any disks that are to be managed under other disk groups (for example, disks containing application data) can be configured later using the standard VxVM interfaces. You must be logged on as superuser in order to run the vxinstall program.
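After vxinstall completes and the system reboots, you can confirm that rootdg was created and see which disks it contains by using the standard VxVM reporting commands (both are covered in later lessons):

# vxdg list
# vxdisk list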


The vxinstall Process


1. Invoke vxinstall.
2. Enter license keys.
3. Select a naming method.
4. Suppress multipathing?
5. Select installation method.
6. Encapsulate boot disk?
7. Set up nonboot disks.
8. Verify setup choices.
9. Shut down and reboot.

The vxinstall Process

The vxinstall program is an interactive program that guides you through the installation process. During the installation, you are asked to respond to a series of questions about how you want Volume Manager to handle disks in your system. The main steps in the vxinstall process are:
1. Enter the vxinstall command to begin the installation process.
2. Enter valid license keys when prompted.
3. Select a format for naming devices on the host.
4. Specify whether you want to prevent any devices from dynamic multipathing (DMP).
5. Select an installation method. You can select either the Quick or Custom installation method.
6. Specify whether you want to encapsulate the boot disk.
7. Specify how Volume Manager should handle other disks identified in your system.
8. Verify your selections for bringing disks under Volume Manager control.
9. Shut down and reboot your system.


Start vxinstall

1. To begin the interactive installation process, you type vxinstall at the command line:
# vxinstall
2. Licensing information is displayed, and you are prompted to enter a key:
Some licenses are already installed.
Do you wish to review them [y,n,q,?] (default: y) y
Do you wish to enter another license key [y,n,q,?] (default: n)

Step 1: Start the vxinstall Program

To start the vxinstall program, you type vxinstall at the command line:
# vxinstall

Step 2: Enter License Keys

When prompted, enter a valid license key. The vxinstall program first runs the vxlicense command to initialize the Volume Manager license key file. The vxlicense command displays licensing information and then prompts you for a key. You must enter a valid key in order to proceed with the initialization.
Some licenses are already installed.
Do you wish to review them [y,n,q,?] (default: y) y
Do you wish to enter another license key [y,n,q,?] (default: n)

Note: The presence of certain hardware arrays, such as A5000, automatically generates a key. The vxinstall program does not prompt for another key.


Select Naming Method



3. When prompted, specify whether you want to use the enclosure-based naming format.
VxVM will use the following format to name disks on the host:
<enclosurename>_<diskno>
...
The Volume Manager has detected the following categories of storage connected to your system:
Enclosures: enc01 sena0 sena1 sena3 sena4 sena5
Others: others0


Step 3: Select a Naming Method

The vxinstall program examines and lists all controllers attached to the system, and then prompts you to specify whether you want to use enclosure-based naming. If you choose to use enclosure-based naming, you receive the following output:
Generating list of attached enclosures....
VxVM will use the following format to name disks on the host:
<enclosurename>_<diskno>
In the above format, <enclosurename> is the logical name of the enclosure to which the disk belongs. VxVM assigns default enclosure names which can be changed according to the user requirements.
...
The Volume Manager has detected the following categories of storage connected to your system:
Enclosures: enc01 sena0 sena1 sena3 sena4 sena5
Others: others0
You can use the default enclosure-based names assigned by VxVM, or you can assign new names to enclosures. You can change the naming format after VxVM installation by using VxVM utilities.


Prevent Multipathing?

4. To prevent specific devices from multipathing or to suppress them from VxVM's view, select menu option 3:
1 Quick Installation
2 Custom Installation
3 Prevent multipathing/Suppress devices from VxVM's view
...
A reboot is required for any device exclusion procedures. After rebooting, rerun vxinstall to continue with the VxVM installation.

Step 4: Suppress Multipathing (Optional)

After a brief introduction to the installation process, a menu of options is displayed. If you want to prevent multipathing for any devices on the system, then you should suppress the devices before you continue the installation process. To prevent multipathing or suppress devices, select menu option 3:

1 Quick Installation
2 Custom Installation
3 Prevent multipathing/Suppress devices from VxVM's view
? Display help about menu
?? Display help about menuing system
q Exit from menus

When you select menu option 3, a more detailed menu of options is displayed that enables you to:
- Suppress all paths through a controller, specific paths, specific disks, or all but one path to a disk from VxVM's view.
- Prevent multipathing of all disks on a controller or of specific disks.
- List currently suppressed or nonmultipathed devices.

If you choose to perform any of the device exclusion procedures, then a reboot is required. When prompted, reboot the system and rerun the vxinstall program to continue with the installation.


Select Installation Method



5. Select an installation method from the menu of options:
1 Quick Installation
2 Custom Installation   (Recommended)
3 Prevent multipathing/Suppress devices from VxVM's view
...
Select an operation to perform:
Custom installation enables you to set up each disk individually and is the recommended method. Quick installation sets up all disks per controller in the same way.


Step 5: Select an Installation Method

To continue with the VxVM installation process, you select Quick Installation or Custom Installation at the menu of options. Custom Installation is recommended.

1 Quick Installation
2 Custom Installation
3 Prevent multipathing/Suppress devices from VxVM's view
? Display help about menu
?? Display help about menuing system
q Exit from menus

Quick Installation: This option enables you to initialize all disks or to encapsulate all disks. Quick installation is generally not recommended.

Custom Installation: This option enables you to handle each disk individually and to control which disks are placed under VxVM control. You can initialize or encapsulate all disks on a controller, or initialize some disks and encapsulate others. This is the recommended method for setting up Volume Manager.

Exiting vxinstall: By typing q anywhere in the vxinstall program, you exit out of the entire program, and no installation is executed. None of your choices are executed or saved.

Displaying Help Information: The single question mark (?) displays a help file describing the current operation or menu choices. The double question mark (??) displays general information about using the vxinstall program.


Encapsulate Boot Disk



6. Specify whether or not to encapsulate the boot disk:
The c0t0d0 disk is your Boot Disk...
[Encapsulation] is required if you wish to mirror your root file system or system swap area.
Encapsulate Boot Disk [y,n,q,?] (default: n)
Encapsulating the boot disk is recommended, unless you are not planning to mirror your root file system. The default disk media name for the boot disk is rootdisk.

Step 6: Encapsulate the Boot Disk

Whether you select the Quick or Custom installation method, the next step in the vxinstall process is to decide whether or not to encapsulate the boot disk. The vxinstall program identifies your boot disk and provides information about bringing your boot disk under Volume Manager control. On most Solaris systems, the boot disk has a device name of c0t0d0. When prompted, indicate whether to encapsulate your boot disk:
The c0t0d0 disk is your Boot Disk. You can not add it as a new disk. If you encapsulate it, you will make your root file system and other system areas on the Boot Disk into volumes. This is required if you wish to mirror your root file system or system swap area.
Encapsulate Boot Disk [y,n,q,?] (default: n)
If you enter y, the vxinstall program encapsulates your root file system as a volume, along with your swap device and all other disk partitions found on your boot disk. The /usr, /opt, and /var file systems, and any other file systems on your boot disk, are also encapsulated. After the boot disk has been configured for encapsulation, you are asked to give the disk a disk media name to identify the boot disk to Volume Manager. The default disk media name for the boot disk is rootdisk. It is recommended that you use this default name.


Set Up Other Disks



7. Next, you specify whether you want to set up any other disks at this time. VxVM examines each disk array and lists the detected disks. For each disk array, you can encapsulate all disks, initialize all disks, handle disks one at a time, or leave the disks alone:
1 Install all disks as pre-existing disks. (encapsulate)
2 Install all disks as new disks. (discards data on disks!)
3 Install one disk at a time.
4 Leave these disks alone.
...
Select an operation to perform:

Step 7: Set Up Other Disks

After specifying how to handle the boot disk, you specify how you want Volume Manager to handle your data disks. The vxinstall program identifies the remaining data disks in each disk array.

Note: This section assumes that you selected Custom Installation. The Quick Installation option follows a similar sequence of steps, but you do not have the option of setting up individual disks differently.

The vxinstall program examines each disk array and asks you how to handle the disks contained in that array. The vxinstall program first identifies the array and generates a list of its disks. If any disks are listed in the exclusion files, they are listed separately as excluded disks. A menu provides four options for each disk array:

Installation options for enclosure enc0
Menu: VolumeManager/Install/Custom/enc0
1 Install all disks as pre-existing disks. (encapsulate)
2 Install all disks as new disks. (discards data on disks!)
3 Install one disk at a time.
4 Leave these disks alone.
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform:


Option 1: Encapsulating All Disks

To encapsulate all disks in the disk array, select option 1. Volumes are created from partitions, and the /etc/vfstab file is updated to ensure that file systems previously mounted on disk partitions are mounted as volumes instead. You are prompted to assign disk media names to all of the disks on the controller. It is recommended that you use the default names:
Use default disk names for these disks? [y,n,q,?] (default: y)
If you type y, the vxinstall program automatically assigns and lists default disk names for each disk, for example:
The c1t0d0 disk will be given disk name disk01
The c1t0d0 disk has been configured for encapsulation.
The c1t1d0 disk will be given disk name disk02
The c1t1d0 disk has been configured for encapsulation.
Hit RETURN to continue.
If you type n, the vxinstall program prompts for a disk name for each disk on the controller individually:
Enter disk name for c1t0d0 [<name>,q,?] (default: disk01)
For each disk, enter the desired disk name and press Return. When all of the disks in the current disk array are named, press Return to move to the next disk array.

Option 2: Initializing All Disks

To initialize all disks in the disk array, select option 2. Initializing a disk destroys all data and partitions and makes the disk available as free space for allocating new volumes or mirrors of existing volumes. As with option 1, you are prompted to assign disk media names to all of the disks in the disk array. It is recommended that you use the default names. When all of the disks in the current disk array have been named, press Return to move on to the next disk array.

Option 3: Installing Individual Disks

To install one disk at a time, select option 3. Each disk is handled separately, and you are prompted for information on a per-disk basis. This allows you to install a disk as a preexisting disk, install it as a new disk, or leave it alone. You are prompted to indicate how you want the named disk to be handled. The options presented are similar to those in the Custom Installation menu.

Option 4: Leaving Disks Unaltered

To leave all disks in the disk array unaltered, select option 4. No changes are made to the disks, and they are not placed under Volume Manager control. If applications are currently using these disks and you do not want to upgrade these applications to use Volume Manager, use this option to ensure that your applications continue to use the disks without modification.

Verify Setup Choices



8. After specifying which disks will be placed under VxVM control, a summary of your choices is displayed, followed by a prompt:
Is this correct [y,n,q,?] (default: y)

This is your last opportunity to alter your setup choices.


If you enter y, vxinstall encapsulates and initializes your disks as specified. If you enter n, vxinstall prompts you for the name of the disk to be excluded from VxVM control:

Enter disk to be removed from your choices. Hit return when done.
[<name>,q,?]

Step 8: Verify Your Setup Choices

When you have completed the vxinstall procedure for all disk arrays on your system, the vxinstall program displays a summary of the disks you have designated for initialization (New Disk) or encapsulation (Encapsulate). For example:

The following is a summary of your choices.
    c0t0d0    Encapsulate
    c2t2d3    New Disk
Is this correct [y,n,q,?] (default: y)

This is your last chance to review and alter your choices for how to handle any of the disks to be placed under Volume Manager control. If you type y, the vxinstall program proceeds to encapsulate all disks listed with Encapsulate and to initialize all disks listed with New Disk. If you type n, the vxinstall program prompts you for the name of a disk to be removed from the list and excluded from Volume Manager control:

Enter disk to be removed from your choices. Hit return when done.
[<name>,q,?]

To alter your setup choices, enter the name of the disk to be removed from the list and press Return. The vxinstall program displays an updated summary without the disks chosen for removal.


Shut Down and Reboot



9. VxVM informs you when a shutdown and reboot are necessary:


The system now must be shut down and rebooted in order to continue the reconfiguration.
Shutdown and reboot now [y,n,q,?] (default: n)


Step 9: Shut Down and Reboot

After you specify how the vxinstall program processes all of the disks attached to your system, you may need to reboot the system to make changes to your disk partitioning that cannot be made while your disks are in use. The way in which you handle your disks during the vxinstall session determines whether a shutdown and reboot are required. If you encapsulated any disks, a reboot is required; the setup you choose can require more than one reboot. The vxinstall program informs you when a shutdown and reboot are necessary by displaying a message similar to the following:

The system now must be shut down and rebooted in order to continue the reconfiguration.
Shutdown and reboot now [y,n,q,?] (default: n)

Type y to begin an immediate shutdown. If you type n, the vxinstall program exits without shutting down. If you select this option, shut down and reboot as soon as possible, and do not make any changes to your disk or file system configurations before doing so.

Note: During reboot, you may be asked several times whether you want to continue an operation. Press Return to accept the default answer. If you select an answer other than the default for any of these prompts, the setup may fail.


Summary
You should now be able to:
Identify operating system compatibility and other preinstallation considerations.
Describe the VxVM, VxFS, and VEA software packages, space requirements, and optional feature sets.
Obtain license keys, add licenses by using vxlicinst, and view licenses by using vxlicrep.
Add VxVM and VxFS packages interactively, by using the Installer utility, and manually, by using pkgadd.
Plan an initial setup of VxVM by determining the desired characteristics of a first-time installation.
Configure a first-time installation of VxVM by using the vxinstall program.

Summary
This lesson described guidelines and considerations for planning a first-time installation of VERITAS Foundation Suite. This lesson included procedures for adding license keys, adding the VxVM and VxFS software packages, and running the VxVM installation program.

Next Steps

After you install the Foundation Suite software and bring your boot disk under Volume Manager control, you are ready to install the VERITAS Enterprise Administrator (VEA) graphical user interface to help you manage Volume Manager processes. In the next lesson, you install VEA and explore the other Volume Manager interfaces.

Additional Resources

VERITAS Volume Manager Administrator's Guide
This guide provides detailed information on volume management and system administration using VERITAS Volume Manager.

VERITAS Volume Manager Installation Guide
This guide provides information on installing and initializing VxVM and the VERITAS Enterprise Administrator graphical user interface.

VERITAS File System Installation Guide
This guide provides information on installing VxFS.

VERITAS Volume Manager support Web site: http://support.veritas.com

Lab 2: Installing VERITAS Foundation Suite
In this lab, you install VxVM and VxFS. Lab instructions are in Appendix A. Lab solutions are in Appendix B.




VERITAS Volume Manager Interfaces


Introduction
Overview

This lesson introduces the three interfaces that you can use to manage VERITAS Volume Manager. This lesson describes the VERITAS Enterprise Administrator (VEA) graphical user interface, the command line interface, and the vxdiskadm utility. Procedures for setting up and managing VEA are also covered.

Importance

VERITAS Volume Manager provides three different tools that you can use to manage VxVM objects. Using these tools interchangeably to perform VxVM administrative functions provides flexibility in how you access VxVM.

Outline of Topics

VxVM User Interfaces
Using the VEA Interface
Using the Command Line Interface
Using the vxdiskadm Interface
Installing the VEA Software
Starting the VEA Server and Client
Managing the VEA Server
Customizing VEA Security


Objectives
After completing this lesson, you will be able to:
Describe the three VxVM user interfaces.
Describe the components of the VEA main window.
Access the VxVM CLI commands and manual pages.
Access the vxdiskadm main menu.
Install the VEA software packages.
Start the VEA server and client.
Manage the VEA server by displaying server status, version, task logs, and event logs.
Customize VEA security.

Objectives

After completing this lesson, you will be able to:

Describe the three VxVM user interfaces that you can use to manage disks, volumes, and file systems.
Identify the components of the VEA main window that provide access to and information about Volume Manager tasks.
Identify and access the VxVM CLI commands and manual pages.
Access the vxdiskadm main menu.
Install the VEA software packages.
Start the VEA server and client.
Manage the VEA server by displaying server status, version, task logs, and event logs.
Customize VEA security by creating groups of users who can access VEA.


VxVM User Interfaces


VxVM supports three user interfaces:

VERITAS Enterprise Administrator (VEA): A GUI that provides access through icons, menus, wizards, and dialog boxes


Command Line Interface (CLI): UNIX utilities that you invoke from the command line

Volume Manager Support Operations (vxdiskadm): A menu-driven, text-based interface also invoked from the command line
Note: vxdiskadm only provides access to certain disk and disk group management functions.

VxVM User Interfaces


Volume Manager User Interfaces

Volume Manager supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.

VERITAS Enterprise Administrator (VEA): VERITAS Enterprise Administrator is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to VxVM functionality through visual elements such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations. A single VEA task may perform multiple command-line tasks.

Command Line Interface (CLI): The command line interface consists of UNIX utilities that you invoke from the command line to perform Volume Manager and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.

Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.


VERITAS Enterprise Administrator


Java-based graphical interface that consists of server and client


VEA server runs on the same machine as VxVM.
VEA client runs on any machine that supports the Java 1.1 Runtime Environment, including Solaris, HP-UX, AIX, Linux, or Windows.


Using the VEA Interface


VERITAS Enterprise Administrator

The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Volume Manager and other VERITAS products. You can use the VxVM features of VEA to administer disks, volumes, and file systems on local or remote machines. Starting with VxVM 3.5, VEA replaces the earlier graphical user interface, Volume Manager Storage Administrator (VMSA).

VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.1 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.

Some VxVM features of VEA include:

Remote Administration: You can perform VxVM administration remotely or locally. The VEA client runs on UNIX or Windows machines.
Security: VEA can only be run by users with appropriate privileges, and you can restrict access to a specific set of users.
Multiple Host Support: The VEA client can provide simultaneous access to multiple host machines. You can use a single VEA client session to connect to multiple hosts, view the objects on each host, and perform administrative tasks on each host. Each host machine must be running the VEA server.
Multiple Views of Objects: VEA provides multiple ways to view Volume Manager objects, including a hierarchical tree layout, a list format, and a variety of graphical views.

VEA: Main Window


(Screen shot: the VEA main window, with the menu bar, toolbar, object tree, grid, Console/Task History window, and status area labeled.)

The VEA Main Window

VEA provides a variety of ways to view and manipulate Volume Manager objects. When you launch VEA, the VEA main window is displayed. The VEA main window consists of the following components:

A hierarchical object tree, located in the left pane of the main window, provides a dynamic display of VxVM objects and other objects on the system.
A grid, located in the right pane of the main window, lists objects that belong to the group selected in the object tree.
A menu bar and toolbar provide access to tasks.
A Console/Task History window, located near the bottom of the main window, displays a list of alerts and a list of recently performed tasks. You can view each list by clicking the appropriate tab.
A status area, located at the bottom of the main window, identifies the currently selected server host and displays an alert icon when there is a problem with the task being performed. Click the icon to display the VEA Error console, which contains a list of messages related to the error.


Other Views in VEA

In addition to the main window, you can also view VxVM objects in other ways:

The Disk View window provides a graphical view of objects in a disk group.
The Volume View window provides a close-up graphical view of a volume.
The Volume to Disk Mapping window provides a tabular view of the relationship between volumes and their underlying disks.
The Object Properties window provides information about a selected object.


VEA: Accessing Tasks


Three ways to access tasks:
Menu bar
Toolbar
Context menu


By default, VEA Wizards guide you through configuration tasks. If you prefer, you can disable Wizard mode through the Preferences window (Tools>Preferences).


Accessing Tasks Through VEA

Specific procedures for using VEA to perform specific tasks are covered in detail throughout this training. While this course describes one method for using VEA to access a task, you can access most VEA tasks in three ways:

Through the menu bar
Through the toolbar
Through context-sensitive popup menus

Accessing Tasks Through the Menu Bar

You can launch most tasks from the menu bar in the main window. The Actions menu is context sensitive and changes its options based on the type of object that you select in the tree or grid.

Accessing Tasks Through the Toolbar

You can launch some tasks from the toolbar in the main window by clicking one of the icons. The icons are disabled when the related actions are not appropriate.

(Toolbar icons: Connect, Disconnect, New Volume, New Dynamic Disk Group, Search)


Accessing Tasks Through Popup Menus

You can access context-sensitive popup menus by right-clicking an object. Popup menus provide access to tasks or options that are appropriate for the selected object.

Setting VEA Preferences

You can customize general VEA environment attributes through the Preferences window. For example, through the Preferences window, you can enable or disable Wizard mode functionality. By default, many VxVM tasks are performed through wizards, which guide you step by step through configuration tasks. If you prefer to perform these tasks using a single configuration window (similar to the old Volume Manager Storage Administrator (VMSA) interface), then you can disable Wizard mode.

To disable Wizard mode:
1 In the VEA main window, select Tools>Preferences.
2 In the Volume Manager General tab page, remove the check mark from the Enable wizard mode check box.
3 Click OK.


VEA: Viewing Tasks


The Task History window contains a list of tasks performed in the current session.

To view underlying command lines, right-click a task and select Properties.


Viewing Tasks Through VEA

VEA logs all task requests. You can view a history of VEA tasks, including tasks in progress, in two ways: by displaying the Task History window or by viewing the command log file.

Viewing Commands Through the Task History Window

The Task History window displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.

Displaying the Task History window: To display the Task History window, click the Tasks tab at the bottom of the main window.
Aborting a Task: To abort a task, right-click the task and select Abort Task.
Pausing a Task: To temporarily stop a task, right-click the task and select Pause Task.
Resuming a Task: To restart a paused task, right-click the task and select Resume Task.
Sorting Tasks: To sort the tasks by a particular column or to reverse the sort order, click a column heading.
Reducing Task Priority: To slow down an I/O-intensive task in progress and reduce the impact on system performance, right-click the task and select Throttle Task. In the Throttle Task dialog box, indicate how much you want to slow down the task. You can select Throttle All Tasks to slow all VxVM tasks.

Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.
Viewing CLI Commands: To view the command lines executed for a task, right-click the task and select Properties. The Properties window is displayed for the task. The CLI commands issued are displayed in the Commands executed field.


VEA: Viewing Commands


Command Log File

Located in /var/vx/isis/command.log
Displays a history of tasks performed in the current session and in previous sessions

Example command log file entry:

Description: Create Volume
Date: Thu May 9 15:53:49 2002
Command: /usr/sbin/vxassist -g datadg -b make data2vol 122880s layout=striped stripeunit=128 ncolumn=2 comment="" alloc=
Output:
Exit Code: 0

Viewing Commands Through the Command Log File

The command log file contains a history of VEA tasks performed in current and previous sessions. Each entry includes a description of the task and properties such as the date, the command issued, the output, and the exit code. For failed tasks, the Output field includes relevant error messages. By default, the command log is located in /var/vx/isis/command.log on the server. This file is created after the first execution of a task in VEA.

To display commands as they are executed, for example for use in scripting, you can open a separate window and use the tail command:
# tail -f /var/vx/isis/command.log


Example Command Log File Entries

The following are example command log file entries, representing successful and failed commands:
Description: Deport Diskgroup
Date: Sat Apr 20 13:31:19 2002
Command: /usr/sbin/vxdg deport datadg
Output:
Exit Code: 0

Description: Create Volume
Date: Thu May 9 15:53:49 2002
Command: /usr/sbin/vxassist -g datadg -b make data2vol 122880s layout=striped stripeunit=128 ncolumn=2 comment="" alloc=
Output:
Exit Code: 0

Description: Volume Relayout
Date: Thu May 9 15:54:49 2002
Command: /usr/sbin/vxassist -t taskid_103 -g datadg relayout datavol layout=raid5 ncol=3 stripeunit=32
Output: vxvm:vxassist: ERROR: Cannot allocate space for 40960 block volume
        vxvm:vxassist: ERROR: Relayout operation aborted. (7)
Exit Code: 7

Description: Import Diskgroup
Date: Sat Apr 20 13:31:35 2002
Command: /usr/sbin/vxdg -n datadg import 1018930848.1063.epdaix01
Output: vxvm:vxdg: ERROR: Disk group 1018930848.1063.epdaix01: import failed: Disk for disk group not found
Exit Code: 20


VEA: Viewing Help Information


To access VEA Help, select Help>Contents.


Displaying VEA Help Information

VEA contains an extensive database of Help information that is accessible from the menu bar. To access VEA Help information, select Help>Contents. The Help window is displayed. In the Help window, you can view help information in three ways:

Click a topic in the Contents tab.
Select a topic in the alphabetical index listing on the Index tab.
Search for a specific topic by using the Search tab.


Command Line Interface


You can administer CLI commands from the UNIX shell prompt. Commands can be executed individually or combined into scripts. Commands are located in:
/etc/vx/bin
/usr/sbin
/usr/lib/vxvm/bin

Examples of CLI commands include:

vxassist   Creates and manages volumes
vxprint    Lists VxVM configuration records
vxdg       Creates and manages disk groups
vxdisk     Administers disks under VxVM control


Using the Command Line Interface


Command Line Interface

The Volume Manager command line interface (CLI) provides commands used for administering VxVM from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts. The VxVM command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the VxVM commands require an understanding of Volume Manager concepts, and most require superuser or other appropriate access privileges.

Many of the CLI commands can be found in the following directories:

/etc/vx/bin (a link to /usr/lib/vxvm/bin)
/usr/sbin
/usr/lib/vxvm/bin

Add these directories to your PATH environment variable to access the commands.

Examples of CLI Commands

Some high-level CLI commands include:

vxassist: This command creates and manages volumes in a single step.
vxprint: This command lists information from the VxVM configuration records.
vxdg: This command operates on disk groups. vxdg creates new disk groups and administers existing disk groups.

vxdisk: This command administers disks under VxVM control. vxdisk defines special disk devices, initializes information stored on disks, and performs additional special operations.
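As a quick orientation, the following is a minimal illustrative session. The PATH setting matches the directories listed above; the disk group name datadg is an example, not a requirement:

# PATH=$PATH:/etc/vx/bin:/usr/sbin:/usr/lib/vxvm/bin
# export PATH
# vxdisk list              (list disks known to VxVM and their status)
# vxdg list                (list imported disk groups)
# vxprint -g datadg -ht    (display configuration records for one disk group)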


Accessing Manual Pages


CLI commands are detailed in manual pages. Manual pages are installed by default in /opt/VRTS/man. Add this directory to the MANPATH environment variable.

To access a manual page:
# man command_name

Example:
# man vxassist


Accessing Manual Pages for CLI Commands

Detailed descriptions of VxVM commands, the options for each utility, and details on how to use them are located in the Volume Manager manual pages. When you install the VxVM software packages, the manual pages are installed from the VRTSvmman package. By default, VxVM manual pages are installed in /opt/VRTS/man. Add this path to the MANPATH variable in order to view the manual pages.

Most commands can be found in /opt/VRTS/man/man1m. Additional commands can be found in:

/opt/VRTS/man/man1
/opt/VRTS/man/man4
/opt/VRTS/man/man7

To access the manual page for a specific command, you use the standard UNIX man command followed by the name of the command that you want to display:
man command_name

For example:
# man vxassist
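If the manual pages are not found, verify that /opt/VRTS/man is in your MANPATH. A Bourne shell sketch; the -M option is standard Solaris man usage for pointing at a specific directory:

# MANPATH=$MANPATH:/opt/VRTS/man
# export MANPATH
# man -M /opt/VRTS/man vxassist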


The following table lists VxVM CLI commands and their manual page locations. Many of these commands are covered in detail throughout this training.
/opt/VRTS/man/man1:
vxlicinst.1, vxlicrep.1, vxlictest.1

/opt/VRTS/man/man1m:
vea.1m, vradmin.1m, vrnotify.1m, vrport.1m, vxalerttype.1m, vxapslice.1m, vxassist.1m, vxbootsetup.1m, vxclustadm.1m, vxconfigd.1m, vxdarestore.1m, vxdco.1m, vxdctl.1m, vxddladm.1m, vxdg.1m, vxdisk.1m, vxdiskadd.1m, vxdiskadm.1m, vxdiskconfig.1m, vxdisksetup.1m, vxdiskunsetup.1m, vxdmpadm.1m, vxedit.1m, vxencap.1m, vxevac.1m, vxibc.1m, vxinfo.1m, vxinstall.1m, vxintro.1m, vxiod.1m, vxlicense.1m, vxmake.1m, vxmemstat.1m, vxmend.1m, vxmirror.1m, vxnotify.1m, vxobjecttype.1m, vxplex.1m, vxprint.1m, vxr5check.1m, vxreattach.1m, vxrecover.1m, vxregctl.1m, vxrelayout.1m, vxrelocd.1m, vxresize.1m, vxrlink.1m, vxrootmir.1m, vxrvg.1m, vxsd.1m, vxserial.1m, vxsparecheck.1m, vxspcshow.1m, vxstat.1m, vxsvc.1m, vxtask.1m, vxtrace.1m, vxunreloc.1m, vxunroot.1m, vxvol.1m

/opt/VRTS/man/man4:
vol_pattern.4, vxmake.4

/opt/VRTS/man/man7:
vxconfig.7, vxdmp.7, vxinfo.7, vxio.7, vxiod.7, vxtrace.7

Note: The vxintro(1m) manual page contains introductory information relating to Volume Manager tasks.


The vxdiskadm Interface



# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk

 1     Add or initialize one or more disks
 2     Encapsulate one or more disks
 3     Remove a disk
 4     Remove a disk for replacement
 5     Replace a failed or removed disk
 ...
 list  List disk information

 ?     Display help about menu
 ??    Display help about the menuing system
 q     Exit from menus

Select an operation to perform:


Using the vxdiskadm Interface


The vxdiskadm Interface

The vxdiskadm command is a CLI command that you can use to launch the Volume Manager Support Operations menu interface. You can use the Volume Manager Support Operations interface, commonly referred to as vxdiskadm, to perform common disk management tasks. The vxdiskadm interface is restricted to managing disk objects and does not provide a means of handling all other VxVM objects. Each option in the vxdiskadm interface invokes a sequence of CLI commands. The vxdiskadm interface presents disk management tasks to the user as a series of questions, or prompts.

Starting vxdiskadm

To start vxdiskadm, you type vxdiskadm at the command line to display the main menu. The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers are provided for many questions, so you can easily select common answers. The menu also contains options for listing disk information, displaying help information, and quitting the menu interface.


# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk

 1     Add or initialize one or more disks
 2     Encapsulate one or more disks
 3     Remove a disk
 4     Remove a disk for replacement
 5     Replace a failed or removed disk
 6     Mirror volumes on a disk
 7     Move volumes from a disk
 8     Enable access to (import) a disk group
 9     Remove access to (deport) a disk group
 10    Enable (online) a disk device
 11    Disable (offline) a disk device
 12    Mark a disk as a spare for a disk group
 13    Turn off the spare flag on a disk
 14    Unrelocate subdisks back to a disk
 15    Exclude a disk from hot-relocation use
 16    Make a disk available for hot-relocation use
 17    Prevent multipathing/Suppress devices from VxVM's view
 18    Allow multipathing/Unsuppress devices from VxVM's view
 19    List currently suppressed/non-multipathed devices
 20    Change the disk naming scheme
 21    Get the newly connected/zoned disks in VxVM view
 list  List disk information

 ?     Display help about menu
 ??    Display help about the menuing system
 q     Exit from menus

Select an operation to perform:

Displaying Help Information

You can enter a single question mark (?) at any time to display help in using the menu. The output is a list of operations and a definition of each. You can enter two question marks (??) to list inputs that can be used at any prompt.

Exiting a Process or the Interface

By typing q at the main menu level, you exit the vxdiskadm interface. By typing q at other levels of the interface, you return to the main menu. Use this option if you need to restart a process.

The tasks listed in the main menu are covered throughout this training. See the vxdiskadm(1m) manual page for more details on how to use vxdiskadm.


Installing VEA
Install the VEA server on a UNIX machine running VxVM.
Install the VEA client on any machine that supports the Java 1.1 Runtime Environment.
Installation administration script: VRTSobadmin

Server packages: VRTSob, VRTSvmpro, VRTSfspro
Client packages: VRTSobgui (Solaris), win32/VRTSobgui.msi (Windows)

(Diagram: the VEA server runs on the UNIX machine that runs VxVM; VEA clients can run on that machine or on other UNIX or Windows machines.)

Installing the VEA Software


The VEA Software Packages

VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.1 Runtime Environment.

VEA Server Packages

The VEA server packages on your VERITAS CD-ROM include VRTSob, VRTSvmpro, and VRTSfspro. You must run the VEA server on a Solaris machine running VxVM 3.5.

VEA Client Packages

The VEA client packages include VRTSobgui (for Solaris) and win32/VRTSobgui.msi (for Windows). The VEA client has the following minimum system requirements:

Solaris: SPARCstation 5 with 64 MB of memory
Windows: 100 MHz Pentium with 32 MB of memory

VEA Installation Administration Script

The VEA installation administration script, VRTSobadmin, is located in the volume_manager/scripts directory on the VERITAS CD-ROM. Use this script when installing or upgrading VEA.

Installing VEA
Before installing VEA:
Install VRTSvlic and VRTSvxvm.
Remove earlier VxVM GUIs (VRTSvmsa).

To install the VEA server and client:
1. Log on as superuser.
2. Invoke the administration script and add the VEA packages:
# pkgadd -a /cdrom/CD_name/product_name/scripts/VRTSobadmin -d /cdrom/CD_name/product_name/pkgs VRTSob VRTSobgui VRTSvmpro VRTSfspro
3. Add the VEA startup scripts directory to the PATH environment variable:
# PATH=$PATH:/opt/VRTSob/bin
# export PATH
4. Install VRTSobgui on other client machines.

Installing the VEA Server and Client on Solaris

If you install VxVM by using the Installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the Installer, then you can add the VEA packages separately.

VEA Installation Prerequisites

Before you install the VEA packages, the VERITAS licensing (VRTSvlic) and VERITAS Volume Manager (VRTSvxvm) packages must already be installed. You can use the pkginfo command to verify that these packages are installed.

Upgrading from VMSA to VEA

VEA is not compatible with earlier VxVM GUIs, such as VMSA. You cannot run VMSA with VxVM 3.5 and later. If you currently have VMSA installed on your machine, close any VMSA clients, kill the VMSA server, and remove the VRTSvmsa package before you add the VEA packages. To verify whether or not VMSA packages already exist, use the command:
# pkginfo -l | grep vmsa

If VRTSvmsa packages exist, they are listed as VRTSvmsa, VRTSvmsa.2, VRTSvmsa.3, and so on. To remove a package:
# pkgrm VRTSvmsa


Installing the VEA Packages

To install the VEA server and client on a Solaris machine:

1 Log on as superuser.
2 Mount the software CD-ROM, invoke the installation administration script, and add the VEA packages by using the pkgadd command. Use the -a option to specify the installation administration script and the -d option to specify the location of the software packages. For example, to add the VEA server and client packages:
# pkgadd -a /cdrom/CD_name/product_name/scripts/VRTSobadmin -d /cdrom/CD_name/product_name/pkgs VRTSob VRTSobgui VRTSvmpro VRTSfspro
3 VEA startup scripts are installed in /opt/VRTSob/bin. To simplify running the client and server administration commands, add the directory containing the VEA startup scripts to the PATH environment variable in your .profile file:
# PATH=$PATH:/opt/VRTSob/bin
# export PATH
4 If you plan to run the VEA client from a UNIX machine other than the machine to be administered, install the VRTSobgui package on the machine where the client will run.

Installing the VEA Client on Windows

The VEA client runs on Windows NT, Windows 2000, Windows XP, Windows ME, Windows 98, and Windows 95 machines. If you plan to run VEA from a Windows machine, install the optional Windows package after you have installed the VEA server on a UNIX machine.

Before Installing the VEA Client on Windows

Only one VEA package can be installed on a Windows machine at any given time. Before you install VRTSobgui.msi on a Windows machine, you must uninstall any existing VRTSvmsa packages and remove the old setup.exe file from that machine.

Note: If you are installing the VEA client on Windows NT 4.0, you must upgrade Windows Installer to version 2.0 and use Service Pack 6.

To install the VEA client on a Windows machine:

1 Log on as administrator.
2 Insert the CD-ROM containing the VEA software.
3 Using Windows Explorer or a command window, navigate to the pkgs\win32 directory and execute the VRTSobgui.msi program.
4 Follow the instructions presented by the installation wizard to complete the installation.
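On the Solaris side, you can confirm afterward that the VEA packages were added by querying them with the standard pkginfo command:

# pkginfo VRTSob VRTSobgui VRTSvmpro VRTSfspro
# pkginfo -l VRTSob

The -l option displays the long listing for a package, including its version and installation date.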

Starting the VEA Server


Once installed, the VEA server starts up automatically at system startup. To start the VEA server manually:

1. Log on as superuser.
2. Start the VEA server by invoking the server program:

# /opt/VRTSob/bin/vxsvc
When the VEA server is started:

/var/vx/isis/vxisis.lock contains the server process ID and ensures that only one instance of the VEA server is running.
/var/vx/isis/vxisis.log contains server process log messages.


Starting the VEA Server and Client


Starting the VEA Server

Before you start VEA, you must run vxinstall to create the rootdg disk group containing at least one disk. In order to use VEA, the VEA server must be running on the Solaris machine to be administered, and only one instance of the VEA server should be running at a time. Once installed, the VEA server is started automatically at system startup by the script /etc/rc2.d/S50isisd. The VEA client can provide simultaneous access to multiple host machines; each host machine must be running the VEA server.

Manually Starting the VEA Server

To start the VEA server manually:

1 Log on as superuser.
2 Start the server by invoking the VEA server program:
# /opt/VRTSob/bin/vxsvc
Alternatively, you can invoke the VEA startup script:
# /etc/rc2.d/S50isisd start

When the VEA server is started, the file /var/vx/isis/vxisis.lock is created. This file contains the process ID of the server and ensures that only one instance of the VEA server is running. Server process log messages are recorded in /var/vx/isis/vxisis.log.
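Because the lock file holds the server process ID, a quick check that the server is up can be made with standard Solaris commands (a sketch; the backquotes substitute the PID from the lock file):

# cat /var/vx/isis/vxisis.lock
# ps -p `cat /var/vx/isis/vxisis.lock`

If ps reports the process, the server is running; the vxsvc -m command described later in this lesson provides the same confirmation.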


Starting the VEA Client

On Solaris:
# vea &

On Windows:
Select Start>Programs>VERITAS>VERITAS Enterprise Administrator.

Starting the VEA Client

After installing VxVM and VEA, and starting the VEA server, you can start the VEA client.

1 To start the VEA client on a UNIX system for administering a local or remote system running the VEA server, type:
# vea &
To start the VEA client on Windows for administering a remote system running the VEA server, select Start>Programs>VERITAS>VERITAS Enterprise Administrator.
2 In the Connection dialog box, specify your:
Server host name
User name (The default is root.)
Password
You can mark the Remember password check box to avoid typing the user name and password on subsequent connections from that machine. Do not use this option if the client system is not secure.
3 Click OK. The VEA main window is displayed.

Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.


Connecting Automatically at VEA Client Startup


Right-click a connected host or a host listed under the History node, and select Add to Favorite Hosts.
If the user name and password are saved, reconnection is automatic at VEA client startup.


Connecting Automatically at VEA Client Startup

You can configure VEA to connect to hosts automatically when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client. If the user name and password are saved for a host, the reconnection is automatic. If the authentication information is not saved, you are prompted for the user name and password when you connect. The History node lists all of the hosts that have been connected since the history was last cleared.

Adding a Favorite Host

To add a host to the Favorite Hosts list, right-click the name of a currently connected host or a host listed under History, and select Add to Favorites.


Removing a Favorite Host

To remove a host from the Favorite Hosts list, right-click the host under Favorite Hosts, and select Remove from Favorite Hosts.

Disabling Automatic Connection Temporarily

To temporarily disable automatic connection to a host, right-click the host under Favorite Hosts, and select Reconnect at Startup. A check mark beside the menu entry indicates that the host is reconnected automatically at startup; removing the check mark temporarily disables the automatic connection. Selecting Reconnect at Startup again restores the check mark and reenables automatic connection to the host.


Managing VEA
The VEA server program is: /opt/VRTSob/bin/vxsvc
To confirm that the VEA server is running: # vxsvc -m
To stop the VEA server: # vxsvc -k
To display the VEA version number: # vxsvc -v
To monitor VEA tasks and events, click the Logs node in the VEA object tree.

Managing the VEA Server


The VEA server program is /opt/VRTSob/bin/vxsvc.

Confirming VEA Server Startup

To confirm that the VEA server is running, type:

# vxsvc -m
Current state of server: RUNNING

Stopping the VEA Server

To stop the VEA server, type:

# vxsvc -k

Alternatively, you can kill the server process:

# kill `cat /var/vx/isis/vxisis.lock`

Displaying the VEA Version

To display the VEA version number, type:

# vxsvc -v
3.0.2.255


Monitoring VEA Event and Task Logs

You can monitor VEA server events and tasks from the VEA main window. The Logs node in the VEA object tree provides access to the VEA Event Log and Task Log. When you click a log, the contents of the log are displayed in the grid. You can double-click an event or task to display more detailed information.

You can also view the VEA log file located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.
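To watch server log messages as they arrive, you can use the same tail technique shown earlier for the command log:

# tail -f /var/vx/isis/vxisis.log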


Controlling Access to VEA


Create the group vrtsadm in /etc/group and specify users who have permission to access VEA:

root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
...
sysadmin::14:
nobody::60001:
noaccess::60002:
nogroup::65534:
teleback::100:
vrtsadm::20:root,maria,bill

Customizing VEA Security


Controlling User Access to VEA

Only users with appropriate privileges can run VEA. By default, only root can run VEA. If users other than root need to access VEA, you can set up the optional security feature and specify which users can run VEA. You specify which users have access to VEA after you install the software.

To set up a list of users who have permission to use VEA, add a group named vrtsadm to the group file /etc/group or to the Network Information Name Service group table on the machine to be administered. The vrtsadm group does not exist by default; if it does not exist, only root has access to VEA. If the vrtsadm group exists, it must include the user names of all users, including root, that you want to have access to VEA. root must be included in the vrtsadm group for root to access VEA.

For example, to give users root, maria, and bill access to VEA, you add the following line to the /etc/group file:
vrtsadm::999:root,maria,bill
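As an alternative to editing /etc/group directly, you can make the same change with the standard Solaris group utilities. A sketch; the GID 999 and the user names are the examples from above:

# groupadd -g 999 vrtsadm
# usermod -G vrtsadm maria
# usermod -G vrtsadm bill

Note that usermod -G replaces a user's existing supplementary group list, so include any other groups to which the user already belongs.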


Customizing VEA Security


VEA uses a registry file to store configuration information, attributes, and values: /etc/vx/isis/Registry

Use vxregctl to modify the registry values:
vxregctl /etc/vx/isis/Registry setvalue keyname [attribute...]

For example, to authorize both the vrtsadm and vxadmins groups to access VEA:
# vxregctl /etc/vx/isis/Registry setvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion AccessGroups REG_SZ "vrtsadm;vxadmins"


Modifying Group Access

All VEA configuration information is stored in a registry file, which is located by default at /etc/vx/isis/Registry. The registry file contains VEA configuration settings, values, and other information. You can control some aspects of VEA, such as modifying group access, by modifying the values stored in the registry file.

Note: Normally, the default registry settings are adequate. It is good practice to back up the registry file before making any changes.

To modify, add, or delete registry entries in the registry file, use the vxregctl command:

vxregctl /etc/vx/isis/Registry setvalue keyname [attribute...]

For example, the vrtsadm group is the default group name. You can change the groups that are granted VEA access by changing the string value AccessGroups under the key HKEY_LOCAL_MACHINE/SOFTWARE/VERITAS/VxSvc/CurrentVersion in the registry file. To authorize both vrtsadm and vxadmins, type:
# vxregctl /etc/vx/isis/Registry setvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion AccessGroups REG_SZ "vrtsadm;vxadmins"


You can also authorize individual users, without adding them to a specific group, by setting the value named AccessUsers under the same key, with similar syntax. No users are authorized this way by default, and it is better practice to authorize groups rather than individual users.

When you make a change to the registry file, you can use the vxregctl queryvalue command to verify the value that you set:
vxregctl /etc/vx/isis/Registry queryvalue keyname [attribute...]

For example, to verify the value of the AccessGroups attribute:


# vxregctl /etc/vx/isis/Registry queryvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion AccessGroups
Value of AccessGroups is: vrtsadm;vxadmins

For more information on the vxregctl command, see the vxregctl(1m) manual page.


Summary
You should now be able to:
Describe the three VxVM user interfaces.
Describe the components of the VEA main window.
Access the VxVM CLI commands and manual pages.
Access the vxdiskadm main menu.
Install the VEA software packages.
Start the VEA server and client.
Manage the VEA server by displaying server status, version, task logs, and event logs.
Customize VEA security.

Summary
This lesson introduced the three interfaces that you can use to manage VERITAS Volume Manager: the VERITAS Enterprise Administrator (VEA) graphical user interface, the command line interface, and the vxdiskadm utility. Procedures for setting up and managing VEA were also covered.

Next Steps

You have been introduced to the interfaces used to perform Volume Manager administration tasks. In the next lesson, you begin using Volume Manager by learning how to manage disks.

Additional Resources

VERITAS Volume Manager Administrator's Guide
This guide provides information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.

VERITAS Volume Manager Installation Guide
This guide provides detailed procedures for installing and initializing VERITAS Volume Manager and VERITAS Enterprise Administrator.

VERITAS Volume Manager User's Guide: VERITAS Enterprise Administrator
This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.

VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS Volume Manager.

Lab 3: VxVM Interfaces

In this lab, you set up VEA and explore its interface and options. You also invoke the vxdiskadm menu interface and display information about CLI commands by accessing the VxVM manual pages. Lab instructions are in Appendix A. Lab solutions are in Appendix B.




Managing Disks


Introduction
Overview

In this lesson, you learn how to perform basic disk tasks. This lesson describes device-naming schemes, how to place a disk under Volume Manager control, how to view disk information, and how to add a disk to a disk group. This lesson also covers removing a disk from a disk group, renaming a disk, and moving a disk from one disk group to another.

Importance

Before you can create virtual volumes, you must learn how to configure your physical disks so that VERITAS Volume Manager (VxVM) can manage the disks. By bringing physical disks under Volume Manager control and adding those disks to a disk group, you enable VxVM to use the disk space to create volumes.

Outline of Topics

Naming Disk Devices
VxVM Disk Configuration Stages
Adding a Disk to a Disk Group
Viewing Disk Information
Removing a Disk from a Disk Group
Renaming a Disk
Moving a Disk


Objectives
After completing this lesson, you will be able to:
Describe the features and benefits of the two device-naming schemes: traditional and enclosure-based naming.
Identify the three stages of VxVM disk configuration.
Add a disk to a VxVM disk group.
View disk information and identify disk status.
Evacuate disk data and remove a disk from a disk group.
Change the disk media name for a disk.
Move an empty disk from one disk group to another.

Objectives

After completing this lesson, you will be able to:

Describe the features and benefits of the two device-naming schemes available in VxVM, traditional device naming and enclosure-based naming, and the method for changing between the two naming schemes.
Identify the three stages of VxVM disk configuration: initializing a disk, assigning a disk to a disk group, and assigning disk space to volumes.
Add a disk to a VxVM disk group by using VEA, vxdiskadm, and command line utilities, and identify default disk naming conventions.
View disk information and identify disk status by using VEA, vxdiskadm, and command line utilities, such as vxdisk list.
Evacuate disk data and remove a disk from a disk group by using VEA, vxdiskadm, and command line utilities.
Change the disk media name for a disk by using VEA and command line utilities.
Move an empty disk from one disk group to another by using VEA, vxdiskadm, and command line utilities.


Traditional Device Naming


Traditional device naming in VxVM is:
Operating system-dependent
Based on the controller, target, and disk number

Examples:
Solaris: /dev/[r]dsk/c1t9d0s2
HP-UX:   /dev/[r]dsk/c3t2d0 (no slice)


Naming Disk Devices


Device Naming Schemes

In VxVM, device names can be represented using the traditional operating system-dependent format or using an OS-independent format based on enclosure names.

Traditional Device Naming

Traditionally, device names in VxVM have been represented in the way that the operating system represents them. For example, Solaris and HP-UX both use the format c#t#d# in device naming, which is derived from the controller, target, and disk number. VxVM parses disk names in this format to retrieve connectivity information for disks. Other operating systems have different conventions.
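With traditional naming in effect, VxVM utilities report devices in this format. The following vxdisk list display is illustrative only; the device, disk, and disk group names are examples:

# vxdisk list
DEVICE       TYPE      DISK       GROUP     STATUS
c0t0d0s2     sliced    rootdisk   rootdg    online
c1t9d0s2     sliced    disk01     datadg    online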


Enclosure-Based Naming
Enclosure-based naming in VxVM:
Is operating system-independent
Is based on the logical name of the enclosure in which the disk resides:
  Disk array: Sun SENA A5000
  Default enclosure name: sena0
  Enclosure-based disk names: sena0_1, sena0_2, ...
Can be customized to make enclosure names meaningful:
  Default enclosure names: sena0, sena1
  Location: Engineering Lab
  Customized names: englab0, englab1

Enclosure-Based Naming

With VxVM version 3.2 and later, VxVM provides a new device naming scheme, called enclosure-based naming. With enclosure-based naming, the name of a disk is based on the logical name of the enclosure, or disk array, in which the disk resides. The default logical name of an enclosure is typically based on the vendor ID. For example:

Disk Array        Default Enclosure Name   Default Enclosure-Based Disk Names
Sun SENA A5000    sena0                    sena0_1, sena0_2, sena0_3, ...
Sun StorEdge T3   purple0                  purple0_1, purple0_2, purple0_3, ...
EMC               emc0                     emc0_1, emc0_2, emc0_3, ...

You can customize logical enclosure names to provide meaningful names, for example, based on the location of an enclosure in a building or lab. For example, you can rename three T3 disk arrays in an engineering lab as follows:

Default Enclosure Name   Location          Customized Enclosure Name
purple0                  Engineering Lab   englab0
purple1                  Engineering Lab   englab1
purple2                  Engineering Lab   englab2
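As a sketch of how such a rename can be performed from the command line, the following assumes the vxdmpadm setattr usage documented for VxVM 3.5; verify the syntax against the vxdmpadm(1m) manual page on your system:

# vxdmpadm setattr enclosure purple0 name=englab0

The new name persists across reboots, and the enclosure-based disk names change accordingly (for example, purple0_1 becomes englab0_1).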


Benefits of Enclosure-Based Naming


Easier fault isolation
Device-name independence
Improved management of SANs, clusters, and DMP

(Diagram: a host with controllers c1 and c2 connected through Fibre Channel switches to disk enclosures englab0, englab1, and englab2.)

Benefits of Enclosure-Based Naming

Benefits of enclosure-based naming include:

Easier fault isolation: By using enclosure information in establishing data placement policies, VxVM can more effectively place data and metadata to ensure data availability. You can configure redundant copies of your data on separate enclosures to safeguard against failure of one or more enclosures.
Device-name independence: By using enclosure-based naming, VxVM is independent of arbitrary device names used by third-party drivers.
Improved SAN management: By using enclosure-based disk names, VxVM can create better location identification information about disks in large disk farms and SAN environments. In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch. In this type of configuration, enclosure-based naming can be used to refer to each disk within an enclosure, which enables you to quickly determine where a disk is physically located in a large SAN configuration.
Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.

Selecting a Naming Scheme

When you set up VxVM using vxinstall, you are prompted to specify whether you want to use the traditional or enclosure-based naming scheme.

If you choose to display devices in the traditional format, the operating system-specific naming scheme (c#t#d#) is used for all disk devices except for fabric mode disks. Fabric disks, disks connected to a host controller through a Fibre Channel hub or fabric switch, are always displayed in the enclosure-based naming format.

If you select enclosure-based naming, vxinstall detects the devices connected to your system and displays the devices in three categories: Enclosures, Disks (formerly known as JBOD disks), and Others. The naming convention used is based on these categories:

Enclosures: Recognized RAID disk arrays are named by default with a manufacturer-specific name in the format enclosurename_#.

Disks: Recognized JBOD disk arrays are classified in the DISKS category and are named with the prefix DISK_.

Others: Disks that do not return a path-independent identifier to VxVM are categorized as OTHERS and are named in the c#t#d# format. Fabric disks in this category are named with the prefix fabric_.

Note: Disk devices controlled by Sun's multipathing driver MPxIO are always in fabric mode (irrespective of hardware configuration) and are therefore named in the enclosure name format.
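Under enclosure-based naming, these categories are visible directly in disk listings. The output below is an illustrative sketch only; the device, disk, and group names are hypothetical:

# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
sena0_0      sliced    datadg01     datadg       online
sena0_1      sliced    -            -            online
c3t4d0s2     sliced    -            -            error

Here sena0_0 and sena0_1 are in a recognized enclosure, while the last disk did not return a path-independent identifier and therefore appears in the traditional format.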

Administering Enclosure-Based Naming

Enclosure names are:

Persistent: Logical enclosure names are persistent across reboots. Disk names within enclosures are persistent as long as the relative position of the disk inside the enclosure remains unchanged.

Customizable: Logical enclosure names are customizable. You can provide meaningful names that are, for example, based on their location in a building or lab site. You can rename enclosures during the vxinstall process or later by using command line utilities.

Used by VxVM utilities: With enclosure-based naming, VxVM utilities such as vxinstall, vxdiskadm, and vxdisk display disk device names in terms of the enclosures in which they are located. Volume management utilities such as vxassist are also enclosure-aware. When you create volumes and allocate disk space to volumes, you can take advantage of VxVM's enclosure awareness to specify data placement policies. Enclosure awareness is also used in administering multipathed disks, and internally, the VxVM configuration daemon vxconfigd uses enclosure information to determine metadata placement policies. The hot relocation feature of VxVM uses enclosure information to perform proximity calculations for devices that have enclosure-based names.

Changing the Disk-Naming Scheme

You can change the disk-naming scheme at any time by using the vxdiskadm menu interface. To change the disk-naming scheme, select menu item 20, Change the disk naming scheme, from the vxdiskadm main menu. When you select this option, output similar to the following is displayed:
Change the disk naming scheme
Menu: VolumeManager/Disk/NamingScheme

Use this screen to change the disk naming scheme (from the
c#t#d# format to the enclosure based format and vice versa).

NOTE: This operation will result in vxconfigd being stopped
and restarted.

Volume Manager is currently using the enclosure based format
to name disks on the system.

Do you want to change the naming scheme ? [y,n,q,?] (default: n)

Enter y to change the naming scheme. The vxconfigd daemon is restarted to bring the new disk naming scheme into effect.

VxVM Disk Configuration Stages


Placing a Disk Under Volume Manager Control

In order to use the space of a physical disk to build VxVM volumes, you must place the disk under Volume Manager control. After installing VxVM, you run vxinstall to place at least one disk into the rootdg disk group. You can also use vxinstall to encapsulate your boot disk and place it under VxVM control. You should run the vxinstall program only once on a system. You place other disks under Volume Manager control using any of the VxVM interfaces.

Before Configuring a Disk for Use by VxVM

Before a disk can be placed under Volume Manager control, the disk media must be formatted outside of VxVM using the standard UNIX format command. SCSI disks are usually preformatted. The format command typically is needed only if the disk format becomes severely damaged. Once a disk is formatted, the disk can be initialized for use by Volume Manager. In other words, if the disks are not detected using format, then VxVM cannot detect the disks, either. (A quick way to check this is shown after the stage list below.)

Stages of Disk Configuration

A disk goes through the following stages when it is configured for use by VxVM:
1. An uninitialized disk is initialized by VxVM.
2. An initialized disk is assigned to a VxVM disk group.
3. The disk space of a disk in a disk group is assigned to VxVM volumes.
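Returning to the formatting prerequisite mentioned above, a quick way to confirm that both the operating system and VxVM can see a new disk before you initialize it is the following sketch (the redirection simply makes format print its disk list and exit):

# format < /dev/null
# vxdctl enable
# vxdisk list

The vxdctl enable command tells the VxVM configuration daemon to rescan for newly attached devices; a disk that is detected but not yet initialized then appears in the vxdisk list output with a status of error.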

Stage One: Initialize the Disk

A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, partitions for the public and private regions are created, and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed.

An initialized disk is placed into the VxVM free disk pool. The VxVM free disk pool contains disks that have been initialized but that have not yet been assigned to a disk group. These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group.

Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. Encapsulating disks is covered in a later lesson.

Stage Two: Assign the Disk to a Disk Group

When an initialized disk is placed into a disk group, a disk media name is assigned to the disk. Space in the public region is made available for assignment to volumes, and the private region of the disk is updated with the disk media name, disk group name, and disk group ID.

When you add a disk to a disk group, it becomes a Volume Manager disk. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes. The free space pool in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.

Note: A disk can also be added to a disk group as a spare disk (a disk that is available for relocation) or a reserved disk (a disk that is not used for any purpose except when explicitly named in a command line command). These options are covered in the Introduction to Recovery lesson.

Stage Three: Assign Disk Space to Volumes

When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.

Adding a Disk to a Disk Group


Before You Add a Disk

When you add a disk to a disk group, you are prompted to make a series of choices in which you specify how you want the disks to be set up. Therefore, before you add a disk to a disk group, you should be able to answer these questions:

Which disk do you want to add?
To which disk group do you want to add the disk?
Is the disk group new or does the disk group already exist?
Do you want to add a single disk or multiple disks?
Do you want to add all disks associated with a target?
Do you want to add all disks on a specific controller?
Does the disk contain data that you want to preserve, or does the disk contain data that is not important to you?
Do you want to add the disk to the free disk pool and add it to a disk group at a later time?

Adding Disks

Adding a disk to a disk group makes the disk space available for use in creating VxVM volumes. You can add a single disk or multiple disks to a disk group. You cannot add a disk to more than one disk group. To add a disk to a disk group, you select an uninitialized disk or a free disk. If the disk is uninitialized, you must initialize the disk before you can add it to a disk group.

Disk Naming

When you add a disk to a disk group, the disk is assigned a disk media name. The disk media name is a logical name used for VxVM administrative purposes. The disk media name must be unique within the disk group. You can assign a meaningful name or use the default name assigned by VxVM.

Default Disk Naming

The default disk media names depend on the interface used to add them to a disk group:

If you add a disk to a disk group using VEA or vxdiskadm, default disk media names for disks (other than disks in rootdg) are in the format diskgroup##, where diskgroup is the name of the disk group and ## is a two-digit number starting with either 00 (in VEA) or 01 (in vxdiskadm).

If you add a disk to a disk group by using a CLI command, such as vxdg adddisk, default disk media names are the same as the device tag, in the form c#t#d#.

Disks added to the rootdg disk group use a different default disk naming convention when added with vxdiskadm. Default disk media names for disks added to rootdg with vxdiskadm are disk01, disk02, and so on.

Notes on Disk Naming

You can change disk media names after the disks have been added to disk groups. However, if you must change a disk media name, it is recommended that you make the change before using the disk for any volumes. Renaming a disk does not rename the subdisks on the disk, which may be confusing.

You should assign logical media names, rather than use the device names, to facilitate transparent logical replacement of failed disks. Assuming that you have a sensible disk group naming strategy, the VEA or vxdiskadm default disk naming scheme is a reasonable policy to adopt.

Adding a Disk: Methods

You can use any of the following methods to add a disk to a disk group. These methods are detailed in the sections that follow.

VEA: Select the disk that you want to add. Select Actions>Add Disk to Dynamic Disk Group. Specify the disk group to which you want to add the disk. Encapsulate or initialize the disk.

vxdiskadm: Select option 1, Add or initialize one or more disks.

CLI: Use vxdisksetup to configure a disk for use by VxVM by creating the private and public regions on a specified disk, and vxdg adddisk to add initialized disks to a disk group.

Adding a Disk: VEA

To place a disk under Volume Manager control by using VEA, you select a disk and add the disk to a disk group of your choice.

Note: You cannot add a disk to the free disk pool with VEA.

1. Select a free or uninitialized disk.
2. In the Actions menu, select Add Disk to Dynamic Disk Group.
   Note: In VEA, a disk group is also called a dynamic disk group. These terms are synonymous.
3. The Add Disk to Dynamic Disk Group wizard is displayed. Click Next at the welcome page.
4. Specify the disk group to which you want to add the disk:
   Dynamic disk group name: In the Dynamic disk group name list, you can select an existing disk group.
   New dynamic disk group: Click the New dynamic disk group button to add the disk to a new disk group.
   Select the disk to add: The disk that you want to add should be displayed in the Selected disks field. A list of available disks is displayed in the Available disks field. You can move disks between the two fields by using the Add and Remove buttons.
   Disk Name(s): By default, Volume Manager assigns a disk media name that is based on the disk group name of a disk. You can assign a different name to the disk by typing a name in the Disk name(s) field. If you are adding more than one disk, place a space between each name in the Disk name(s) field.
   Comment: Add comments about the disk in the Comment field. Click Next to continue.
5. If the disk is uninitialized, you are prompted to initialize or encapsulate the disk. If the disk is from the free disk pool, you are not prompted to initialize or encapsulate. Select Initialize and click Next.
6. Confirm your actions to complete the wizard and the disk initialization process.

When the disk is placed under VxVM control, the Type property changes to Dynamic, and the Status property changes to Imported.

Adding a Disk: vxdiskadm

To add a disk using the vxdiskadm interface:

1. In the vxdiskadm main menu, select option 1, Add or initialize one or more disks.
2. You are prompted to select the disk device that you want to add. Type list to display a list of available devices, or type the name of the device to add. You can specify the device name using either the c#t#d# format or the enclosure-based format.

   Select disk devices to add: [<pattern-list>,all,list,q,?] c1t2d0

   You can also specify:
   All disks on a controller: For example, to add all disks on controller 0, at the prompt you type c0.
   All disks on a specific controller and target: For example, to add all disks on controller 0 target 3, at the prompt you type c0t3.
   All disks on a specific enclosure: For example, to add all disks on the enclosure named emc0, at the prompt you type emc0_.
   All disks detected by the system: If any disks are already initialized, they are skipped. For example, to add all disks detected by the system, at the prompt you type all.
   Any combination of controllers, targets, disks, or enclosures, separated by spaces: For example, to add all disks on controller 0, all disks on controller 1 target 3, and the disk emc0_2, at the prompt you type c0 c1t3 emc0_2.

3. After verifying the disk that you want to add, you identify the disk group to which you want to add the disk. You can specify any existing disk group or type the name of a new disk group.

   Which disk group [<group>,none,list,q,?] (default: rootdg) datadg

   Notes: If the disk group is new, you must verify that you want to create a new disk group. You cannot add a disk to more than one disk group; if you try to add a disk that already belongs to another disk group, the disk is ignored. If you want to add a disk to Volume Manager control for future use, type none instead of selecting a disk group name. The disk is initialized or encapsulated and then placed in the free disk pool. The disk cannot be used until it is added to a disk group.

4. Through a series of prompts, you can specify VxVM disk names and hot relocation properties, or you can accept the default responses at each prompt.

5. Next, vxdiskadm prompts you to encapsulate or initialize the disk.

   Encapsulate this device? [y,n,q,?] (default: y) n
   Instead of encapsulating, initialize? [y,n,q,?] (default: y) y

Adding a Disk: CLI

While the VEA and vxdiskadm interfaces provide step-by-step, easy-to-use instructions, the benefit of CLI commands is that you can use them within scripts to perform disk and volume management tasks.

The vxdisksetup Command

The vxdisksetup command configures a disk for use by Volume Manager by creating the private and public region partitions on a disk. These two partitions have tags that identify them as appropriate to VxVM. This command is located in /usr/lib/vxvm/bin:
/usr/lib/vxvm/bin/vxdisksetup -i device_tag [attributes]

The device_tag defines the controller, target, and SCSI logical unit number of the disk to be set up and takes the form c#t#d#. The -i option writes a disk header to the disk, making the disk directly usable, for example, as a new disk in a disk group. If you are using enclosure-based naming, you specify the disk to be configured in the enclosure format. The name must reference a valid disk with partition devices under the /dev/vx/rdmp directory. For example, to configure the disk c1t0d0, you type:
# vxdisksetup -i c1t0d0

Several attributes are available that affect the layout strategy.

Caution: Attributes should be used with caution, as they can render a disk unusable by VxVM.
config: Sets up kernel logs or configuration databases on the disk. config is the converse of the noconfig attribute and is the default. config is ignored unless the -i option is specified. All lengths and offsets are rounded up to cylinder boundaries due to restrictions on the layout of partitions.

noconfig: Prevents setting up kernel logs or configuration databases on the disk. The size of the default private region partition is set to 80 blocks, which is the minimum allowed private region size. noconfig is ignored unless the -i option is specified.

privlen=length: Specifies the length of the private region partition of the disk. The default size of the private region is 2048 blocks (sectors); the maximum size is 524288 blocks (sectors). With 512-byte blocks, the default size is 1048576 bytes (1 MB), and the maximum size is 268435456 bytes (256 MB).

privoffset=[-]offset: Indicates the sector offset of the private region on the disk. The default offset for the private area is at the beginning of the disk. A negative offset relocates the private region at an offset relative to the end of the disk.

publen=length: Specifies the length of the public region partition of the disk. The default is the size of the disk minus the private area on the disk.

puboffset=offset: Sets the offset on the disk where the public region partition starts. The default is the end of the private region partition unless the private region partition is moved from the beginning of the disk, in which case the public region offset defaults to follow the private region partition.
For more information on vxdisksetup attributes, see the vxdisksetup(1m) manual page.
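For example, the following sketch (the disk name and size are illustrative) initializes a disk with a larger-than-default private region by using the privlen attribute from the table above:

# /usr/lib/vxvm/bin/vxdisksetup -i c1t0d0 privlen=4096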

The vxdg adddisk Command

The vxdg utility performs many basic administrative operations on disk groups, including creating, importing, and deporting disk groups, in addition to adding a disk to a disk group. After configuring a disk for VxVM, you use the vxdg adddisk command to add the disk to a disk group.
vxdg -g diskgroup adddisk disk_name=device_tag

The disk must not already be part of a disk group. Use the -g diskgroup option to specify the disk group to which you will add the disk. The disk_name specifies the disk media name of the VxVM disk. The device_tag specifies the name of the device in the form c#t#d# or using an enclosure-based name.

Note: If you do not specify a disk media name, the device name is used to identify the disk to VxVM. Using a device name as a disk media name is not recommended.

When you add a disk to a disk group, the disk group configuration is copied onto the disk, and the disk is stamped with the system host ID. For example, to add the disk c2t0d0 to the disk group newdg and assign a disk media name of newdg02, you type:
# vxdg -g newdg adddisk newdg02=c2t0d0
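Putting the two commands together, a typical end-to-end sequence looks like the following sketch, reusing the illustrative names from the examples above:

# /usr/lib/vxvm/bin/vxdisksetup -i c2t0d0
# vxdg -g newdg adddisk newdg02=c2t0d0
# vxdisk list

After the final command, c2t0d0 should be listed with the disk media name newdg02, the group newdg, and a status of online.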

Viewing Disk Information


Keeping Track of Your Disks

By viewing disk information, you can:
Determine if a disk has been initialized and placed under Volume Manager control.
Determine if a disk has been added to a disk group.
Verify the changes that you make to disks.
Keep track of the status and configuration of your disks.

Viewing Disk Information: Methods

You can use any of the following methods to view disk information. These methods are detailed in the sections that follow.

VEA: View disk categories and status in the object tree and grid of the VEA main window. View properties of a specific disk by selecting Properties from the Actions menu.

vxdiskadm: Select the list option, List disk information, from the main menu.

CLI: Use vxdisk list to list detailed or summary information for all disks or for a specific disk, and prtvtoc to display the volume table of contents (VTOC) for a disk.

Displaying Disk Information: VEA

In VEA, disks are represented under the Disks node in the object tree, in the Disk View window, and in the grid for several object types, including controllers, disk groups, enclosures, and volumes. In the grid of the main window, under the Disks tab, you can identify many disk properties, including disk name, disk group name, size of disk, amount of unused space, and disk status. In particular, the status of a disk can be:

Not Setup: The disk is not under VxVM control. The disk may be in use as a raw device by an application.
Free: The disk is in the free disk pool; it is initialized by VxVM but is not in a disk group. You cannot place a disk in this state using VEA, but VEA recognizes disks that have been initialized through other interfaces.
Imported: The disk is in an imported disk group.
Deported: The disk is in a deported disk group.
Disconnected: The disk contains subdisks that are not available because of hardware failure. This status applies to disk media records for which the hardware has been unavailable and has not been replaced within VxVM.
External: The disk is in use by a foreign manager, such as Logical Volume Manager.

Viewing Disk Details

When you select a disk in the object tree, many details of the disk layout are displayed in the grid. You can access these details by clicking the associated tab:

Volumes: This page displays the volumes that use this disk.
Disk Regions: This page displays the disk regions of the disk.
Controllers: This page displays the controllers to which this disk is connected.
Paths: This page shows the dynamic multipaths available to this disk.
Disk View: This page displays the layout of any subdisks created on this disk media, and details of usage. The Disk View window has the same view of all related disks with more options available. To launch the Disk View window, select an object (such as a disk group or volume), then select Actions>Disk View.
Alerts: This page displays any problems with a drive.

Under each tab (except for the Disk View), general information about the disk is also displayed.

Viewing Disk Properties

In VEA, you can also view disk properties in the Disk Properties window. To open the Disk Properties window, right-click a disk and select Properties. The Disk Properties window includes the capacity of the disk and the amount of unallocated space. You can select the units for convenient display in the unit of your choice.

Displaying Disk Information: CLI

Displaying Basic Disk Information

You use the vxdisk list command to display basic information about all disks attached to the system. The vxdisk list command displays:
Device names for all recognized disks
Type of disk, that is, how a disk is placed under VxVM control
Disk names
Disk group names associated with each disk
Status of each disk

For example:

# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0       sliced    rootdisk     rootdg       online
c0t1d0       sliced    datadg01     datadg       online
c0t2d0       sliced    -            -            online
c1t0d0       sliced    -            -            online
c1t1d0       sliced    -            -            error
c1t2d0       sliced    -            -            error
sena0_0      sliced    -            -            error
sena0_1      sliced    -            -            error
sena0_2      sliced    -            -            error

In the output:
A status of online in addition to entries in the DISK and GROUP columns indicates that the disk has been initialized or encapsulated, assigned a disk media name, and added to a disk group. The disk is under Volume Manager control and is available for creating volumes.
A status of online without entries in the DISK and GROUP columns indicates that the drive has been initialized or encapsulated but is not currently assigned to a disk group. The disk is in the free disk pool.
A status of error indicates that the disk has neither been initialized nor encapsulated by VxVM. The disk is not under VxVM control.

Note: To show all disk groups in the output, use vxdisk -o alldgs list. Disk groups that are not imported are displayed in parentheses.
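For instance, if a disk belongs to a disk group that is currently deported, the listing might look like the following sketch (device and group names are illustrative):

# vxdisk -o alldgs list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0       sliced    rootdisk     rootdg       online
c1t1d0       sliced    -            (mktdg)      online

The parentheses around mktdg indicate that the disk group is known to VxVM but not currently imported on this host.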

Displaying Detailed Disk Information

To display detailed information about a disk, you use the vxdisk list command with the name of the disk:
# vxdisk list disk_name

For example:
# vxdisk list datadg01
Device:    c1t10d0s2
devicetag: c1t10d0
type:      sliced
hostid:    train12
disk:      name=datadg01 id=1000753057.1114.train12
group:     name=datadg id=1000753077.1117.train12
. . .

In the output:
Device is the full UNIX device name of the disk.
devicetag is the name used by VxVM to reference the physical disk.
type is how a disk was placed under VM control. sliced is the default type.
hostid is the name of the system that currently manages the disk group to which the disk belongs; if blank, no host is currently controlling this group.
disk is the VM disk media name and internal ID.
group is the disk group name and internal ID.
The complete output from displaying detailed information about a disk is as follows:
# vxdisk list datadg01
Device:    c1t10d0s2
devicetag: c1t10d0
type:      sliced
hostid:    train12
disk:      name=datadg01 id=1000753057.1114.train12
group:     name=datadg id=1000753077.1117.train12
flags:     online ready private autoconfig autoimport imported
pubpaths:  block=/dev/vx/dmp/c1t10d0s4 char=/dev/vx/rdmp/c1t10d0s4
privpaths: block=/dev/vx/dmp/c1t10d0s3 char=/dev/vx/rdmp/c1t10d0s3
version:   2.2
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=4 offset=0 len=17671311
private:   slice=3 offset=1 len=6925
update:    time=1019676334 seqno=0.5
headers:   0 248
configs:   count=1 len=5083
logs:      count=1 len=770
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-005100[004852]: copy=01 offset=000231 enabled
 log      priv 005101-005870[000770]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:  1
c1t10d0s2  state=enabled

Displaying Disk Summary Information

To view a summary of information for all disks, you use the -s option with the vxdisk list command. The output lists device names with the type, flags, disk ID, disk group name, disk group ID, and host ID.
# vxdisk -s list
Disk:   c0t0d0s2
type:   sliced
flags:  online ready private autoconfig autoimport imported
diskid: 1000748424.1036.train12
dgname: rootdg
dgid:   1000748418.1025.train12
hostid: train12
info:

Disk:   c1t10d0s2
type:   sliced
flags:  online ready private autoconfig autoimport imported
diskid: 1000753057.1114.train12
dgname: datadg
dgid:   1000753077.1117.train12
hostid: train12
info:   privoffset=1

Descriptions of Flags

online, ready: The specified disk is online and is ready to use.
private: The disk has a private region where the configuration database and kernel log are defined and enabled/disabled.
autoconfig: The specified disk is part of a disk group that is autoconfigured.
autoimport: The specified disk is part of a disk group that can be imported at boot time.
imported: The specified disk is part of a disk group that is currently imported. When the disk group is deported, this field is empty.
shared: The specified disk is part of a cluster shareable disk group.

Displaying the Volume Table of Contents

To display disk configuration information from the volume table of contents for a disk, you use the standard UNIX prtvtoc command. You can use the prtvtoc command to determine if a disk has been initialized for VxVM or if it retains its original disk formatting. When you initialize or encapsulate a disk, VxVM sets the partition tags for the public and private regions:

Tag 14 is always used for the public region of the disk.
Tag 15 is always used for the private region of the disk.

In the following examples, compare the output of prtvtoc for an initialized disk that is under Volume Manager control and a disk that is not under Volume Manager control.

Disk Under Volume Manager Control: prtvtoc

# prtvtoc /dev/rdsk/c1t10d0s2
* /dev/rdsk/c1t10d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*            0      3591      3590
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector   Mount Directory
       2      5    01          0  17682084  17682083
       3     15    01       3591      7182     10772
       4     14    01      10773  17671311  17682083
Disk Not Under Volume Manager Control: prtvtoc

# prtvtoc /dev/rdsk/c1t10d0s2
* /dev/rdsk/c1t10d0s2 partition map
. . .
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*            0  17682084  17682083
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector   Mount Directory
       2      5    01          0  17682084  17682083

Displaying Disk Information: vxdiskadm

If you are using the vxdiskadm utility, you can display disk information by using the list option from the main menu. The list option displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk. To display disk information:

1. From the vxdiskadm main menu, select the list option.

   Select an operation to perform: list

2. At the prompt, press Return or type all to display a list of all disks.

   Enter disk device or "all" [<address>,all,q,?] (default: all) all

3. A list of all disks is displayed. To display disk information about a specific disk, including the device name, the type of disk, and information about the public and private partitions, you can type the address of a specific device, for example, c0t0d0:

   DEVICE       DISK         GROUP        STATUS
   c0t0d0       rootdisk     rootdg       online
   c1t8d0       -            -            error
   c1t9d0       -            -            error
   c1t10d0      datadg01     datadg       online
   c1t11d0      -            -            online
   Device to list in detail [<address>,none,q,?] (default: none) c0t0d0
Removing a Disk from a Disk Group


Removing Disks

If a disk is no longer needed in a disk group, you can remove the disk. After you remove a disk from a disk group, the disk cannot be accessed. When removing a disk from a disk group, you have two options:

Move the disk to the free disk pool. With this option, the disk remains under Volume Manager control.
Return the disk to an uninitialized state. With this option, the disk is no longer under Volume Manager control.

To remove the last disk in a disk group, you must destroy the disk group. From the command line, you must manually destroy the disk group to free the last disk. In VEA, when you remove the last disk in a disk group, the disk group is automatically destroyed. The last disk in the rootdg disk group can never be removed.

Before You Remove a Disk

Before removing a disk, make sure that the disk contains no data, the data is no longer needed, or the data has been moved to other disks. Removing a disk that contains volumes can result in lost data or lost data redundancy.

Evacuating a Disk

Evacuating a disk moves the contents of the volumes on a disk to another disk. The contents of a disk can be evacuated only to disks in the same disk group that have sufficient free space. You must evacuate a disk if you plan to remove the disk or if you want to use the disk elsewhere.

Evacuating a Disk: VEA

To evacuate a disk:
1. Select the disk that contains the objects and data to be moved to another disk.
2. Select Actions>Evacuate Disk.
3. Complete the Evacuate Disk dialog box.
   Auto Assign destination disks: VxVM selects the destination disks to contain the content of the disk to be evacuated.
   Manually assign destination disks: To manually select a destination disk, highlight the disk in the left field and click Add to move the disk to the right field. Disks in the right field are the destination of evacuated data.

4. Click OK to complete the task.

Evacuating a Disk: vxdiskadm

To evacuate a disk using vxdiskadm:
1. In the vxdiskadm main menu, select option 7, Move volumes from a disk.
2. When prompted, specify the name of the disk that contains the data that you want to move:

   Enter disk name [<disk>,list,q,?] datadg01

3. When prompted, specify the disks onto which you want to move the data. If you do not indicate specific disks, then any available space in the disk group is used.
4. Confirm the operation to complete the evacuation.

Evacuating a Disk: CLI

To evacuate a disk from the command line, use the vxevac command:
vxevac -g diskgroup from_diskname to_diskname

For example, to evacuate the data from datadg02 to datadg03:


# vxevac -g datadg datadg02 datadg03

Removing a Disk: Methods

You can use any of the following methods to remove a disk. These methods are detailed in the sections that follow.

VEA: Select the disk that you want to remove. Select Actions>Remove Disk from Dynamic Disk Group. Specify where to send the disk upon removal.

vxdiskadm: Select option 3, Remove a disk.

CLI: Use vxdg rmdisk to remove specified disks and place them in the free disk pool, or vxdiskunsetup to deconfigure a disk by returning it to an uninitialized state.

Removing a Disk: VEA

To remove a disk using VEA:
1. In the main window, select the disk to be removed.
2. In the Actions menu, select Remove Disk from Dynamic Disk Group.
3. In the Remove Disk dialog box:
   Specify the disk group that contains the disk to be removed.
   Specify the disk to be removed. The disk to be removed should be displayed in the Selected disks field. Only empty disks are displayed in the list of available disks as candidates for removal.
   Note: If you select all disks for removal from the disk group, the disk group is automatically destroyed.
4. To complete the task, click OK.

Removing a Disk: vxdiskadm

To remove a disk from a disk group using vxdiskadm:
1. In the vxdiskadm main menu, select option 3, Remove a disk.
2. At the prompt, enter the disk media name of the disk to be removed. You can type list to display a list of available disks.

   Enter disk name [<disk>,list,q,?] datadg01

3. At the verification prompt, press Return to continue:

   Requested operation is to remove disk datadg01 from group datadg.
   Continue with operation? [y,n,q,?] (default: y)

4. The disk is removed from the disk group. You can remove another disk or return to the main menu.

   Removal of disk datadg01 is complete.
   Remove another disk? [y,n,q,?] (default: n)

When you remove a disk using the vxdiskadm interface, the disk is returned to the free disk pool. The vxdiskadm interface does not have an option to return a disk to an uninitialized state.

Removing a Disk: CLI

The vxdg rmdisk Command

To remove a disk from a disk group from the command line, you use the command vxdg rmdisk. This command removes the disk from a disk group and places it in the free disk pool.
vxdg [-g diskgroup] rmdisk disk_name

By default, the vxdg command removes disks from the rootdg disk group. You can specify a disk group other than rootdg by using the -g diskgroup option. The disk_name is the disk media name of the disk to be removed. For example, to remove the disk newdg02 from the disk group newdg, you type:
# vxdg -g newdg rmdisk newdg02

You can verify the removal by using the vxdisk list command to display disk information. A removed disk has a status of online but no longer has a disk media name or disk group assignment.

The vxdiskunsetup Command

Once the disk has been removed from its disk group, you can remove it from Volume Manager control completely by using the vxdiskunsetup command. This command reverses the configuration of a disk by removing the public and private regions that were created by the vxdisksetup command. The vxdiskunsetup command does not operate on disks that are active members of a disk group. This command is located in /usr/lib/vxvm/bin.
/usr/lib/vxvm/bin/vxdiskunsetup [-C] device_tag

The device_tag defines the controller, target, and SCSI logical unit number of the disk to be deconfigured and takes the form c#t#d#. If you are using enclosure-based naming, you specify the disk to be deconfigured in the enclosure format. This command does not usually operate on disks that appear to be imported by some other host, for example, a host that shares access to the disk. You can use the -C option to force deconfiguration of the disk, removing host locks that may be detected. For example, to deconfigure the disk c1t0d0, you type:
# vxdiskunsetup c1t0d0

You can verify the deconfiguration by using the vxdisk list command to display disk information. A deconfigured disk has a status of error.
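The complete path from disk group member back to an uninitialized disk therefore looks like the following sketch, reusing the illustrative newdg names from earlier in this lesson:

# vxdg -g newdg rmdisk newdg02
# /usr/lib/vxvm/bin/vxdiskunsetup c2t0d0
# vxdisk list

After the rmdisk step, the disk still shows a status of online but with no disk media name or disk group; after vxdiskunsetup, its status changes to error.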

Renaming a Disk
Changing the Disk Media Name

VxVM creates a unique disk media name for a disk when you add a disk to a disk group. Sometimes you may need to change a disk name to reflect changes of ownership or use of the disk. You can change the disk media name assigned to a disk by using the VEA interface or a CLI command. The vxdiskadm utility does not have an option for renaming a disk. Renaming a disk does not change the physical disk device name. The new disk name must be unique within the disk group.

Before You Rename a Disk

Before you rename a disk, you should carefully consider the change. VxVM names subdisks based on the disks on which they are located. A disk named datadg01 contains subdisks that are named datadg01-01, datadg01-02, and so on. Renaming a disk does not automatically rename its subdisks.

Renaming a Disk: VEA

To rename a disk using VEA:
1. Select the disk to be renamed.
2. From the Actions menu, select Rename Disk.
3. The Rename Disk dialog box is displayed. The name of the selected disk is displayed in the Disk name field.
4. In the New name field, type the new disk media name.
5. To complete the task, click OK.

Renaming a Disk: CLI

From the command line, you can rename a disk by using the vxedit rename command:
# vxedit -g diskgroup rename old_diskname new_diskname

For example, to rename datadg01 to datadg03, you type:


# vxedit -g datadg rename datadg01 datadg03

Note: You can also use the vxedit rename command to change the name of volumes, plexes, and subdisks. You cannot use this command to change the name of a disk group. Renaming a disk group involves deporting the disk group and importing it under a new name, using vxdg deport and vxdg import. This procedure is covered in the Managing Disk Groups lesson.
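Because renaming a disk does not automatically rename its subdisks, you can use additional vxedit rename commands to bring the subdisk names back in line with the new disk media name. This is a sketch with hypothetical subdisk names that follow the default diskname-## convention:

# vxedit -g datadg rename datadg01-01 datadg03-01
# vxedit -g datadg rename datadg01-02 datadg03-02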

Moving a Disk
Moving an Empty Disk from One Disk Group to Another

To reuse an empty VxVM disk that is assigned to one disk group by moving it to another disk group, you:
1. Remove the VxVM disk from its current disk group and assign it to the free disk pool or to an uninitialized state.
2. Add the disk to another disk group.

Note: Only empty disks can be moved between disk groups.

Moving a Disk: VEA

1. Select the disk that you want to remove.
2. Select Actions>Remove Disk from Dynamic Disk Group, and complete the dialog box.
3. Select the disk that you removed.
4. Select Actions>Add Disk to Dynamic Disk Group, and complete the wizard.

Moving a Disk: vxdiskadm

From the main menu:
1. Select option 3, Remove a disk, and respond to the prompts as appropriate.
2. Select option 1, Add or initialize a disk, and respond to the prompts as appropriate.

Moving a Disk: CLI

Use the vxdg rmdisk command followed by the vxdg adddisk command. For example, to move the physical disk c0t3d0, which has a disk media name of datadg04, from disk group datadg to disk group mktdg, you type:
# vxdg -g datadg rmdisk datadg04
# vxdg -g mktdg adddisk mktdg02=c0t3d0

In this lesson, you learned how to perform basic disk tasks. This lesson described device-naming schemes, how to place a disk under Volume Manager control, how to view disk information, and how to add a disk to a disk group. This lesson also covered removing a disk from a disk group, renaming a disk, and moving a disk from one disk group to another.

Next Steps
In the next lesson, you learn how to manage disk groups.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: Provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
- VERITAS Volume Manager Installation Guide: Provides detailed procedures for installing and initializing VERITAS Volume Manager and VERITAS Enterprise Administrator.
- VERITAS Volume Manager User's Guide (VERITAS Enterprise Administrator): Describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.


Lab 4: Managing Disks

Goal
In this lab, you use the VxVM interfaces to view the status of disks, initialize disks, move disks to the free disk pool, and move disks into and out of a disk group.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Lesson 5: Managing Disk Groups


Introduction

Overview
In this lesson, you learn how to perform tasks associated with the management of disk groups. This lesson describes procedures for creating, deporting, importing, destroying, and upgrading a disk group.

Importance
A disk group is an organizational structure that enables VxVM to perform disk management tasks. Managing disk groups is important in effectively managing your virtual storage environment.

Outline of Topics
- Purposes of Disk Groups
- Creating a Disk Group
- Creating Spare Disks for a Disk Group
- Deporting a Disk Group
- Importing a Disk Group
- Moving Disk Groups Between Systems
- Renaming a Disk Group
- Destroying a Disk Group
- Viewing Disk Group Information
- Upgrading a Disk Group



Objectives

After completing this lesson, you will be able to:
- Describe how disk groups assist in disk management, identify the characteristics of VxVM disk groups, and identify the key differences of the rootdg disk group.
- Create a disk group by using VEA, vxdiskadm, and command line utilities.
- Designate a disk for use as a hot relocation spare by setting the spare flag on a disk by using VEA, vxdiskadm, and command line utilities.
- Disable access to a disk group by deporting the disk group using VEA, vxdiskadm, and command line utilities.
- Reenable access to a deported disk group by importing the disk group using VEA, vxdiskadm, and command line utilities.
- Move a disk group from one system to another by combining the deport and import tasks using VEA, vxdiskadm, and command line utilities.
- Rename a disk group by deporting and importing the disk group with a new name using VEA and command line utilities.
- Destroy a disk group to free all disks in the disk group by using VEA and command line utilities.
- View information about a disk group by displaying disk group properties in VEA and by using the vxdisk and vxdg command line utilities.
- Upgrade the VxVM disk group version number to the current disk group version by using VEA, or to a specific disk group version from the command line.


Purposes of Disk Groups

What Is a Disk Group?
A disk group is a collection of physical disks, volumes, plexes, and subdisks that are used for a common purpose. A disk group is created when you place at least one disk in the disk group. When you add a disk to a disk group, a disk group entry is added to the private region header of that disk. Because a disk can have only one disk group entry in its private region header, one disk group does not know about other disk groups, and therefore disk groups cannot share resources, such as disk drives, plexes, and volumes. A volume with a plex can belong to only one disk group, and the subdisks and plexes of a volume must be stored in the same disk group. You can never have an empty disk group, because you cannot remove all disks from a disk group without destroying the disk group.

Why Are Disk Groups Needed?
Disk groups assist disk management in several ways:
- Disk groups enable the grouping of disks into logical collections for a particular set of users or applications.
- Disk groups enable a set of disks to be easily moved from one host machine to another.
- Disk groups enable high availability. Disk drives can be shared by two or more hosts, but accessed by only one host at a time. If one host crashes, the other host can take over its disk groups, and therefore its disks.


Disk Management

When you add a disk to a disk group, VxVM assigns the disk media name to the disk, maps this name to the disk access name, and records the host name. This information is written to the private region of the disk.
- Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
- Disk access name: A disk access name represents the UNIX path to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.

The information stored in the private region of a disk enables a disk group to have a logical configuration that is separate from the physical configuration. Despite changes to the hardware configuration, the logical configuration can be maintained without change. Once disks are placed under Volume Manager control, storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup_name/volume, replace physical locations, such as /dev/[r]dsk/c0t4d2s5. Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access records and disk media names.


Disk Management: Example

Original configuration          Rebuilt configuration
engdg01 -> c1t1d0               engdg06 -> c1t1d0
engdg02 -> c1t2d0               engdg02 -> c1t2d0
engdg03 -> c1t3d0               engdg01 -> c1t3d0
engdg04 -> c1t4d0               engdg04 -> c1t4d0
engdg05 -> c1t5d0               engdg03 -> c1t5d0
                                engdg05 -> c1t6d0

Suppose that you have five drives attached to a controller.

Reconfiguration Without VxVM
Without VxVM, if you rebuild the system to add and reorder the drives, the system tries to mount the partitions on c1t1d0 when it comes up. These partitions are no longer the partitions from the previous reboot. References to physical partitions using special device files within application configurations, such as file systems in /etc/vfstab or database configurations, are no longer valid and need to be changed.

Reconfiguration with VxVM
With VxVM, when a system is reconfigured, mounting is based on the logical device named engdg01 and not on the physical location. After the reconfiguration, VxVM mounts the device named engdg01, which now happens to be at the physical location c1t3d0.
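For illustration, an /etc/vfstab entry for a VxVM volume references the logical device path rather than a physical partition, so the entry remains valid across this kind of reconfiguration. The volume and mount point names in this sketch are hypothetical:

/dev/vx/dsk/engdg/vol01  /dev/vx/rdsk/engdg/vol01  /eng  vxfs  2  yes  -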



The rootdg Disk Group

The rootdg disk group is a special disk group that is created when you install VxVM during the vxinstall process. VxVM requires that the rootdg disk group exist and that it contain at least one disk. It is recommended that at least two disks be in the rootdg disk group so that the VxVM configuration database can be maintained on at least two disks. If you want your boot disk to be bootable under VxVM, then the boot disk must be in the rootdg disk group.

The rootdg disk group follows different conventions than all other disk groups. Some key differences between the rootdg disk group and all other disk groups include:
- The rootdg disk group has a different convention for default disk media names when adding a disk with vxdiskadm. When you add a disk to the rootdg disk group, the default disk media names are disk01, disk02, disk03, and so on. For all other disk groups, the default disk media names are diskgroup01, diskgroup02, diskgroup03, and so on. For example, if you add two disks to the disk group datadg, the default disk media names are datadg01 and datadg02.
- The rootdg disk group cannot be destroyed and must exist on every system, because it is an essential part of the VxVM boot process.

The rootdg disk group is covered in detail in a later lesson.



Example: Disk Groups and High Availability

Consider a high availability environment in which Computer A and Computer B each have their own rootdg on their own private SCSI bus. The two hosts are also on a shared SCSI bus. On the shared bus, each host has a disk group, and each disk group has a set of VxVM disks and volumes. There are additional disks on the shared SCSI bus that have not been added to a disk group. If Computer A fails, then Computer B, which is on the same SCSI bus as disk group acctdg, can take ownership or control of the disk group and all of its components.



Creating a Disk Group


A disk must be placed into a disk group before it can be used by VxVM. To create a disk group, you must add a disk to the disk group, because a disk group cannot exist without at least one associated disk. When you create a new disk group, you specify a name for the disk group and at least one disk to add to the disk group. The disk group name must be unique for the host machine.

Creating a Disk Group: Methods
You can use any of the following methods to create a disk group. These methods are detailed in the sections that follow.
- VEA: Select the Disk Groups node. Select Actions>New Dynamic Disk Group. Specify a disk group name and add a disk to the disk group.
- vxdiskadm: Select option 1, Add or initialize one or more disks, from the main menu. When prompted for the name of the disk group, specify a new disk group name.
- CLI: To create a disk group using a disk that has already been initialized: vxdg init diskgroup disk_name=device_name



Creating a Disk Group: VEA

To create a disk group:
1 In the object tree of the main window, select the Disk Groups node, or a free or uninitialized disk.
2 In the Actions menu, select New Dynamic Disk Group.
3 Complete the New Dynamic Disk Group wizard:
- Group Name: Type the name of the disk group to be created.
- Create cluster group: To create a shared disk group, mark the Create cluster group check box. This option is applicable only in a cluster environment.
- Available/Selected disks: Select at least one disk to be placed in the new disk group. To select a disk for the new disk group, highlight a disk in the list of Available disks and click Add. The disk is moved into the Selected disks field and is used in creating the disk group.
- Disk name(s): To specify a disk media name for the disk that you are placing in the disk group, type a name in the Disk name(s) field. If no disk name is specified, VxVM assigns a default name. If you are adding multiple disks and specify only one disk name, VxVM appends numbers to the disk name so that each disk name is unique within the disk group.
- Comment: To apply a comment to disks that are placed in the disk group, type the information in the Comment field.
4 Click Next and confirm your decisions to complete the task.



Creating a Disk Group: vxdiskadm

To create a new disk group, you follow the procedure for adding a disk. When prompted to specify the disk group to which you want to add the disk, you specify a new disk group.
1 From the vxdiskadm main menu, select option 1, Add or initialize one or more disks.
2 Specify the device name of the disk to be placed under Volume Manager control. Type list to see a list of available disks.
Select disk devices to add: [<pattern-list>,all,list,q,?] c1t2d0
When asked to continue the operation, enter y or press Return.
3 Specify the disk group to which the disk should be added. To add the disk to a new disk group, you type a name for the new disk group.
Which disk group [<group>,none,list,q,?] (default: rootdg) data2dg
When prompted, confirm that you want to create the new disk group.
4 You can accept the default disk name or enter a different disk name:
Use a default disk name for the disk? [y,n,q,?] (default: y)
5 You are prompted to make decisions about using the disk for hot relocation purposes. By default, a disk that you add is not excluded from hot relocation, but will not be specifically designated as a spare. You can modify these properties or press Return to accept the default choices.


6 When prompted, confirm that you want to create a new disk group. The disk is examined to determine if it has been initialized for Volume Manager control. If the disk has not been initialized, you are prompted to initialize or encapsulate the disk.
7 The new disk group is created, and the disk is added to the disk group. You can initialize additional disks (y) or return to the vxdiskadm main menu (n):
Add or initialize other disks? [y,n,q,?] (default: n)



Creating a Disk Group: CLI

To create a disk group from the command line, use the vxdg init command:
# vxdg init diskgroup disk_name=device_tag

You specify the name of the new disk group, a disk media name for the disk, and the device tag of the physical disk. If you do not specify a disk media name, then the device tag is used as the disk media name. For example, to create a disk group named newdg on device c1t1d0 and specify a disk media name of newdg01, you type:
# vxdg init newdg newdg01=c1t1d0

Before you create a disk group, the disk device must already have been initialized by using vxdisksetup, and the disk must not already belong to a disk group.
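If the disk has not yet been initialized, a minimal sketch of that preparation step (using the device from the example above) is:

# /etc/vx/bin/vxdisksetup -i c1t1d0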
To verify that the disk group was created, you can use vxdisk list:

# vxdisk list
DEVICE       TYPE      DISK      GROUP    STATUS
c0t0d0s2     sliced    rootdisk  rootdg   online
c1t0d0s2     sliced    disk01    rootdg   online
c1t1d0s2     sliced    newdg01   newdg    online

In the example, the new disk group newdg was created using the disk media name newdg01 on the device c1t1d0s2.



Creating Spare Disks for a Disk Group

Designating a Disk As a Hot-Relocation Spare
When you add a disk to a disk group, you can specify that the disk be added to the pool of spare disks available to the hot relocation feature of VxVM. If an I/O failure occurs, hot relocation automatically relocates any redundant (mirrored or RAID-5) subdisks to spare disks and restores the affected Volume Manager objects and data. The system administrator is notified of the failure and relocation details through email. After successful relocation, you can replace the failed disk.

Any disk in the same disk group can use the spare disk. To ensure that sufficient space is available for relocation, try to provide at least one hot-relocation spare disk per disk group. While designated as a spare, a disk is not used in creating a new volume unless you specifically name the disk in a command line operation.

Note: Hot relocation is covered in detail in the Introduction to Recovery lesson.

Setting Up a Disk As a Spare: VEA
To designate a disk as a hot-relocation spare:
1 Initialize a disk and add it to a disk group.
2 In the main window, highlight the disk to be designated as a spare.
3 Select Actions>Set Disk Usage.
4 In the Set Disk Usage window, mark the Spare check box.
5 Click OK.


To remove the disk from the pool of hot-relocation spares, open the Set Disk Usage window and clear the Mark disk for hot relocation or hot spare check box.

Setting Up a Disk As a Spare: vxdiskadm
By using the vxdiskadm interface, you can set up a disk as a spare disk when you add the disk to a disk group.
1 Select menu item 1, Add or initialize one or more disks, from the main menu.
2 When vxdiskadm prompts whether this disk should become a hot-relocation spare, enter y to set up the disk as a spare disk:
Add disk as a spare disk for rootdg? [y,n,q,?] (default: n) y
Note: You can also use vxdiskadm option 12, Mark a disk as a spare for a disk group, to set up a disk as a spare disk. This option and other hot relocation options are covered in the Introduction to Recovery lesson.

Setting Up a Disk As a Spare: CLI
To set up a disk as a spare from the command line, you use the vxedit command to set the spare flag on for a disk. If the spare flag is set for a disk, then the disk is designated for use by the hot relocation facility. A disk media record with the spare flag set is used only for hot relocation.

vxedit -g diskgroup set spare=on|off disk_media_name
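For example, a minimal sketch that designates the disk datadg02 as a hot-relocation spare (the disk group and disk media names are illustrative):

# vxedit -g datadg set spare=on datadg02

Setting spare=off with the same command returns the disk to ordinary use.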



Deporting a Disk Group

Making a Disk Group Unavailable
A deported disk group is a disk group over which management control has been surrendered. This means that the objects within the disk group cannot be accessed, the disks cannot be removed, and the disk group configuration cannot be changed. Deporting a disk group makes the disk group and its volumes unavailable. To resume management of the disk group, it must be imported.

Specifying a New Host
When you deport a disk group by using VEA or CLI commands, you have the option to specify a new host to which the disk group is imported at reboot. If you know the name of the host to which the disk group will be imported, then you should specify the new host during the operation. If you do not specify the new host, then the disks could accidentally be added to another disk group, resulting in data loss. You cannot specify a new host by using the vxdiskadm utility.

Deporting and Renaming
When you deport a disk group by using VEA or CLI commands, you also have the option to rename the disk group when you deport it. You cannot rename a disk group when deporting by using the vxdiskadm utility.



Before You Deport a Disk Group
A disk group cannot be deported if any volumes in that disk group are in use. Before you deport a disk group, you must unmount file systems and stop any volumes in the disk group. The rootdg disk group cannot be deported.

Deporting a Disk Group: Methods
You can use any of the following methods to deport a disk group. These methods are detailed in the sections that follow.
- VEA: Select the disk group to be deported. Select Actions>Deport Dynamic Disk Group. Specify the name of the disk group to be deported.
- vxdiskadm: Select option 9, Remove access to (deport) a disk group, from the main menu.
- CLI: vxdg deport deports a disk group.



Deporting a Disk Group: VEA

To deport a disk group:
1 In the main window, select the disk group to be deported.
2 From the Actions menu, select Deport Dynamic Disk Group.
3 Complete the Deport Dynamic Disk Group dialog box:
- Group name: Verify the name of the disk group to be deported.
- New name: To change the name of the disk group when you deport it, type a new disk group name in the New name field.
- New Host: To specify a host machine to import the deported disk group at reboot, type the host ID in the New Host field. If you are importing the disk group to another system, then you should specify the name of the new host.
4 Click OK to complete the task.
Disks that were in the disk group now have a state of Deported. If the disk group was deported to another host, the disk state is Locked.



Deporting a Disk Group: vxdiskadm

To deport a disk group by using the vxdiskadm utility, you select option 9, Remove access to (deport) a disk group, from the vxdiskadm main menu. You are prompted for the name of a disk group and asked whether the disks should be disabled.
To deport a disk group:
1 From the vxdiskadm main menu, select option 9, Remove access to (deport) a disk group.
2 When prompted, specify the name of the disk group to be deported:
Enter name of disk group [<group>,list,q,?] (default: list) newdg
3 Next, you are asked whether you want to disable, or offline, the disks in the disk group. You should offline the disks if you plan to remove a disk from a system without rebooting or to physically move a disk to reconnect it to another system.
Disable (offline) the indicated disks? [y,n,q,?] (default: n) n
Note: If you offline the disks, you must manually online the disks before you import the disk group. To online a disk, use vxdiskadm option 10, Enable (online) a disk device.
4 When prompted, confirm the operation. After the disk group is successfully deported, a message is displayed. You can disable another disk group or return to the vxdiskadm main menu.
Removal of disk group newdg was successful.
Disable another disk group? [y,n,q,?] (default: n)


Deporting a Disk Group: CLI

Before deporting a disk group, unmount all file systems used within the disk group that is to be deported:
# umount /filesystem1
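The volumes in the disk group must also be stopped. A minimal sketch, using the newdg example that follows:

# vxvol -g newdg stopall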

Deport a disk group: To deport a disk group from the command line, you use the vxdg deport command followed by the name of the disk group:
vxdg deport diskgroup

For example, to deport the disk group newdg:


# vxdg deport newdg

Deport and rename: To deport a disk group and rename the disk group at the same time, you specify the new disk group name and the old disk group name in the vxdg deport command:
vxdg -n new_name deport old_name

For example, to deport the disk group newdg and rename it as newerdg:
# vxdg -n newerdg deport newdg

Deport to a new host: To deport a disk group and specify the new host that imports the disk group, you use the -h hostname option:
vxdg -h hostname deport diskgroup

For example, to deport the disk group newdg and specify a new host of server1:
# vxdg -h server1 deport newdg



Importing a Disk Group

Importing a Deported Disk Group
Importing a disk group reenables access to a deported disk group by bringing the disk group under VxVM control on a new system. To move a disk group from one system to another, the disk group must be deported from the original system and then imported to the new system. Only deported disk groups can be imported.

Importing and Renaming
A deported disk group cannot be imported if another disk group with the same name has been created since the disk group was deported. You can import and rename a disk group at the same time.

Clearing Host Locks
When a disk group is created, the system writes a lock on all disks in the disk group. The lock is actually a value in the hostname field within the disk group header. The lock ensures that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If a system crashes, the locks stored on the disks remain, and if you try to import a disk group containing those disks, the import fails. If you are sure that the disk group is not in use by another host, you can clear the host locks when you import the disk group.



Importing As Temporary
You can temporarily import a disk group by using options in the VxVM interfaces. A temporary import does not persist across reboots. A temporary import can be useful, for example, if you need to perform administrative operations on the temporarily imported disk group. If there is a name collision, a temporary import can be used to keep the original name.
Note: Temporary imports are also useful in a cluster environment. Because a temporary import changes the autoimport parameter, the disk group is not automatically reimported after a system crash.

Forcing an Import
A disk group import fails if the VxVM configuration daemon cannot find all of the disks in the disk group. If the import fails because a disk has failed, you can force the disk group to be imported by using options in the VxVM interfaces. Forcing an import is not recommended and should be performed with caution.


Importing a Disk Group: Methods
You can use any of the following methods to import a disk group. These methods are detailed in the sections that follow.
- VEA: Select the deported disk group. Select Actions>Import Dynamic Disk Group. Specify the name of the disk group to be imported.
- vxdiskadm: Select option 8, Enable access to (import) a disk group, from the main menu.
- CLI: vxdg import imports a disk group.



Importing a Disk Group: VEA

To import a disk group:
1 In the main window, select the disk group that you want to import.
2 From the Actions menu, select Import Dynamic Disk Group.
3 Complete the Import Dynamic Disk Group dialog box:
- Group name: Verify the name of the disk group to be imported.
- New name: To change the name of the disk group at import, type a new disk group name in this field.
- Clear host ID: This option clears the existing host ID stamp on all disks in the disk group at import. Do not use this option if another host is using any disks in the disk group.
- Force: Use this option with caution. This option forces the disk group import when the host cannot access all disks in the disk group. This option can cause disk group inconsistency if all disks are still usable.
- Start all volumes: This option starts all volumes upon import and is selected by default.
- Import shared: This option imports the disk group as a shared dynamic disk group (applicable only in a cluster environment).
4 Click OK to complete the task.
By default, when you import a disk group by using VEA, all volumes in the disk group are started automatically.
Note: VEA does not support temporary import of a disk group.



Importing a Disk Group: vxdiskadm

To import a disk group by using the vxdiskadm utility, you select option 8, Enable access to (import) a disk group, from the vxdiskadm main menu. A disk group must be deported from its previous system before it can be imported to the new system. During the vxdiskadm import operation, the system checks for host import locks. If any locks are found, you are prompted to clear the locks. By default, the vxdiskadm import option starts all volumes in the disk group.
Note: The vxdiskadm interface does not have as much functionality as the other VxVM interfaces when importing disk groups.
To import a disk group:
1 From the vxdiskadm main menu, select option 8, Enable access to (import) a disk group.
2 When prompted, enter the name of the disk group to import. You can type list to view a list of all disk groups.
Select disk group to import [<group>,list,q,?] (default: list) newdg
3 When the disk group is successfully imported, you can import another disk group or return to the main menu.
The import of newdg was successful.
Select another disk group? [y,n,q,?] (default: n)



Importing a Disk Group: CLI

To import a disk group from the command line, you use the vxdg import command with the name of the disk group:
vxdg import diskgroup

For example, to import the disk group newdg:


# vxdg import newdg

When you import a disk group from the command line, you must manually start all volumes in the disk group by using the command:
vxvol -g diskgroup startall

Import and rename: To import and rename a disk group at the same time, you specify the new disk group name and the old disk group name in the vxdg import command:
vxdg -n new_name import old_name

For example, to import the disk group newdg and rename it as newerdg:
# vxdg -n newerdg import newdg

Import and temporarily rename: To temporarily rename an imported disk group, you use the -t temporary_name option. This option imports the disk group temporarily and does not set the autoimport flag, which means that the import cannot survive a reboot:
vxdg -t -n temporary_name import real_name


For example, to import newdg and temporarily rename it as tempdg:


# vxdg -t -n tempdg import newdg

Import and clear locks: To clear import locks on a disk group, you add the -C option to the vxdg import command. For example, to clear import locks when you import and temporarily rename a disk group:
# vxdg -tC -n tempdg import newdg

Forcing an import: Typically, a disk group cannot be imported if some disks in the disk group cannot be found by the local host. You can use the -f option to force an import if, for example, one of the disks is currently unusable or inaccessible.
# vxdg -f import newdg

Note: Be careful when using the -f option, because it can import the same disk group twice from disjointed sets of disks and make the disk group inconsistent.



Moving Disk Groups Between Systems

One of the main benefits of disk groups is that they can be moved between systems. When you move a disk group from one system to another, all of the VxVM objects within the disk group are moved, and you do not have to specify the configuration again. The disk group configuration is relocated to the new system. To move a disk group from one system to another, you deport the disk group from one host and then import the disk group on another host.

Moving a Disk Group: VEA
To move a disk group from one machine to another:
1 Unmount file systems and stop all volumes in the disk group to be moved.
2 Deport the disk group to be moved to the other system.
3 Attach all of the physical disks in the disk group to the new system.
4 On the new system, import the deported disk group.
5 Restart and recover all volumes in the disk group on the new system.
Note: To move a disk group between two systems, VxVM must be running on both systems.


Moving a Disk Group: vxdiskadm
To move a disk group between systems by using the vxdiskadm utility, you perform the deport and import options in sequence:
1 Deport the disk group from one system by using option 9, Remove access to (deport) a disk group.
2 Move all of the disks to the second system and perform the necessary system-dependent steps to make the second system and Volume Manager recognize the new disks. A reboot may be required.
3 Import the disk group on the new system by using option 8, Enable access to (import) a disk group.

Moving a Disk Group: CLI
To move a disk group between systems:
1 On the first system, deport the disk group to be moved:
# vxdg -h hostname deport diskgroup
2 Move all of the disks to the second system and perform the necessary system-dependent steps to make the second system and Volume Manager recognize the new disks. A reboot may be required.
3 Import the disk group on the new system:
# vxdg import diskgroup
4 After the disk group is imported, start all volumes in the disk group.
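Putting the CLI steps together, the following sketch moves a disk group named acctdg from hostA to hostB; the host names, mount point, and disk group name are illustrative:

On hostA:
# umount /acct
# vxvol -g acctdg stopall
# vxdg -h hostB deport acctdg

On hostB, after the disks are attached and recognized:
# vxdctl enable
# vxdg import acctdg
# vxvol -g acctdg startall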



Renaming a Disk Group

Only one disk group of a particular name can exist on each system. You cannot import or deport a disk group when the target system already has a disk group of the same name. To avoid a name collision when moving disk groups, or to provide a more appropriate name for a disk group, you can rename the disk group. To rename a disk group when moving it from one system to another, you specify the new name during the deport or the import operation. To rename a disk group without moving it, you must still deport and reimport the disk group on the same system.
You can rename a disk group by using VEA or from the command line. The vxdiskadm utility does not have an option to rename a disk group.

Renaming a Disk Group: VEA
The VEA interface has a Rename Dynamic Disk Group menu option. On the surface, this option appears to simply rename the disk group. However, the option works by deporting and reimporting the disk group with a new name.
To rename a disk group:
1 In the object tree of the main window, select the disk group to be renamed.
2 From the Actions menu, select Rename Dynamic Disk Group.
3 Complete the Rename Dynamic Disk Group dialog box:
- Group name: Specify the disk group to be renamed.


- New name: Type the new name for the disk group.
4 Click OK to complete the task.

Renaming a Disk Group: CLI
To rename a disk group from the command line, use the -n new_name option in the vxdg deport or vxdg import commands. You can specify the new name during the deport or during the import operation:
vxdg -n new_name deport old_name
vxdg import new_name

or
vxdg deport old_name
vxdg -n new_name import old_name

Starting Volumes After Renaming a Disk Group
When you rename a disk group from the command line, you must restart all volumes in the disk group by using the vxvol command:

vxvol -g new_name startall

The vxvol utility performs operations on Volume Manager volumes. For more information on vxvol, see the vxvol(1m) manual page.

Renaming a Disk Group: CLI Example
For example, to rename the disk group datadg to mktdg, you can use either of the following sequences of commands:
# vxdg -n mktdg deport datadg
# vxdg import mktdg
# vxvol -g mktdg startall

or
# vxdg deport datadg
# vxdg -n mktdg import datadg
# vxvol -g mktdg startall


Destroying a Disk Group

Destroying a disk group permanently removes the disk group from Volume Manager control. When you destroy a disk group, all of the disks in the disk group are reinitialized as empty disks and are returned to the free disk pool. Volumes and configuration information about the disk group are removed. Because you cannot remove the last disk in a disk group, destroying a disk group is the only method of freeing the last disk in a disk group for reuse.
A disk group cannot be destroyed if any volumes in that disk group are in use or contain mounted file systems. The rootdg disk group cannot be destroyed.
Caution: Destroying a disk group can result in data loss. Destroy a disk group only if you are sure that the volumes and data in the disk group are not needed.
You can destroy a disk group by using VEA or from the command line. The vxdiskadm utility does not have an option for destroying a disk group.



Destroying a Disk Group: VEA
To destroy a disk group:
1 In the object tree of the main window, select the host that contains the disk group to be destroyed.
2 From the Actions menu, select Destroy Dynamic Disk Group.
3 Complete the Destroy Dynamic Disk Group dialog box by specifying the name of the disk group to be destroyed.
4 Click OK to complete the task.


Destroying a Disk Group: CLI
To destroy a disk group from the command line:
vxdg destroy diskgroup

For example, to destroy the disk group newdg, you type:


# vxdg destroy newdg
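Because the operation is destructive, it can be worth reviewing the contents of the disk group first, for example with the vxprint command (the -ht options display a hierarchical, tabular listing of the disk group's objects):

# vxprint -g newdg -ht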



Viewing Disk Group Information

Viewing Disk Group Information: Methods
You can use any of the following methods to view information about disk groups. These methods are detailed in the sections that follow.
- VEA: Select a disk group in the object tree to display properties in the grid. Right-click a disk group, and select Properties to display additional information.
- vxdiskadm: This utility does not have an option for displaying disk group properties. The list option, List disk information, displays a list of disks attached to the system and includes a column that displays the disk group to which each disk belongs.
- CLI:
  vxdisk -s list: Displays disk group names and IDs
  vxdg list: Displays information about imported disk groups
  vxdisk -o alldgs list: Displays information about all disk groups, including deported disk groups
  vxdg -g diskgroup free: Displays information about free space in a disk group



Viewing Disk Group Properties: VEA
The object tree in the VEA main window contains a Disk Groups node that displays all of the disk groups attached to a host. When you click a disk group, the VxVM objects contained in the disk group are displayed in the grid.
To view disk group properties:
1 In the object tree of the main window, select a disk group. Information about the disk group and related objects is displayed in the grid on tab pages: Disks, Volumes, File Systems, Disk View, and Alerts.
2 To view additional information about a specific disk group, right-click the disk group, and select Properties. The Disk Group Properties window is displayed. This window contains basic disk group properties, including:
- Disk group name, status, ID, and type
- Number of disks and volumes
- Disk group version
- Disk group size and free space


Viewing Disk Group Properties: CLI
To view disk group information from the command line, you can use any of the following commands.

The vxdisk and vxdg Commands
Use vxdisk -s list to display information, including disk group names and disk group IDs, for each disk.

# vxdisk -s list
Disk:    c0t0d0s2
type:    sliced
flags:   online ready private autoconfig autoimport imported
diskid:  969583616.1030.cassius
dgname:  rootdg
dgid:    969583613.1025.cassius
hostid:  cassius
info:

Disk:    c1t0d0s2
type:    sliced
flags:   online ready private autoconfig autoimport imported
diskid:  970699117.1111.cassius
dgname:  newdg
dgid:    971216408.1133.cassius
hostid:  cassius



Use vxdg list to display disk group names, states, and IDs for all imported disk groups on the system.

# vxdg list
NAME     STATE    ID
rootdg   enabled  980807572.1025.cassius
datadg   enabled  980900494.1163.cassius
mktdg    enabled  980900711.1166.cassius

Use vxdisk -o alldgs list to display all disk groups, including deported disk groups. In the example, the deported disk group acctdg is displayed in parentheses.

# vxdisk -o alldgs list
DEVICE       TYPE      DISK      GROUP     STATUS
c0t0d0s2     sliced    rootdisk  rootdg    online
c1t0d0s2     sliced    -         (acctdg)  online
c1t1d0s2     sliced    -         (acctdg)  online
c1t2d0s2     sliced    datadg01  datadg    online
c1t3d0s2     sliced    -         -         online
c1t8d0s2     sliced    mktdg01   mktdg     online
c1t9d0s2     sliced    datadg02  datadg    online


Use vxdg free to display free space on each disk. Without options, this command displays free space on all disks in all disk groups that the host can detect.
Note: This command does not show space on spare disks. Reserved disks are displayed with an r in the FLAGS column.

# vxdg free
GROUP    DISK       DEVICE     TAG      OFFSET    LENGTH    FLAGS
rootdg   rootdisk   c0t0d0s2   c0t0d0   16821503  976752    -
datadg   datadg01   c1t2d0s2   c1t2d0   64638     17613855  -
datadg   datadg02   c1t9d0s2   c1t9d0   21546     17656947  -
mktdg    mktdg01    c1t8d0s2   c1t8d0   0         17678493  -

Add -g diskgroup to restrict the output to a specific disk group:

# vxdg -g datadg free
DISK       DEVICE     TAG      OFFSET    LENGTH    FLAGS
datadg01   c1t2d0s2   c1t2d0   64638     17613855  -
datadg02   c1t9d0s2   c1t9d0   21546     17656947  -



Upgrading a Disk Group

Disk Group Versioning
All disk groups have an associated version number. Each VxVM release supports a specific set of disk group versions and can import and perform tasks on disk groups with those versions. Some new features and tasks work only on disk groups with the current disk group version, so you must upgrade existing disk groups in order to perform those tasks.
Prior to the release of VxVM 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. Starting with VxVM release 3.0, the two operations of importing a disk group and upgrading its version are separate. You can import a disk group from a previous version and use it without upgrading it.
You must upgrade older version disk groups before you can use new VxVM features with those disk groups. Once you upgrade a disk group, the disk group becomes incompatible with earlier releases of VxVM that do not support the new version. If you do not upgrade older version disk groups, the disk groups can still be used, provided that you do not try to use the features of the current version. Attempts to use a feature of the current version that is not a feature of the version the disk group was imported from result in an error message similar to this:

vxvm:vxedit: ERROR: Disk group version doesn't support feature


Summary of Features Not Supported for Early Disk Group Versions
The following table summarizes the disk group versions corresponding to each VxVM release:

VxVM Release   Disk Group Version   Supported Disk Group Versions
1.2            10                   10
1.3            15                   15
2.0, 2.1       20                   20
2.2            30                   30
2.3, 2.4       40                   40
2.5            50                   50
3.0            60                   20-40, 60
3.1            70                   20-70
3.1.1          80                   20-80
3.2, 3.5       90                   20-90

As examples of features that are not supported at earlier versions: disk group versions below 20 do not support RAID-5 volumes, recovery checkpointing, dirty region logging, or mirrored volumes logging; versions below 30 do not support the VxSmartSync Recovery Accelerator; versions below 40 do not support hot relocation; and versions below 60 do not support disk group versioning, the task monitor, layered volumes, online relayout, or safe RAID-5 subdisk moves.


Summary of Supported Features for Disk Group Versions
The following list summarizes the new features supported at each disk group version, along with the previous versions whose features are also supported:

- Version 90: Disk group move, split, and join; device discovery layer (DDL); ordered allocation; OS-independent naming support; persistent FastResync; cluster support for Oracle resilvering; layered volume support in clusters. (Also supports features of versions 20, 30, 40, 50, 60, 70, 80.)
- Version 80: VERITAS Volume Replicator (VVR) enhancements. (Also supports 20, 30, 40, 50, 60, 70.)
- Version 70: Nonpersistent FastResync, VVR enhancements, Unrelocate. (Also supports 20, 30, 40, 50, 60.)
- Version 60: Online relayout, safe RAID-5 subdisk moves. (Also supports 20, 30, 40.)
- Version 50: Storage Replicator for Volume Manager (an earlier version of what is now VVR). (Also supports 20, 30, 40.)
- Version 40: Hot relocation. (Also supports 20, 30.)
- Version 30: VxSmartSync Recovery Accelerator. (Also supports 20.)
- Version 20: Dirty region logging, disk group configuration copy limiting, mirrored volumes logging, new-style stripes, RAID-5 volumes, recovery checkpointing.

You can upgrade the disk group version by using VEA or from the command line. The vxdiskadm utility does not have an option to upgrade a disk group.



Upgrading a Disk Group: VEA

Determining the Disk Group Version Status
To determine if a disk group needs to be upgraded, you can view the status of the disk group version in the Disk Group Properties window. The Current version field states whether or not the disk group has been upgraded to the latest version. A status of Yes means that the disk group has the current version. A status of No means that the disk group version is not current.

Upgrading a Disk Group
To upgrade a disk group:
1. In the main window, select the disk group to be upgraded.
2. In the Actions menu, select Upgrade Dynamic Disk Group Version.
3. Confirm that you want to upgrade the disk group to the current version. To upgrade the disk group, click Yes.
The disk group is updated to the current version. When you view the disk group properties, the Current version field states Yes.

Note: You cannot upgrade to a specific disk group version by using VEA. You can only upgrade to the current version. To upgrade to a specific version, use the command line.
Upgrading a Disk Group: CLI

Displaying the Disk Group Version
To display the disk group version for a specific disk group, you use the command:
vxdg list diskgroup

For example, to display the version of disk group newdg, you type:
# vxdg list newdg
Group:    newdg
dgid:     971216408.1133.cassius
. . .
version:  90
. . .

You can also determine the disk group version by using the vxprint command with the -l option.
# vxprint -l
Disk group: newdg

Group:    newdg
info:     dgid=971216408.1133.cassius
version:  90
. . .
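In scripts, it can be useful to extract only the version number itself. A minimal sketch, assuming the vxdg list output format shown above (the awk filter is ordinary Solaris shell usage, not a VxVM feature):

# vxdg list newdg | awk '/^version:/ {print $2}'
90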

Upgrading the Disk Group Version
To upgrade a disk group from the command line, you use the vxdg upgrade command. By default, VxVM upgrades a disk group to the highest version supported by the VxVM release:
vxdg [-T version] upgrade diskgroup

To specify a different version, you use the -T version option. For example, to upgrade the disk group datadg from version 40 to the latest version, 90, you type:
# vxdg upgrade datadg

To upgrade the disk group datadg from version 20 to version 40, you type:
# vxdg -T 40 upgrade datadg

You can also use the -T version option when creating a disk group. For example, to create a disk group that can be imported by a system running VxVM 2.5, the disk group must be version 50 or less. To create a version 50 disk group, you add -T 50 to the vxdg init command:
# vxdg -T 50 init newdg newdg01=c0t3d0s2
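If you administer many disk groups, you can combine vxdg list with a shell loop to upgrade every imported disk group to the current version. A minimal Bourne shell sketch, assuming that the vxdg list summary output begins with a single header line, as it does in this release:

# for dg in `vxdg list | awk 'NR > 1 {print $1}'`
> do
>     vxdg upgrade $dg
> done

Remember that each upgraded disk group can no longer be imported by hosts running earlier VxVM releases that do not support the new version.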

Summary
You should now be able to:
- Identify the purposes of disk groups.
- Create a disk group.
- Create spare disks for a disk group.
- Deport a disk group.
- Import a disk group.
- Move a disk group from one system to another.
- Rename a disk group.
- Destroy a disk group.
- View information about a disk group.
- Upgrade the disk group version.

In this lesson, you learned how to perform tasks associated with the management of disk groups, including procedures for creating, deporting, importing, destroying, and upgrading a disk group.

Next Steps
In the next lesson, you learn how to create a volume.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
- VERITAS Volume Manager User's Guide - VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
- VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.


Lab 5: Managing Disk Groups


Goal
In this lab, you create new disk groups, remove disks from disk groups, deport and import disk groups, and destroy disk groups.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.

Creating a Volume

Introduction
Overview
This lesson describes how to create a volume in VxVM. It covers how to create a volume using different volume layouts, how to display volume layout information, and how to remove a volume.

Importance
By creating volumes, you begin to take advantage of the VxVM concept of virtual storage. Volumes enable you to span data across multiple disks using a variety of storage layouts and to achieve data redundancy and resilience.

Outline of Topics
- Selecting a Volume Layout
- Creating a Volume
- Displaying Volume Layout Information
- Removing a Volume

Objectives
After completing this lesson, you will be able to:
- Identify the features, advantages, and disadvantages of volume layouts (concatenated, striped, mirrored, and RAID-5) supported by VxVM.
- Create concatenated, striped, mirrored, and RAID-5 volumes by using VEA and from the command line.
- Display volume layout information by using windows available in VEA and by using the vxprint command from the command line.
- Remove a volume from VxVM by using VEA and from the command line.

Selecting a Volume Layout


What Is Volume Layout?
VxVM uses logical volumes to organize and manage disk space. A volume is made up of portions of one or more physical disks, so a volume does not have the physical limitations of a physical disk. A volume can provide greater capacity and better availability and performance than a single physical disk. A volume can be extended across multiple disks to increase capacity, mirrored on another disk to provide data redundancy, or striped across multiple disks to improve I/O performance.

Volume layout is the way plexes are organized to remap the volume address space through which I/O is redirected at run-time. Each volume layout has different advantages and disadvantages. The layouts that you use depend on the levels of performance and reliability required by your system. Volume layouts are based on the concepts of:
- Disk spanning
- Data redundancy
- Resilience
- RAID

Spanning
Disk spanning is the combining of disk space from multiple physical disks to form one logical drive. Disk spanning has two forms:
- Concatenation: Concatenation is the mapping of data in a linear manner across two or more disks.
- Striping: Striping is the mapping of data in equal-sized chunks alternating across multiple disks. Striping is also called interleaving.

Redundancy
To protect data against disk failure, the volume layout must provide some form of data redundancy. Redundancy is achieved in two ways:
- Mirroring: Mirroring is maintaining two or more copies of volume data.
- Parity: Parity is a calculated value used to reconstruct data after a failure by doing an exclusive OR (XOR) procedure on the data. Parity information can be stored on a disk. If part of a volume fails, the data on that portion of the failed volume can be re-created from the remaining data and parity information.

Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or more other volumes. Resilient volumes enable the mirroring of data at a more granular level. For example, a resilient volume can be concatenated or striped at the top level and then mirrored at the bottom level.

RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a storage management approach in which an array of disks is created, and part of the combined storage capacity of the disks is used to store duplicate information about the data in the array. By maintaining a redundant array of disks, you can regenerate data in the case of disk failure.

RAID configuration models are classified in terms of RAID levels, which are defined by the number of disks in the array, the way data is spanned across the disks, and the method used for redundancy. Each RAID level has specific features and performance benefits that involve a trade-off between performance and reliability. VxVM supports the following RAID levels:
- RAID-0
- RAID-1
- RAID-5
- RAID-0+1
- RAID-1+0

VxVM-Supported RAID Levels
VxVM-supported RAID levels are described below:

RAID-0: RAID-0 refers to simple concatenation or striping. Disk space is combined sequentially from two or more disks or striped across two or more disks. RAID-0 does not provide data redundancy.

RAID-1: RAID-1 refers to mirroring. Data from one disk is duplicated on another disk to provide redundancy and enable fast recovery.

RAID-5: RAID-5 is a striped layout that also includes the calculation of parity information and the striping of that parity information across the disks. If a disk fails, the parity is used to reconstruct the missing data.

RAID-0+1: Adding a mirror to a concatenated or striped layout results in RAID-0+1, a combination of concatenation or striping (RAID-0) with mirroring (RAID-1). Striping plus mirroring is called the mirror-stripe layout. Concatenation plus mirroring is called the mirror-concat layout. In these layouts, the mirroring occurs above the concatenation or striping.

RAID-1+0: RAID-1+0 combines mirroring (RAID-1) with striping or concatenation (RAID-0) in a different way. The mirroring occurs below the striping or concatenation in order to mirror each column of the stripe or each chunk of the concatenation. This type of layout is called a layered volume.

VxVM Volume Layout Types
When you create a volume using the VxVM interfaces, you can specify the layout type. The volume layouts supported by VxVM include:
- Concatenated
- Striped
- Mirrored
- RAID-5
- Layered volumes

Note: The layered volume layouts supported by VxVM are striped-mirror and concatenated-mirror. In the VEA interface, these layouts are called Striped Pro and Concatenated Pro, respectively. Layered volume layouts are covered in the Configuring Volumes lesson.

Concatenated Layout
A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex. Subdisks do not have to be physically contiguous and can belong to more than one VM disk. Storage is allocated completely from one subdisk before using the next subdisk in the span. Data is accessed in the remaining subdisks sequentially until the end of the last subdisk.

For example, if you have 14 GB of data, a concatenated volume can logically map the volume address space across subdisks on different disks. Addresses 0 GB to 8 GB of volume address space map to the first 8-gigabyte subdisk, and addresses 8 GB to 14 GB map to the second 6-gigabyte subdisk. An address offset of 12 GB, therefore, maps to an address offset of 4 GB in the second subdisk.

Concatenation: Advantages
- Removes size restrictions: Concatenation removes the restriction on size of storage devices imposed by physical disk size.
- Better utilization of free space: Concatenation enables better utilization of free space on disks by providing for the ordering of available discrete disk space on multiple disks into a single addressable volume.
- Simplified administration: Concatenation enables large file systems to be created and reduces overall system administration complexity.

Concatenation: Disadvantages
- No protection against disk failure: Concatenation does not protect against disk failure. A single disk failure may result in the failure of the entire volume.
Striped Layout
A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex.

The subdisks are grouped into columns. Each column contains one or more subdisks and can be derived from one or more physical disks. To obtain the maximum performance benefits of striping, you should not use a single disk to provide space for more than one column. All columns must be the same size. The minimum size of a column should equal the size of the volume divided by the number of columns.

Data is allocated in equal-sized units, called stripe units, that are interleaved between the columns. Each stripe unit is a set of contiguous blocks on a disk. The stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The default stripe unit size is 128 sectors (64K), which provides adequate performance for most general-purpose volumes. Performance of an individual volume may be improved by matching the stripe unit size to the I/O characteristics of the application using the volume.

Striping: Advantages
- Parallel data transfer: Striping is useful if you need large amounts of data written to or read from the physical disks quickly by using parallel data transfer to multiple disks.
- Load balancing: Striping is also helpful in balancing the I/O load from multiuser applications across multiple disks.
- Improved performance: Improved performance is obtained by increasing the effective bandwidth of the I/O path to the data. This may be achieved by a single volume I/O operation spanning a number of disks or by multiple concurrent volume I/O operations to more than one disk at the same time.

Striping: Disadvantages
- No redundancy: Striping alone offers no redundancy or recovery features.
- Disk failure: Striping a volume increases the chance that a disk failure results in failure of that volume. For example, if you have three volumes striped across two disks, and one of the disks is used by two of the volumes, then if that one disk goes down, both of those volumes go down.

Mirrored Layout
By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex, each duplicating the information contained in the volume. Each plex in a mirrored layout contains an identical copy of the volume data. In the event of a physical disk failure, when the plex on the failed disk becomes unavailable, the system can continue to operate using the unaffected mirrors.

Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy.

Volume Manager uses true mirrors, which means that all copies of the data are the same at all times. When a write occurs to a volume, all plexes must receive the write before the write is considered complete.

Each plex in a mirrored configuration can have a different layout. For example, one plex can be concatenated, and the other plex can be striped. You should distribute mirrors across controllers to eliminate the controller as a single point of failure.

Mirroring: Advantages
- Improved reliability and availability: With concatenation or striping, failure of any one disk can make the entire plex unusable. With mirroring, data is protected against the failure of any one disk. Mirroring improves the reliability and availability of a striped or concatenated volume.
- Improved read performance: Reads benefit from having multiple places from which to read the data.

Mirroring: Disadvantages
- Requires more disk space: Mirroring requires twice as much disk space, which can be costly for large configurations. Each mirrored plex requires enough space for a complete copy of the volume's data.
- Slightly slower write performance: Writing to volumes is slightly slower, because multiple copies have to be written in parallel. The overall time the write operation takes is determined by the time needed to write to the slowest disk involved in the operation. The slower write performance of a mirrored volume is generally not significant enough to decide against its use; the resilience that mirrored volumes provide outweighs the performance reduction.

RAID-5
A RAID-5 volume layout has the same attributes as a striped plex, but includes one additional column of data that is used for parity. Parity provides redundancy. Parity is a calculated value used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is calculated by doing an exclusive OR (XOR) procedure on the data, and the resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and parity information. For example, if three stripe units hold the bit patterns 0110, 1010, and 1100, the parity unit holds 0110 XOR 1010 XOR 1100 = 0000; if the disk holding 1010 fails, XORing the surviving units with the parity (0110 XOR 1100 XOR 0000) regenerates 1010.

RAID-5 volumes keep a copy of the data and calculated parity in a plex that is striped across multiple disks. Parity is spread equally across the disks. Given a five-column RAID-5 volume where each column is 1 GB in size, the RAID-5 volume size is 4 GB: one column of space is devoted to parity, and the remaining four 1-GB columns are used for data.

The default stripe unit size for a RAID-5 volume is 32 sectors (16K). Each column must be the same length but may be made from multiple subdisks of variable length. Subdisks used in different columns must not be located on the same physical disk. RAID-5 requires a minimum of three disks for data and parity; when implemented as recommended, an additional disk is required for the log. RAID-5 cannot be mirrored.

RAID-5: Advantages
- Redundancy through parity: With a RAID-5 volume layout, data can be re-created from the remaining data and parity in case of disk failure.
- Requires less space than mirroring: RAID-5 stores parity information, rather than a complete copy of the data.
- Improved read performance: RAID-5 provides similar improvements in read performance as a normal striped layout.
- Fast recovery through logging: RAID-5 logging minimizes recovery time in case of disk failure.

RAID-5: Disadvantages
- Slow write performance: The performance overhead for writes can be substantial, because a write can involve much more than simply writing to a data block. A write can involve reading the old data and parity, computing the new parity, and writing the new data and parity.

Creating a Volume
When you create a volume using VEA or CLI commands, you indicate the desired volume characteristics, and VxVM automatically creates the underlying plexes and subdisks. The VxVM interfaces require minimal input if you use default settings. For experienced users, the interfaces also enable you to enter more detailed specifications regarding all aspects of volume creation.

Note: Most volume tasks cannot be performed with the vxdiskadm menu interface, which is a management tool used for disk objects.

When you create a volume, two device node files are created that can be used to access the volume:
- /dev/vx/dsk/diskgroup/volume_name
- /dev/vx/rdsk/diskgroup/volume_name

Before You Create a Volume
Before you create a volume, ensure that you have enough disks to support the layout type:
- A striped volume requires at least two disks.
- A mirrored volume requires at least one disk for each plex. A mirror cannot be on a disk that another plex of the volume is using.
- A RAID-5 volume requires at least three disks.
- Enabling logging requires at least one additional disk to contain the log.
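As a quick illustration of how the device nodes listed above are used, the character (raw) device is typically passed to file system creation commands, and the block device is used for mounting. A minimal sketch, assuming a volume named datavol in the disk group datadg, that VxFS is installed, and that /data exists as a mount point (file system tasks are covered in detail in later lessons):

# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data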

Creating a Volume: Methods
You can use any of the following methods to create a volume. These methods are detailed in the sections that follow.

VEA: In the Disk Groups node, select a disk group. Select Actions>New Volume, and specify volume characteristics.

CLI: vxassist -g diskgroup make volume_name length attributes

Creating a Volume: VEA
To create a volume:
1. Expand the Disk Groups node in the object tree, and select a disk group within which to create the volume.
2. In the Actions menu, select New Volume.
3. Complete the New Volume wizard by:
   - Specifying volume attributes
   - Selecting disks to use for the volume
   - Creating a file system on the volume
4. Confirm your selections and click Finish to complete the wizard.

Specifying Attributes for a New Volume
When you create a volume using the New Volume wizard, you can specify the following attributes:
- Group name: Assign the volume to an existing disk group. The disk group you selected is displayed by default.
- Volume name: Assign a meaningful name to the volume that describes the data stored in the volume. By default, VxVM assigns a volume name of vol##, where ## represents a two-digit number.
- Comment: You can provide an optional comment to describe the volume.

- Size: Specify a size for the volume. The default unit is MB. If you select the Max Size button, VxVM determines the largest size possible for the volume, based on the layout selected and the disks to which the volume is assigned. Select a size for the volume based on the volume layout and the space available in the disk group. The size of the volume must be less than or equal to the available free space on the disks.
  For a RAID-5 volume, the size specified in the Size field is the usable space in the volume. VxVM allocates additional space for the volume's parity information, so the disks across which the RAID-5 volume is striped should contain additional free space for the parity. The free space available for constructing a volume of a specific layout is generally less than the total free space in the disk group, unless the layout is concatenated with no mirroring or logging.
- Layout: Select a layout type from the group of options. The default layout is concatenated.
  - Concatenated: The volume is created using one or more regions of specified disks.
  - Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K). You can specify different values in the Number of Columns field and the Stripe Unit Size field.
  - RAID-5: In the Number of Columns field, specify the number of columns (disks) across which the volume is striped. The default number of columns is three, and the default stripe unit size is 32 sectors (16K). Note: RAID-5 requires one more column than the number of data columns; the extra column is used for parity. A RAID-5 volume also requires at least one more disk than the number of columns, because one disk is needed for logging, which is enabled by default.
  - Concatenated Pro and Striped Pro: These options denote layered volume layouts. Layered volume layouts are covered in another lesson.
- Mirror info:
  - Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored.
  - Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31, because one plex is reserved by VxVM to perform restructuring or relocation operations.
- Enable logging: To enable logging, mark the Enable logging check box. If you enable logging, a log is created that tracks regions of the volume that are currently being changed by writes. In case of a system failure, the log is used to recover only those regions identified in the log. VxVM creates a dirty region log or a RAID-5 log, depending on the volume layout. If the layout is RAID-5, logging is enabled by default, and VxVM adds an appropriate number of logs to the volume.
- Initialize zero: To clear the volume before enabling it for general use, mark the Initialize zero check box. In what situations should you consider using the Initialize zero option?
  - Under RAID-5 creation, creation time of the RAID-5 volume can be up to 25 percent faster when you initialize to zero. With this method of initialization, 0s are written unconditionally to the volume, instead of the traditional initialization method of XORing each cell.
  - For security purposes, you can use the Initialize zero option to overwrite all existing data in the volume area.
  - You should also consider this option when creating a new pair of volumes on remote systems while using VERITAS Volume Replicator (VVR). By zeroing, you are assured that corresponding volumes in the primary and secondary replicated volume groups (RVGs) are initialized identically, avoiding the need for full synchronization of the volumes.
- No layered volumes: To prevent the creation of a layered volume, mark the No layered volumes check box. This option ensures that the volume has a nonlayered layout. If a layered layout is selected, this option is ignored.

Selecting Disks for a New Volume
By default, VxVM locates available space on all disks in the disk group and assigns the space to the volume automatically, based on the layout you choose. If you prefer, you can assign specific disks to be used by the volume. To place the volume on, or stripe the volume across, specific disks, select the Manually select disks for use by this volume option. A list of available devices is displayed in the left pane.
- To specify disks to use for the volume, move the disks into the Included field by using the arrow button (>).
- To exclude disks from use by the volume, move the disks into the Excluded field by using the arrow button (>).
- Mark the Mirror Across check box to mirror the volume across a controller, tray, target, or enclosure.
- Mark the Stripe Across check box to stripe the volume across a controller, tray, target, or enclosure.
- Mark the Ordered check box to implement ordered allocation. Ordered allocation is a method of allocating disk space to volumes based on a specific set of VxVM rules. Ordered allocation is covered in another lesson.

Creating a File System on a New Volume
When you create a volume, you can place a file system on the volume and specify options for mounting the file system. You can use a traditional UNIX file system (UFS) or a VERITAS File System (VxFS), if VxFS is installed. You can place a file system on a volume when you create the volume or at any time after creation. The default option is No file system. To place a file system on the volume, select the Create a file system option and specify:
- File system type: Specify the file system type as either vxfs (VERITAS File System) or ufs (UNIX File System). To add a VERITAS file system, the VxFS product must be installed with appropriate licenses.
- Create Options:
  - Compress: If your platform supports file compression, this option compresses the files on your file system (not available on Solaris).
  - Allocation unit: Select an allocation unit size (not available on Solaris).
  - Block size: Select the file system block size in bytes.
  - New File System Details: Click this button to specify additional file system-specific mkfs options. For VxFS, the only explicitly available additional options are large file support and log size. You can specify other options in the Extra Options field.
- Mount Options:
  - Mount Point: Specify the mount point directory on which to mount the file system. The new file system is mounted immediately after it is created. Leave this field empty if you do not want to mount the file system.
  - Create mount point: Mark this check box to create the mount point directory if it does not exist. The mount point must be specified.
  - Read only: Mark this check box to mount the file system as read only.
  - Honor setuid: Mark this check box to mount the file system with the suid mount option. This option is marked by default.
  - Add to file system table: Mark this check box to include the file system in the /etc/vfstab file.
  - Mount at boot: Mark this check box to mount the file system automatically whenever the system boots.
  - fsck pass: Specify the fsck pass number.
  - Mount File System Details: Click this button to specify additional mount options. For VxFS, the explicitly available additional options include Quick I/O, QuickLog, and caching policy. You can specify other options, such as quota, in the Extra options field.

Creating a Volume: CLI
To create a volume from the command line, you use the vxassist command. You specify the basic attributes of the desired volume layout, and VxVM automatically creates the underlying plexes and subdisks. This command uses default values for volume attributes, unless you provide specific values:
vxassist [-g diskgroup] make volume_name length [attributes]

In the syntax:
- Use the -g option to specify the disk group in which to create the volume. If you do not specify a disk group, VxVM creates the volume in rootdg.
- make is the keyword for volume creation.
- volume_name is a name you give to the volume. Specify a meaningful name.
- length specifies the number of sectors in the volume. You can specify the length in kilobytes, megabytes, or gigabytes by adding a k, m, or g suffix to the length. If no unit is specified, sectors are assumed.
- You can specify many additional attributes, such as volume layout or specific disks. For detailed descriptions of all attributes that you can use with vxassist, see the vxassist(1m) manual page.

When you create a volume, block and character (raw) device files are set up that you can use to access the volume:
- /dev/vx/dsk/diskgroup/volume is the block device file for volume.
- /dev/vx/rdsk/diskgroup/volume is the character device file for volume.

Creating a Concatenated Volume
By default, vxassist creates a concatenated volume that uses one or more sections of disk space. The vxassist command attempts to locate sufficient contiguous space on one disk for the volume; however, if necessary, the volume is spanned across multiple disks. VxVM selects the disks on which to create the volume.

To create a concatenated volume called datavol, with a length of 10 megabytes, in the disk group datadg, using any available disks, you type:
# vxassist -g datadg make datavol 10m

Note: To guarantee that a concatenated volume is created, you should include the attribute layout=nostripe in the vxassist make command. Without the layout attribute, vxassist uses the default layout, which may have been changed by the creation of an /etc/default/vxassist file. For example:
# vxassist -g datadg make datavol 10m layout=nostripe

Creating a Concatenated Volume on a Specific Disk
If you want the volume to reside on specific disks, you can designate the disks by adding the disk media names to the end of the command. More than one disk can be specified:
vxassist [-g diskgroup] make volume_name length [disks...]

To create the volume datavol with a length of 10 gigabytes, in the disk group datadg, on the disks datadg02 and datadg03, you type:
# vxassist -g datadg make datavol 10g datadg02 datadg03

Note: The second disk in the list is used only when there is no space left on the first disk.
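To confirm how vxassist actually allocated space for the new volume, you can display its record hierarchy. A minimal sketch, reusing the datavol example above (the vxprint command and its options are covered later in this lesson):

# vxprint -g datadg -ht datavol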

Creating a Striped Volume
To create a striped volume, you add the layout type and other attributes to the vxassist make command:
vxassist [-g diskgroup] make volume_name length layout=stripe ncol=n stripeunit=size [disks...]

In the syntax:
- layout=stripe designates the striped layout.
- ncol=n designates the number of stripes, or columns, across which the volume is created. This attribute has many aliases; for example, you can also use nstripe=n or stripes=n. When creating a striped volume with vxassist, if you do not provide a number of columns, VxVM selects a number of columns based on the number of free disks in the disk group. The minimum number of stripes in a volume is 2, and the maximum is 8. You can edit these minimum and maximum values in /etc/default/vxassist.
- stripeunit=size specifies the size of the stripe unit to be used. The default is 64K.
- To stripe the volume across specific disks, you can specify the disk media names at the end of the command. The order in which disks are listed on the command line does not imply any ordering of disks within the volume layout. By default, VxVM selects any available disks with sufficient space.

To exclude a disk or list of disks, add an exclamation point (!) before the disk media names. For example, !datadg01 specifies that the disk datadg01 should not be used to create the volume.

Examples: Creating a Striped Volume
To create a 20-megabyte striped volume called payvol in acctdg that has three columns, uses the default stripe unit size, and uses any available disks except for acctdg04, you type:
# vxassist -g acctdg make payvol 20m layout=stripe ncol=3 !acctdg04

To create a 20-megabyte striped volume called expvol in acctdg that has three columns, has a stripe unit size of 64K, and is striped across the disks acctdg01, acctdg02, and acctdg03, you type:
# vxassist -g acctdg make expvol 20m layout=stripe ncol=3 stripeunit=64k acctdg01 acctdg02 acctdg03
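The disk exclusion syntax can be combined with any of the other attributes. A hypothetical sketch, assuming the same acctdg disk group and a new volume named webvol that must stay off acctdg04 (the volume name and sizes here are illustrative only):

# vxassist -g acctdg make webvol 20m layout=stripe ncol=2 stripeunit=128k !acctdg04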

Creating a RAID-5 Volume
To create a RAID-5 volume from the command line, you use the same syntax as for creating a striped volume, except that you use the attribute layout=raid5:
vxassist [-g diskgroup] make volume_name length layout=raid5 ncol=n stripeunit=size [disks...]

Notes:
- For a RAID-5 volume, the default stripe unit size is 32 sectors (16K).
- When a RAID-5 volume is created, a RAID-5 log is created by default. This means that you must have at least one additional disk available for the log. If you do not want the default log, add the nolog option to the layout attribute: layout=raid5,nolog.

Examples: Creating a RAID-5 Volume
To create a 20-megabyte RAID-5 volume called expvol in the disk group acctdg that has three columns, has a stripe unit size of 32 sectors, and is striped across any available disks, you type:
# vxassist -g acctdg make expvol 20m layout=raid5

To create the same volume, but specify a stripe unit size of 32K and assign the volume to four specific disks, you type:
# vxassist -g acctdg make expvol 20m layout=raid5 stripeunit=32K acctdg01 acctdg02 acctdg03 acctdg04
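As described in the notes above, you can omit the default RAID-5 log by appending nolog to the layout attribute. A sketch reusing the expvol example (omitting the log is generally not recommended, because logging minimizes recovery time after a failure):

# vxassist -g acctdg make expvol 20m layout=raid5,nolog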

Creating a Mirrored Volume
To mirror a concatenated volume, you add the layout=mirror attribute in the vxassist command:
vxassist -g diskgroup [-b] make volume_name length layout=mirror [nmirror=number_of_mirrors]

To specify more than two mirrors, you add the nmirror attribute. When creating a mirrored volume, the volume initialization process requires that the mirrors be synchronized. The vxassist command normally waits for the mirrors to be synchronized before returning to the system prompt. To run the process in the background, you add the -b option.

To create a 5-megabyte, concatenated, and mirrored volume called datavol in the disk group datadg, you type:
# vxassist -g datadg make datavol 5m layout=mirror

To create a striped volume that is mirrored, you type:


# vxassist -g datadg make datavol 5m layout=stripe,mirror

To specify more than two mirrors, you add the nmirror attribute:
# vxassist -g datadg make datavol 5m layout=stripe,mirror nmirror=3

To run the process in the background, you add the -b option:


# vxassist -g datadg -b make datavol 5m layout=stripe,mirror nmirror=3
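Because -b returns the prompt while mirror synchronization continues in the background, you may want to check on the progress of that synchronization. A minimal sketch using the VxVM task monitor (assuming the task monitor is available, which requires disk group version 60 or later):

# vxtask list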

Creating a Mirrored and Logged Volume
When you create a mirrored volume, you can add a dirty region log by adding the logtype=drl attribute:
vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=[drl|drlseq] [nlog=n]

In the syntax:
- Specify logtype=drl to enable dirty region logging. A log plex that consists of a single subdisk is created. If you plan to mirror the log, you can add more than one log plex by specifying a number of logs using the nlog=n attribute, where n is the number of logs.
- Specify logtype=drlseq for a volume that is written to sequentially, such as a database log volume. This attribute limits the number of dirty bits that can be set in the DRL to the value of the voldrl_max_seq_dirty parameter (default value is 3), which enables faster recovery if a crash occurs.
  Caution: If applied to volumes that are written to randomly, logtype=drlseq can create a performance bottleneck by limiting the number of parallel writes that can be performed.

For example, to create a concatenated volume that is mirrored and logged:
# vxassist -g datadg make datavol 5m layout=mirror logtype=drl
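If you plan to mirror the log itself, you can request more than one log plex with the nlog attribute described above. A sketch reusing the datavol example:

# vxassist -g datadg make datavol 5m layout=mirror logtype=drl nlog=2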

Estimating Volume Size
The vxassist command can determine the largest possible size for a volume that can currently be created with a given set of attributes. vxassist can also determine how much an existing volume can be extended under the current conditions.

The vxassist maxsize Command
To determine the largest possible size for the volume to be created, use the command:
vxassist -g diskgroup maxsize attributes...

This command does not create the volume but returns an estimate of the maximum volume size. The output value is displayed in sectors, by default. For example, to determine the maximum size for a new RAID-5 volume on available disks, you type:
# vxassist -g datadg maxsize layout=raid5
Maximum volume size: 376832 (184Mb)

If the volume with the specified attributes cannot be created, an error message is returned:
vxvm:vxassist: ERROR: No volume can be created within the given constraints

The vxassist maxgrow Command
To determine how much an existing volume can be expanded, use the command:
vxassist -g diskgroup maxgrow volume_name

This command does not resize the volume but returns an estimate of how much an existing volume can be expanded. The output indicates the amount by which the volume can be increased and the total size to which the volume can grow. The output is displayed in sectors, by default. For example, to estimate how much the volume datavol can be expanded, you type:
# vxassist -g datadg maxgrow datavol
Volume datavol can be extended by 366592 to 1677312 (819Mb)
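Because maxsize and maxgrow only report estimates, a common pattern in scripts is to capture the reported sector count and feed it back into a vxassist command. A minimal Bourne shell sketch, assuming the maxsize output format shown above (the awk parsing is ordinary shell usage, not a VxVM feature):

# size=`vxassist -g datadg maxsize layout=raid5 | awk '{print $4}'`
# vxassist -g datadg make datavol $size layout=raid5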

Displaying Volume Layout Information


Displaying Volume Information: Methods
You can use any of the following methods to display volume information. These methods are detailed in the sections that follow.

VEA: Display volume information through any of the following views:
- Object views in the main window
- Disk View window
- Volume View window
- Volume to Disk Mapping window
- Volume Properties window
- Volume Layout window

CLI: Use the vxprint command.

Displaying Volume Information: VEA
To display information about volumes in VEA, you can select from several different views.

Object Views in the Main Window
You can view volumes and volume details by selecting an object in the object tree and displaying volume properties in the grid:
- To view the volumes in a disk group, select a disk group in the object tree, and click the Volumes tab in the grid.
- To explore the detailed components of a volume, select a volume in the object tree, and click each of the tabs in the grid.

Disk View Window
The Disk View window displays a close-up graphical view of the layout of subdisks in a volume. To display the Disk View window, select a volume or disk group and select Actions>Disk View. Display options in the Disk View window include:
- Expand: Click the Expand button to display detailed information about all disks in the Disk View window, including subdisks and free space.
- Collapse: Click the Collapse button to hide the details for all disks in the Disk View window.
- Vol Details: Click the Vol Details button to include volume names and layout types for each subdisk.
- Projection: Click the Projection button to highlight objects associated with a selected subdisk or volume. Projection shows the relationships between objects by highlighting objects that are related to or part of a specific object.

Caution: You can move subdisks in the Disk View window by dragging subdisk icons to different disks or to gaps within the same disk. Moving subdisks reorganizes volume disk space and must be performed with care.

Volume View Window


Highlight a volume, and select Actions>Volume View. Highlight a volume, and select Actions>Volume View.

Collapse Collapse Expand New New Volume Volume

FOS35_Sol_R1.0_20020930

6-29

FOS35_Sol_R1.0_20020930

6-29

Volume View Window
The Volume View window displays characteristics of the volumes on the disks. To display the Volume View window, select a volume or disk group and select Actions>Volume View. Display options in the Volume View window include:
- Expand: Click the Expand button to display detailed information about volumes.
- Collapse: Click the Collapse button to hide the details for all volumes in the Volume View window.
- New Volume: Click the New Volume button to invoke the New Volume wizard.

Volume to Disk Mapping Window
The Volume to Disk Mapping window displays a tabular view of volumes and their relationships to underlying disks. To display the Volume to Disk Mapping window, highlight a disk group, and select Actions>Disk/Volume Map.
- To view subdisk layouts, click the triangle button to the left of the disk name, or select View>Expand All.
- To help identify the row and column headings in a large grid, click a dot in the grid to highlight the intersecting row and column.

Volume Layout Window
The Volume Layout window displays a graphical view of the selected volume's layout, components, and properties. You can select objects or perform tasks on objects in the Volume Layout window. This window is dynamic, so the objects displayed in this window are automatically updated when the volume's properties change.

To display the Volume Layout window, highlight a volume, and select Actions>Layout View. The View menu changes the way objects are displayed in this window: select View>Horizontal to display a horizontal layout or View>Vertical to display a vertical layout.


Volume Properties Window


The Volume Properties window displays a summary of volume properties. To display the Volume Properties window, right-click a volume and select Properties.


Displaying Volume Info: CLI


The vxprint Command
You can use the vxprint command to display information about how a volume is configured. This command displays records from the VxVM configuration database.
vxprint -g diskgroup [options]

The vxprint command can display information about disk groups, disk media, volumes, plexes, and subdisks. You can specify a variety of options with the command to expand or restrict the information displayed. Only some of the options are presented in this training. For more information about additional options, see the vxprint(1m) manual page.


Common Options

Option                Description
-v | -p | -s | -d     Select only volumes (v), plexes (p), subdisks (s), or disks (d). Options can be used individually or in combination.
-h                    List hierarchies below selected records.
-r                    Display related records of a volume containing subvolumes. Grouping is done under the highest-level volume.
-t                    Print single-line output records that depend upon the configuration record type. For disk groups, the output consists of the record type, the disk group name, and the disk group ID.
-l                    Display all information from each selected record. Most records that have a default value are not displayed. This information is in a free format that is not intended for use by scripts.
-a                    Display all information about each selected record, one record per line, with a one-space character between each field; the list of associated records is displayed.
-A                    Select from all active disk groups.
-e pattern            Show records that match an editor pattern.

Additional Options

Option                Description
-F[type:]format_spec  Enable the user to define which fields to display.
-D -                  Read a configuration from the standard input. The standard input is expected to be in standard vxmake input format.
-m                    Display all information about each selected record in a format that is useful as input to the vxmake utility.
-f                    Display information about each record as one-line output records.
-n                    Display only the names of selected records.
-G                    Display only disk group records.
-Q                    Suppress the disk group header that separates each disk group. A single blank line separates each disk group.
-q                    Suppress headers that would otherwise be printed for the default and the -t and -f output formats.
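You can combine these options on a single command line. For example, assuming a disk group named datadg (the disk group used throughout this lesson), the first command below prints single-line records for only the volumes in that disk group, and the second lists only the names of records in all active disk groups:
# vxprint -g datadg -vt
# vxprint -A -n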



Displaying Information for All Volumes
To display the volume, plex, and subdisk record information for all volumes in the system, you use the command:
vxprint -ht

To restrict the output to a particular disk group:


vxprint -g diskgroup -ht

In the output, the top few lines indicate the headers that match each type of output line that follows. Each volume is listed along with its associated plexes and subdisks and other VxVM objects:
- dg is a disk group.
- dm is a disk.
- rv is a replicated volume group (used in VERITAS Volume Replicator).
- rl is an rlink (used in VERITAS Volume Replicator).
- v is a volume.
- pl is a plex.
- sd is a subdisk.
- sv is a subvolume.
- dc is a data change object.
- sp is a snap object.
For more information, see the vxprint(1m) manual page.
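You can also restrict the output to a single volume hierarchy by naming the volume on the command line. For example, assuming a volume named datavol02 exists in datadg:
# vxprint -g datadg -ht datavol02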


For example, the following is the complete vxprint output for the disk group datadg:
# vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

dg datadg       default      default  91000    1000753077.1117.train12

dm datadg01     c1t10d0s2    sliced   3023     4191264  -
dm datadg02     c1t11d0s2    sliced   3023     4191264  -
dm datadg03     c1t14d0s2    sliced   3023     4191264  -
dm datadg04     c1t15d0s2    sliced   3023     4191264  -

v  datavol01    -            ENABLED  ACTIVE   20480    SELECT    -        fsgen
pl datavol01-01 datavol01    ENABLED  ACTIVE   21168    CONCAT    -        RW
sd datadg01-01  datavol01-01 datadg01 0        21168    0         c1t10d0  ENA
pl datavol01-02 datavol01    ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
sd datadg01-05  datavol01-02 datadg01 32289    33       LOG       c1t10d0  ENA

v  datavol02    -            ENABLED  ACTIVE   20480    SELECT    -        fsgen
pl datavol02-01 datavol02    ENABLED  ACTIVE   22224    STRIPE    2/128    RW
sd datadg02-01  datavol02-01 datadg02 0        11088    0/0       c1t11d0  ENA
sd datadg01-02  datavol02-01 datadg01 21168    11088    1/0       c1t10d0  ENA
pl datavol02-02 datavol02    ENABLED  ACTIVE   22224    STRIPE    2/128    RW
sd datadg03-02  datavol02-02 datadg03 31248    11088    0/0       c1t14d0  ENA
sd datadg04-02  datavol02-02 datadg04 31248    11088    1/0       c1t15d0  ENA

v  datavol03    -            ENABLED  ACTIVE   20480    SELECT    -        fsgen
pl datavol03-01 datavol03    ENABLED  ACTIVE   21168    CONCAT    -        RW
sd datadg02-02  datavol03-01 datadg02 11088    21168    0         c1t11d0  ENA
pl datavol03-02 datavol03    ENABLED  ACTIVE   21168    CONCAT    -        RW
sd datadg01-03  datavol03-02 datadg01 33264    21168    0         c1t10d0  ENA
pl datavol03-03 datavol03    ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
sd datadg01-04  datavol03-03 datadg01 32256    33       LOG       c1t10d0  ENA

v  datavol04    -            ENABLED  ACTIVE   61440    RAID      -        raid5
pl vol01-01     datavol04    ENABLED  ACTIVE   62464    RAID      3/32     RW
sd datadg03-01  vol01-01     datadg03 0        31248    0/0       c1t14d0  ENA
sd datadg04-01  vol01-01     datadg04 0        31248    1/0       c1t15d0  ENA
sd datadg02-03  vol01-01     datadg02 32256    31248    2/0       c1t11d0  ENA
pl vol01-02     datavol04    ENABLED  LOG      3024     CONCAT    -        RW
sd datadg01-06  vol01-02     datadg01 54432    3024     0         c1t10d0  ENA


Removing a Volume


When you remove a volume, the volume and all of its data are permanently destroyed, and the space it occupied is freed for use elsewhere. Remove a volume only if you are sure that you do not need the data in the volume or that the data is backed up elsewhere.
A volume must be closed before it can be removed:
- If the volume contains a file system, the file system must be unmounted, and you must remove the mount entry from the /etc/vfstab file to avoid errors at boot.
- If the volume is used as a raw device, the application, such as a database, must close the device.
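For example, a typical removal sequence, assuming a volume datavol in the disk group datadg with a file system mounted at /data, might look like the following (the vxassist remove volume command is covered on the pages that follow):
# umount /data
(Edit /etc/vfstab and delete the entry for /data.)
# vxassist -g datadg remove volume datavol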


Removing a Volume: VEA



To remove a volume:
1. In the main window, select the volume to be removed.
2. In the Actions menu, select Delete Volume.
3. In the Delete Volume dialog box, verify that the volume name displayed is the correct volume to be removed. Click Yes to confirm that you want to remove the volume.


Removing a Volume: CLI


To remove a volume from the command line, you use the command:
vxassist [-g diskgroup] remove volume volume_name

For example, to remove the volume datavol from the disk group datadg:
# vxassist -g datadg remove volume datavol

You can use the vxassist remove command with VxVM release 3.0 and later. For earlier versions of VxVM, use the vxedit command:
vxedit [-g diskgroup] -rf rm volume_name

In the syntax:
- Use the -r and -f options in conjunction to remove a started volume. If the -r option is not used, the removal fails if the volume has an associated plex.
- The -f option stops the volume so that it can be removed.
For more information, see the vxedit(1m) manual page.


Summary
You should now be able to:
- Identify the features, advantages, and disadvantages of volume layouts supported by VxVM.
- Create concatenated, striped, mirrored, and RAID-5 volumes by using VEA and from the command line.
- Display volume layout information by using VEA and by using the vxprint command.
- Remove a volume from VxVM by using VEA and from the command line.

This lesson described how to create a volume in VxVM. This lesson covered how to create a volume using different volume layouts, how to display volume layout information, and how to remove a volume.
Next Steps
In the next lesson, you learn how to configure additional volume attributes.
Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
- VERITAS Volume Manager User's Guide (VERITAS Enterprise Administrator): This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
- VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.



Lab 6: Creating a Volume


Goal
In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume, creating a volume with a file system, and mounting a file system.
To Begin This Lab
To begin the lab, go to Appendix A, "Lab Exercises." Lab solutions are contained in Appendix B, "Lab Solutions."


Configuring Volumes


Introduction
Overview
This lesson describes how to configure volumes in VxVM. This lesson covers how to add and remove a mirror, how to add a log, and how to add a file system to a volume. In addition, methods for allocating storage for volumes, changing the volume read policy, and creating layered volumes are also covered.
Importance
By configuring volume attributes, you can create volumes that meet the needs of your business environment.
Outline of Topics
- Administering Mirrors
- Adding a Log to a Volume
- Changing the Volume Read Policy
- Adding a File System to a Volume
- Allocating Storage for Volumes
- What Is a Layered Volume?
- Creating a Layered Volume


Objectives

After completing this lesson, you will be able to:
- Add a mirror to and remove a mirror from an existing volume by using VEA and from the command line.
- Add a dirty region log or RAID-5 log to an existing volume by using VEA and from the command line.
- Change the volume read policy for a mirrored volume to specify which plex in a volume is used to satisfy read requests by using VEA and from the command line.
- Add a file system to an existing volume by using VEA and from the command line.
- Allocate storage for a volume by specifying storage attributes and ordered allocation.
- List the benefits of layered volumes, which provide mirroring at a more granular level.
- Create and view layered volumes by using VEA and from the command line.



Administering Mirrors
Adding a Mirror
If a volume was not originally created as a mirrored volume, or if you want to add additional mirrors, you can add a mirror to an existing volume. Only concatenated or striped volumes can be mirrored. You cannot mirror a RAID-5 volume.
By default, a mirror is created with the same plex layout as the plex already in the volume. For example, assume that a volume is composed of a single striped plex. If you add a mirror to the volume, VxVM makes that plex striped, as well. You can specify a different layout using VEA or from the command line.
A mirrored volume requires at least two disks. You cannot add a mirror to a disk that is already being used by the volume. A volume can have multiple mirrors, as long as each mirror resides on a separate disk. Only disks in the same disk group as the volume can be used to create the new mirror. Unless you specify the disks to be used for the mirror, VxVM automatically locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31. One plex should be reserved for use by VxVM.


Adding a Mirror: VEA



To add a mirror to an existing volume:
1. In the main window, select the volume to be mirrored.
2. In the Actions menu, select Mirror>Add.
3. In the Add Mirror dialog box, specify the number of mirrors to add. The default is 1.
4. Specify the disks to use for the mirror. By default, VxVM determines which disks to use. To place the mirror on specific disks, select Manually select disks for use by this volume. Move disks that can be used for the mirror into the Included field. Move disks that cannot be used for the mirror into the Excluded field. Mark the Mirror Across check box to mirror the volume across a controller, tray, target, or enclosure. Mark the Stripe Across check box to stripe the volume across a controller, tray, target, or enclosure.
5. Click OK to create the mirror.
Note: Adding a mirror requires resynchronization of the volume, so this operation may take some time. To verify that a new mirror was added, view the total number of copies of the volume as displayed in the main window. The total number of copies is increased by the number of mirrors added.


Adding a Mirror: CLI


You can add a mirror to an existing volume any time after the volume is created by using the vxassist command. To add a mirror to a volume:
# vxassist -g diskgroup mirror volume_name

For example, to mirror the volume datavol in the disk group datadg, you type:
# vxassist -g datadg mirror datavol

To specify a different layout for the mirror, continue the command with layout= and specify the layout for the mirror. To add a mirror onto a specific disk, you specify the disk name in the command:
# vxassist -g diskgroup mirror volume_name [disk_name]

For example, to add a mirror to datavol using the disk datadg03:


# vxassist -g datadg mirror datavol datadg03

Mirroring All Volumes
To mirror all unmirrored volumes in a disk group to available disk space:
vxmirror -g diskgroup -a
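For example, to mirror all unmirrored volumes in the disk group datadg:
# vxmirror -g datadg -a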


Setting a Default Mirror on Volume Creation
You can configure VxVM to create mirrored volumes by default by using the command:
# vxmirror -d yes

If you make this change, you can still create unmirrored volumes by specifying nmirror=1 as an attribute to the vxassist command. For example, to create an unmirrored 20-megabyte volume named nomirrorvol, you type:
# vxassist make nomirrorvol 20m nmirror=1

To turn off the default creation of mirrored volumes, you type:


# vxmirror -d no


Removing a Mirror

When a mirror (plex) is no longer needed, you can remove it. When a mirror is removed, the space occupied by that mirror can be used elsewhere. Removing a mirror can be used:
- To provide free disk space
- To reduce the number of mirrors in a volume in order to reduce I/O to the volume
- To remove a temporary mirror that was created to back up a volume and is no longer needed
Subdisks from a removed plex are returned to the disk group's free space pool.
Caution: Removing a mirror can result in loss of data redundancy. If a volume only has two plexes, removing one of them leaves the volume unmirrored.


Removing a Mirror: VEA



To remove a mirror from a volume:
1. In the main window, select the volume that contains the mirror to be removed.
2. In the Actions menu, select Mirror>Remove.
3. Complete the Remove Mirror dialog box by specifying:
- Volume Name: Specify the volume that contains the mirror to be removed.
- Remove mirrors by: You can remove a mirror by the name of the mirror, by quantity, or by disk.
  By mirror: To specify the name of the mirror to be removed, select Mirror. Add the plex to be removed to the Selected mirrors field.
  By quantity: To specify a number of mirrors to be removed, select Quantity/Disk, and type the number of mirrors to be removed in the Mirror quantity field.
  By disk: To specify the name of disks on which mirrors should be preserved, select Quantity/Disk. Add the disks that are to retain their plexes to the Preserved disks field.
4. Click OK to complete the task.


Removing a Mirror: CLI


To remove a mirror from the command line, you use the command:
vxassist [-g diskgroup] remove mirror volume [!]dm_name

When deleting a mirror (or a log), you indicate the storage to be removed using the form !dm_name. For example, for the volume datavol, to remove the plex that contains a subdisk from the disk datadg02:
# vxassist -g datadg remove mirror datavol !datadg02

To remove the plex that uses any disk except datadg02:


# vxassist -g datadg remove mirror datavol datadg02

You can also use the vxplex and vxedit commands in combination to remove a mirror:
vxplex [-g diskgroup] dis plex_name
vxedit [-g diskgroup] -rf rm plex_name

For example:
# vxplex -g datadg dis datavol-02
# vxedit -g datadg -rf rm datavol-02

You can also use the single command:


vxplex -g diskgroup -o rm dis plex_name

For more information, see the vxplex(1m) and vxedit(1m) manual pages.

Adding a Log to a Volume





Logging in VxVM
By enabling logging, VxVM tracks changed regions of a volume. Log information can then be used to reduce plex synchronization times and speed the recovery of volumes after a system failure. Logging is an optional feature, but is highly recommended, especially for large volumes. VxVM supports two types of logging:
- Dirty region logging (for mirrored volumes)
- RAID-5 logging (for RAID-5 volumes)
Dirty Region Logging
Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. Prior to every write, a bitmap is written to a log to record the area of the disk that is being changed. In case of system failure, DRL uses this information to recover only the portions of the volume that need to be recovered.
If DRL is not used and a system failure occurs, all mirrors of the volumes must be restored to a consistent state by copying the full contents of the volume between its mirrors. This process can be lengthy and I/O intensive.


When you enable logging on a mirrored volume, one log plex is created by default. The log plex uses space from disks already used for that volume, or you can specify which disk to use. To enhance performance, you should consider placing the log plex on a disk that is not already in use by the volume. You can create additional DRL logs on different disks to mirror the DRL information.
RAID-5 Logging
When you create a RAID-5 volume, a RAID-5 log is added by default. RAID-5 logs speed up the resynchronization time for RAID-5 volumes after a system failure. A RAID-5 log maintains a copy of the data and parity being written to the volume at any given time. If a system failure occurs, VxVM can replay the RAID-5 log to resynchronize the volume. This copies the data and parity that was being written at the time of failure from the log to the appropriate areas of the RAID-5 volume.
You can create multiple RAID-5 logs on different disks to mirror the log information. Ideally, each RAID-5 volume should have at least two logs to protect against the loss of logging information due to the failure of a single disk. A RAID-5 log should be stored on a separate disk from the volume data and parity disks. Therefore, at least four disks are required to implement RAID-5 with logging. Although a RAID-5 volume cannot be mirrored, RAID-5 logs can be mirrored.
To support concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex. As a guideline, make the log six times the size of a full-stripe write to the RAID-5 volume; a worked example follows.
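To illustrate the sizing guideline with hypothetical numbers: for a RAID-5 volume with three columns and a stripe unit width of 32 sectors, a full-stripe write covers the two data columns, or 2 x 32 = 64 sectors, so a log of about 6 x 64 = 384 sectors follows the guideline. Assuming your release of vxassist accepts the loglen attribute, the log could then be created with:
# vxassist -g acctdg addlog payvol loglen=384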


Adding a Log: VEA


Select Actions>Log>Add. Select Actions>Log>Add.

Specify the disk Specify the disk or disks to or disks to contain the log. contain the log.

FOS35_Sol_R1.0_20020930

7-11

To remove a log, select Actions>Log>Remove. To remove a log, select Actions>Log>Remove.


FOS35_Sol_R1.0_20020930 7-11

You can add a log to a volume when you create the volume or at any time after volume creation. The type of log that is created is based on the type of volume layout. To add a log after volume creation:
1. In the main window, select the volume to contain the log.
2. In the Actions menu, select Log>Add.
3. In the Add Log dialog box, specify the disk to contain the log. By default, VxVM locates available space on any disk in the disk group and assigns the space automatically. To place the log on specific disks, select Manually assign destination disks, and move the desired destination disks from the left field to the right field. Disks in the right field are used to contain the log.
4. Click OK to complete the task.
Removing a Log: VEA
To remove a log from a volume:
1. Select the volume that contains the RAID-5 or DRL log to be removed.
2. In the Actions menu, select Log>Remove.
3. In the Remove Log dialog box, specify the volume name and removal method. The procedure is similar to removing a mirror.
4. Click OK to complete the task.
Note: When you remove the only log from a volume, logging is no longer in effect, and recovery time increases in the event of a system crash.


Adding a Log: CLI


You can add a dirty region log to a mirrored volume or add a RAID-5 log to a RAID-5 volume by using the vxassist addlog command. To add a dirty region log to a mirrored volume, you use the logtype=drl attribute. For a RAID-5 volume, you do not need to specify a log type; VxVM adds a RAID-5 log based on the volume layout.
# vxassist -g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]

For example, to add a dirty region log to the mirrored volume datavol in the disk group datadg:
# vxassist -g datadg addlog datavol logtype=drl

To add two dirty region logs, you add the nlog attribute:
# vxassist -g datadg addlog datavol logtype=drl nlog=2

To add a RAID-5 log to the RAID-5 volume payvol in the disk group acctdg:
# vxassist -g acctdg addlog payvol

VxVM recognizes that the layout is RAID-5 and adds a RAID-5 log. You can specify additional attributes, such as the disks that should contain the log, when you run the vxassist addlog command. When no disks are specified, VxVM uses space from the disks already in use by that volume, which may not be best for performance.
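For example, to place a dirty region log for datavol on a specific disk, name the disk at the end of the command (the disk chosen here is for illustration only):
# vxassist -g datadg addlog datavol logtype=drl datadg04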


Removing a Log: CLI
You can remove a dirty region log or a RAID-5 log by using the vxassist remove log command with the name of the volume. The appropriate type of log is removed based on the type of volume.
vxassist -g diskgroup remove log volume_name

For example, to remove the dirty region log from the volume datavol, you type:
# vxassist -g datadg remove log datavol

By default, vxassist removes one log. To remove more than one log, you can add the nlog=n attribute to specify the number of logs to be removed:
# vxassist -g datadg remove log nlog=2 datavol



Changing the Volume Read Policy


Volume Read Policies with Mirroring
One of the benefits of mirrored volumes is that you have more than one copy of the data from which to satisfy read requests. You can specify which plex VxVM should use to satisfy read requests by setting the read policy. The read policy for a volume determines the order in which volume plexes are accessed during I/O operations. VxVM has three read policies:
- Round robin: If you specify a round-robin read policy, VxVM reads each plex in turn in round-robin manner for each nonsequential I/O detected. Sequential access causes only one plex to be accessed in order to take advantage of drive or controller read-ahead caching policies. If a read is within 256K of the previous read, then the read is sent to the same plex.
- Preferred plex: With the preferred plex read policy, Volume Manager reads first from a plex that has been named as the preferred plex. Read requests are satisfied from one specific plex, presumably the plex with the highest performance. If the preferred plex fails, another plex is accessed.
- Selected plex: This is the default read policy. Under the selected plex policy, Volume Manager chooses an appropriate read policy based on the plex configuration to achieve the greatest I/O throughput. If the volume has an enabled striped plex, the read policy defaults to that plex; otherwise, it defaults to a round-robin read policy.


Read Policy: VEA


Select Actions>Set Volume Usage. Select Actions>Set Volume Usage.

Select a read Select a read policy. policy.

Default: Based on Layouts Default: Based on Layouts (Selected plex method) (Selected plex method)

If you select Preferred, If you select Preferred, then you can also select then you can also select the preferred plex from the preferred plex from the list of available plexes. the list of available plexes.

FOS35_Sol_R1.0_20020930

7-14

To change the volume read policy:
1. Select a volume in the main window.
2. In the Actions menu, select Set Volume Usage.
3. In the Set Volume Usage dialog box, select one of the following: Based on layouts, Round robin, or Preferred. The default setting is Based on layouts (the selected plex method). If you select Preferred, then you can also select the preferred plex from the list of available plexes.
4. To accept the change, click OK.


Read Policy: CLI


To change the volume read policy from the command line, you use the vxvol rdpol command:
vxvol -g diskgroup rdpol round volume_name
vxvol -g diskgroup rdpol prefer volume_name preferred_plex
vxvol -g diskgroup rdpol select volume_name

In the syntax, you specify the type of read policy that you want VxVM to use and the name of the volume to which the read policy applies. If you want to use the preferred-plex read policy, you must also specify the name of the preferred plex to use for reads. For example, to set the read policy for the volume datavol to round robin, you type:
# vxvol -g datadg rdpol round datavol

To set the policy for datavol to read preferentially from the plex datavol-02, you type:
# vxvol -g datadg rdpol prefer datavol datavol-02

To set the read policy for datavol to dynamically select an appropriate read policy based on the mirrors, you type:
# vxvol -g datadg rdpol select datavol
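To confirm the change, you can display the volume's single-line record; the READPOL column of the vxprint -ht output shown earlier reflects the current policy. For example:
# vxprint -g datadg -vt datavol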



Adding a File System to a Volume


A file system provides an organized structure to facilitate the storage and retrieval of files. You can add a file system to a volume when you initially create a volume or any time after you create the volume.
Adding a File System to a Volume: Methods
You can use either of the following methods to add a file system to a volume. These methods are detailed in the sections that follow.
- VEA: Select the volume to contain the file system, select Actions>File System>New File System, and complete the New File System dialog box.
- CLI: Create the file system with:
  mkfs -F fstype /dev/vx/rdsk/diskgroup/volume
  Then mount it with:
  mount -F fstype /dev/vx/dsk/diskgroup/volume mount_point


Adding a File System: VEA



To add a file system to an existing volume:
1. In the main window, select the volume to contain the file system.
2. In the Actions menu, select File System>New File System.
3. In the New File System dialog box, specify the File system type as vxfs (VERITAS File System) or ufs (UNIX File System).
4. Verify or set the mkfs and mount options to be used in creating and mounting the file system. The New File System Details and Mount File System Details buttons provide access to additional mkfs and mount options.
5. Click OK to complete the task.


Mounting a File System: VEA



A file system created with VEA is mounted automatically if you specify the mount point in the New File System dialog box. If a file system was previously created, but not mounted, on a volume, you can explicitly mount the file system. This procedure mounts a file system that already exists on a volume and updates the file system table file, if necessary. To mount a file system on an existing volume:
1. Select the volume that contains the file system to be mounted.
2. In the Actions menu, select File System>Mount File System.
3. Complete the Mount File System dialog box and click OK.
Unmounting a File System: VEA
To unmount a file system on a volume:
1. Select the volume containing the file system to be unmounted.
2. In the Actions menu, select File System>Unmount File System.
3. Click Yes to confirm your decision to unmount the file system.


Adding a File System: CLI


To add a file system to a volume from the command line, you must create the file system, create a mount point for the file system, and then mount the file system.
1. To create the file system, use the mkfs (VxFS) or newfs (UFS) command with the appropriate options to create the file system on the volume. For example:
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
2. Create a directory to use as a mount point for the file system:
# mkdir /data
3. Use the mount command with appropriate options to link the volume to the mount point:
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data
Notes: When a file system has been mounted on a volume, the data is accessed through the mount point directory. When data is written to files, it is actually written to the block device file: /dev/vx/dsk/datadg/datavol. When fsck is run on the file system, the raw device file is checked: /dev/vx/rdsk/datadg/datavol. newfs can be used for UFS, but not for VxFS.
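For example, to check the file system before mounting it, run fsck against the raw device mentioned in the notes above:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol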


Mount File System at Boot

To mount the file system automatically at boot time, edit the /etc/vfstab file to add an entry for the file system. In the vfstab file, you specify:

Device to mount:    /dev/vx/dsk/datadg/datavol
Device to fsck:     /dev/vx/rdsk/datadg/datavol
Mount point:        /data
File system type:   vxfs
fsck pass:          1
Mount at boot:      yes
Mount options:      -

Mounting a File System at Boot: CLI
If you want the file system to be mounted at every system boot, you must edit the /etc/vfstab file by adding an entry for the file system. If you later decide to remove the volume, you must remove the entry in the /etc/vfstab file. The following is an example of an /etc/vfstab file:
#device                     device                       mount    FS    fsck  mount    mount
#to mount                   to fsck                      point    type  pass  at boot  options
#
#/dev/dsk/c1d0s2            /dev/rdsk/c1d0s2             /usr     ufs   1     yes      -
/proc                       -                            /proc    proc  -     no       -
fd                          -                            /dev/fd  fd    -     no       -
swap                        -                            /tmp     tmpfs -     yes      -
/dev/dsk/c0t3d0s0           /dev/rdsk/c0t3d0s0           /        ufs   1     no       -
/dev/dsk/c0t3d0s1           -                            -        swap  -     no       -
/dev/vx/dsk/datadg/datavol  /dev/vx/rdsk/datadg/datavol  /data    vxfs  1     yes      -
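After adding the entry, one quick way to test it without rebooting is to mount by mount point alone; when only the mount point is given, the Solaris mount command looks up the remaining fields in /etc/vfstab:
# mount /data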

Note: In VEA, when you create a file system, if you select the Add to file system table and Mount at boot check boxes, the entry is made automatically in the /etc/vfstab file. If the volume is later removed through VEA, its corresponding /etc/vfstab file entry is also removed automatically.



Allocating Storage for Volumes


Specifying Storage Attributes for Volumes
VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. To create a volume on specific disks, you can designate those disks when creating a volume. By specifying storage attributes when you create a volume, you can:
- Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
- Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
- Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM does not permit mirroring on the same disk.)
By specifying storage attributes, you can ensure a high availability environment. For example, you can permit mirroring of a volume only on disks connected to different controllers, and eliminate the controller as a single point of failure.
Note: When creating a volume, all storage attributes that you specify for use must belong to the same disk group. Otherwise, VxVM does not use them to create a volume.



Specifying Storage Attributes: VEA
To specify storage attributes when creating a volume by using VEA:
1. In the New Volume wizard, advance to the Select disks to use for volumes page.
2. Select Manually select disks for use by this volume.
3. Select the disks and the storage layout policy for allocating storage to a volume. You can specify that the volume is to be mirrored or striped across controllers, enclosures, targets, or trays.

Note: A tray is a set of disks within certain Sun arrays.


Specifying Storage Attributes: CLI
To create a volume on specific disks, you add storage attributes to the end of the vxassist command:
vxassist [-g diskgroup] make volume_name length [layout=layout] storage_attributes...

Storage attributes can include:
- Disk names, in the format diskname, for example, datadg02
- Controllers, in the format ctlr:controller_name, for example, ctlr:c2
- Enclosures, in the format enclr:enclosure_name, for example, enclr:emc1
- Targets, in the format target:target_name, for example, target:c2t4
- Trays, in the format c#tray#, for example, c2tray2
To exclude a disk, controller, enclosure, target, or tray, you add the exclusion symbol (!) before the storage attribute. For example, to exclude datadg02 from volume creation, you use the format: !datadg02.
When mirroring volumes across controllers, enclosures, or targets, you can use additional attributes:
- The attribute mirror=ctlr specifies that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume.
- The attribute mirror=enclr specifies that disks in one mirror should not be in the same enclosure as disks in other mirrors within the same volume.
- The attribute mirror=target specifies that volumes should be mirrored between identical target IDs on different controllers.
Note: The vxassist utility has an internal default mirror=disk attribute that prevents you from mirroring data on the same disk.


Storage Attributes: Examples


Example: Creating a Volume on Specific Disks
To create a 5-GB volume called datavol on datadg03 and datadg04:
# vxassist -g datadg make datavol 5g datadg03 datadg04

Examples: Excluding Storage from Volume Creation
To create the volume datavol using any disks except for datadg05:
# vxassist -g datadg make datavol 5g !datadg05

To exclude all disks that are on controller c2:


# vxassist -g datadg make datavol 5g !ctlr:c2

To include only disks on controller c1 except for target t5:


# vxassist -g datadg make datavol 5g ctlr:c1 !target:c1t5

To exclude disks datadg07 and datadg08 when calculating the maximum size of a RAID-5 volume that vxassist can create using the disks in the disk group datadg:
# vxassist -g datadg maxsize layout=raid5 nlog=2 !datadg07 !datadg08


Example: Mirroring Across Controllers
To create a mirrored volume with two data plexes, and to specify that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume:
# vxassist -g datadg make datavol 10g layout=mirror nmirror=2 mirror=ctlr ctlr:c2 ctlr:c3

The disks in one data plex are all attached to controller c2, and the disks in the other data plex are all attached to controller c3. This arrangement ensures continued availability of the volume should either controller fail.
Example: Mirroring Across Enclosures
To create a mirrored volume with two data plexes, and to specify that disks in one mirror should not be in the same enclosure as disks in other mirrors within the same volume:
# vxassist -g datadg make datavol 10g layout=mirror nmirror=2 mirror=enclr enclr:emc1 enclr:emc2

The disks in one data plex are all taken from enclosure emc1, and the disks in the other data plex are all taken from enclosure emc2. This arrangement ensures continued availability of the volume should either enclosure become unavailable.


Ordered Allocation

Specifying Ordered Allocation of Storage for Volumes
In addition to specifying which storage devices VxVM uses to create a volume, you can also specify how the volume is distributed on the specified storage. By using the ordered allocation feature of VxVM, you can control how volumes are laid out on specified storage. Ordered allocation is available in VxVM 3.2 and later.
When you use ordered allocation in creating a volume, columns and mirrors are created on disks based on the order in which you list the disks on the command line. Storage is allocated in the following order:
1. First, VxVM concatenates the disks.
2. Secondly, VxVM forms columns.
3. Finally, VxVM forms mirrors.
For example, if you are creating a three-column mirror-stripe volume using six specified disks, VxVM creates column 1 on the first disk, column 2 on the second disk, and column 3 on the third disk. Then, the mirror is created using the fourth, fifth, and sixth specified disks. Without the ordered allocation option, VxVM uses the disks in any order.



Specifying Ordered Allocation: VEA
To specify ordered allocation using VEA:
1. In the New Volume wizard, select Manually select disks for use by this volume.
2. Select the disks and storage layout policy for the volume, and mark the Ordered check box.

When Ordered is selected, VxVM uses the specified storage to first concatenate disks, then to form columns, and finally to form mirrors.
Specifying Ordered Allocation: CLI
To implement ordered allocation of storage to volumes, you use the -o ordered option to vxassist when creating a volume:
vxassist [-g diskgroup] [-o ordered] make volume_name...

Two optional attributes are also available with the -o ordered option:
- You can use the col_switch=size1,size2... attribute to specify how to allocate space from each listed disk to a concatenated column before switching to the next disk. The number of size arguments determines how many disks are concatenated to form a column.
- You can use the logdisk=disk attribute to specify the disk on which logs are created. This attribute is required when using ordered allocation in creating a RAID-5 volume, unless nolog or noraid5log is specified. For other types of volume layouts, this attribute is optional. (A sketch of a RAID-5 example follows.)
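A minimal sketch of logdisk in use; the volume name and disk choices here are illustrative only:
# vxassist -g datadg -o ordered make raidvol 10g layout=raid5 ncol=3 logdisk=datadg04 datadg01 datadg02 datadg03
With -o ordered, columns 1, 2, and 3 are placed on datadg01, datadg02, and datadg03, and the RAID-5 log is placed on datadg04.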


Ordered Allocation: Examples


Example: Order of Columns
To create a 10-GB striped volume, called datavol, with three columns striped across three disks:
# vxassist -g datadg -o ordered make datavol 10g layout=stripe ncol=3 datadg02 datadg04 datadg06

Because the -o ordered option is specified, column 1 is placed on datadg02, column 2 is placed on datadg04, and column 3 is placed on datadg06. Without this option, column 1 can be placed on any of the three disks, column 2 on any of the remaining two disks, and column 3 on the remaining disk.



Example: Order of Mirrors
To create a mirrored volume using datadg02 and datadg04:
# vxassist -g datadg -o ordered make datavol 10g layout=mirror datadg02 datadg04

Because the -o ordered option is specified, the first mirror is placed on datadg02 and the second mirror is placed on datadg04. Without this option, the first mirror could be placed on either disk.
Note: There is no logical difference between the mirrors. However, by controlling the order of mirrors, you can associate plex names with specific disks (for example, datavol-01 with datadg02 and datavol-02 with datadg04). This level of control is significant when you perform mirror breakoff and disk group split operations. You can establish conventions that indicate to you which specific disks are used for the mirror breakoff operations.



Example: First Form Columns, Then Form Mirrors

To create a mirrored-stripe volume with 3 columns and 2 mirrors on 6 disks:
# vxassist -g datadg -o ordered make datavol 10g layout=mirror-stripe ncol=3 datadg01 datadg02 datadg03 datadg04 datadg05 datadg06

Because the -o ordered option is included, this command places columns 1, 2, and 3 of the first mirror on datadg01, datadg02, and datadg03, respectively. Columns 1, 2, and 3 of the second mirror are placed on datadg04, datadg05, and datadg06, respectively.


Ordered Allocation: Examples

Specifying Column Concatenation

# vxassist -g datadg -o ordered make datavol 10g layout=mirror-stripe ncol=2 col_switch=3g,2g datadg01 datadg02 datadg03 datadg04 datadg05 datadg06 datadg07 datadg08

(Slide diagram: 3 GB from datadg01 and 2 GB from datadg02 go to column 1; 3 GB from datadg03 and 2 GB from datadg04 go to column 2. The mirror is created in the same way from disks datadg05 through datadg08.)

Example: Concatenating Columns

You can use the col_switch attribute to specify how to concatenate space on the disks into columns. For example, to create a 2-column, mirrored-stripe volume:
# vxassist -g datadg -o ordered make datavol 10g layout=mirror-stripe ncol=2 col_switch=3g,2g datadg01 datadg02 datadg03 datadg04 datadg05 datadg06 datadg07 datadg08

Because the col_switch attribute is included, this command allocates 3 GB from datadg01 and 2 GB from datadg02 to column 1, and 3 GB from datadg03 and 2 GB from datadg04 to column 2. The mirrors of these columns are then similarly formed from disks datadg05 through datadg08.


Ordered Allocation: Examples

Specifying Other Storage Classes

# vxassist -g datadg -o ordered make datavol 80g layout=mirror-stripe ncol=3 ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6

(Slide diagram: space for column 1 is allocated from disks on controller c1, for column 2 from disks on controller c2, and so on.)

Example: Other Storage Classes

You can use other storage specification classes, such as controllers, enclosures, targets, and trays, with ordered allocation. For example, to create a 3-column, mirrored-stripe volume between specified controllers:

# vxassist -g datadg -o ordered make datavol 80g layout=mirror-stripe ncol=3 ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6

This command allocates space for column 1 from disks on controller c1, for column 2 from disks on controller c2, and so on.


Specifying SAN Storage Groups

In a SAN environment, you can specify how VxVM uses storage groups when creating volumes. Storage groups are defined with VERITAS SANPoint Control. The vxassist utility is SAN-aware and has additional options for allocating storage from disks with specific SAN storage attributes.

In a Storage Area Network (SAN) environment, if you are using VxVM in conjunction with VERITAS SANPoint Control, you can specify how VxVM uses the available storage groups when creating volumes. The vxassist utility is SAN-aware, which means that after you define SAN storage groups using VERITAS SANPoint Control, you can specify disk space allocation from those storage groups. Additional vxassist options enable you to allocate storage from disks with specific SAN storage attributes. These options are beyond the scope of this training. For more information, see the vxassist(1m) manual page.


What Is a Layered Volume?

(Slide summary: With original mirroring, the loss of a disk results in the loss of the complete plex, and a second disk failure could result in the loss of the complete volume. With layered volumes, mirroring is performed at the column or subdisk level, so disk losses are less likely to affect the complete volume.)
Methods Used to Mirror Data

VxVM provides two ways to mirror your data:

Original VxVM mirroring: With the original method of mirroring, data is mirrored at the plex level. This means that the loss of a disk results in the loss of a complete plex. A second disk failure could result in the loss of a complete volume if the volume has only two mirrors. To recover the volume, the complete volume contents must be copied from backup.

Enhanced mirroring: VxVM 3.0 introduced support for an enhanced type of mirrored volume called a layered volume. A layered volume is a virtual Volume Manager object that mirrors data at a more granular level. To do this, VxVM creates subvolumes from traditional bottom-layer objects, or subdisks. These subvolumes function much like volumes and have their own associated plexes and subdisks. With this method of mirroring, data is mirrored at the column or subdisk level. Loss of a disk results in the loss of a copy of a column or subdisk within a plex. Further disk losses may occur without affecting the complete volume. Only the data contents of the column or subdisk affected by the loss of the disk need to be recovered, and this recovery can be performed from an up-to-date mirror of the failed disk.

Note: Only VxVM versions 3.0 and later support layered volumes. To create a layered volume, you must upgrade the disk group that owns the layered volume to version 60 or above.
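For example, a minimal sketch, assuming a disk group named datadg: you could check the current disk group version and, if necessary, upgrade it before creating layered volumes (vxdg upgrade moves the disk group to the highest version supported by the installed VxVM release):

# vxdg list datadg | grep version
# vxdg upgrade datadg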


Traditional Mirroring

(Slide diagram: a mirrored volume with two striped plexes, one made up of subdisks sd1 and sd3 and the other of subdisks sd2 and sd4, on underlying disks disk01 through disk04. When two disks fail, the volume survives 2/6, or 1/3, of the time.)

Comparing Regular Mirroring with Enhanced Mirroring

To understand the purpose and benefits of layered volume layouts, compare regular mirroring with the enhanced mirroring of layered volumes in a disk failure scenario.

Regular Mirroring

The example illustrates a regular mirrored volume layout called a mirror-stripe layout. Data is striped across two disks, disk01 and disk03, to create one plex, and that plex is mirrored and striped across two other disks, disk02 and disk04. If two drives fail, the volume survives 2 out of 6 (1/3) times. As more subdisks are added to each plex, the odds of a traditional volume surviving a two-disk failure approach (but never equal) 50 percent.

If a disk fails in a mirror-stripe layout, the entire plex is detached, and redundancy is lost on the entire volume. When the disk is replaced, the entire plex must be brought up-to-date, or resynchronized.


Layered Volumes

(Slide diagram: a layered volume with one striped plex built from two mirrored subvolumes, each subvolume containing two plexes on underlying disks disk01 through disk04. When two disks fail, the volume survives 4/6, or 2/3, of the time.)

Layered Volumes

The example illustrates a layered volume layout called a stripe-mirror layout. In this layout, VxVM creates underlying volumes that mirror each subdisk. These underlying volumes are used as subvolumes to create a top-level volume that contains a striped plex of the data.

If two drives fail, the volume survives 4 out of 6 (2/3) times. In other words, using layered volumes cuts the risk of volume failure in half. As more subvolumes are added, the odds of a volume surviving a two-disk failure approach 100 percent. For volume failure to occur, both subdisks that make up a subvolume must fail.

If a disk fails, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered, which takes less time.

Failed Subdisks   Stripe-Mirror (Layered)   Mirror-Stripe (Nonlayered)
1 and 2           Down                      Down
1 and 3           Up                        Up
1 and 4           Up                        Down
2 and 3           Up                        Down
2 and 4           Up                        Up
3 and 4           Down                      Down


How Do Layered Volumes Work?

(Slide diagram: four underlying disks each contribute a subdisk to two second-level volumes; those volumes serve as subvolumes in the plex of the top-level volume. Volumes are constructed from subvolumes, and only the top-level volume is accessible to applications.)

In a regular mirrored volume, top-level plexes are made up of subdisks. In a layered volume, these subdisks are replaced by subvolumes. Each subvolume is associated with a second-level volume. This second-level volume contains second-level plexes, and each second-level plex contains one or more subdisks. In a layered volume, only the top-level volume is accessible as a device for use by applications.

Note: You can also build a layered volume from the bottom up by using the vxmake command. For more information, see the vxmake(1m) manual page.


Layered Volumes: Pros and Cons

Advantages:
Improved redundancy
Faster recovery times

Disadvantages:
Requires more VxVM objects
Fills up the disk group configuration database sooner

Layered Volumes: Advantages

Improved redundancy: Layered volumes tolerate disk failure better than nonlayered volumes and provide improved data redundancy.

Faster recovery times: If a disk in a layered volume fails, a smaller portion of the redundancy is lost, and recovery and resynchronization times are usually quicker than for a nonlayered volume that spans multiple drives. For a stripe-mirror volume, recovery from a single subdisk failure requires resynchronization of only the lower plex, not the top-level plex. For a mirror-stripe volume, recovery from a single subdisk failure requires resynchronization of the entire plex (the full volume contents) that contains the subdisk.

Layered Volumes: Disadvantages

Requires more VxVM objects: Layered volumes consist of more VxVM objects than nonlayered volumes. Therefore, layered volumes may fill up the disk group configuration database sooner than nonlayered volumes. When the configuration database is full, you cannot create more volumes in the disk group. The minimum size of the private region is 2048 sectors rounded up to the cylinder boundary. With modern disks with large cylinder sizes, this size can be quite large. Each VxVM object requires about 256 bytes. The private region can be made larger when a disk is initialized, but only from the command line. The size cannot be changed after disks have been initialized.

Note: With VxVM 3.2 and later, the maximum size of the private region was doubled in order to better accommodate layered volumes.
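As a hedged sketch, assuming an unused device named c2t4d0: a disk could be initialized with a larger private region to leave room for the extra objects that layered volumes create (the privlen length is given in sectors by default):

# vxdisksetup -i c2t4d0 privlen=4096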


Mirrored Volumes: Types

Nonlayered:
mirror-concat: Top-level volume has more than one plex, and the plexes are concatenated in structure.
mirror-stripe: Top-level volume has more than one plex, and the plexes are striped in structure.

Layered:
concat-mirror: Top-level volume has one concatenated plex, and the component subdisks are mirrored.
stripe-mirror: Top-level volume has one striped plex, and the component subdisks are mirrored.

Layered Volume Layouts

To mirror your data, you can create regular mirrored layouts or layered layouts. In general, you should use regular mirrored layouts for smaller volumes and layered layouts for larger volumes. Before you create layered volumes, you need to understand the terminology that defines the different types of mirrored layouts in VxVM. The following layout types are hyphenated terms. The first term indicates the structure of the top-level volume, and the second term indicates the structure of the lower layers:

Layout Type     Description
mirror-concat   The top-level volume contains more than one plex, and the plexes are concatenated in structure.
mirror-stripe   The top-level volume contains more than one plex, and the plexes are striped in structure.
concat-mirror   The top-level volume comprises a concatenated plex, and the component subdisks (subvolumes) are mirrored.
stripe-mirror   The top-level volume comprises a striped plex, and the component subdisks (subvolumes) are mirrored.


mirror-concat

(Slide diagram: the top-level volume contains more than one plex (mirror), and the plexes are concatenated. A 1.5-GB volume is built from one plex containing subdisk 1 (1.5 GB) and a second plex concatenating subdisk 3 (1 GB) and subdisk 4 (500 MB).)

mirror-concat: The top-level volume contains more than one plex (mirror), and the plexes are concatenated in structure.

This layout mirrors data across concatenated plexes. The concatenated plexes can be made up of subdisks of different sizes. In the example, the plexes are mirrors of each other; each plex is a concatenation of one or more subdisks, and the plexes are of equal size.

When you create a simple mirrored volume that is less than 1 GB in size, a nonlayered mirrored volume is created by default. Nonlayered, mirrored layouts are recommended if you are using less than 1 GB of space, or if you are using a single drive for each copy of the data.
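A minimal creation sketch, assuming a disk group named datadg with disks datadg01 through datadg04 available; VxVM selects which of the listed disks contribute to each plex unless ordered allocation is also specified:

# vxassist -g datadg make datavol 1500m layout=mirror-concat datadg01 datadg02 datadg03 datadg04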


mirror-stripe

(Slide diagram: the top-level volume contains more than one plex (mirror), and the plexes are striped. A 1.5-GB volume is built from one striped plex containing subdisks 1 and 2 (750 MB each) and a second striped plex containing subdisks 3 and 4 (750 MB each).)

mirror-stripe: The top-level volume contains more than one plex (mirror), and the plexes are striped in structure.

This layout mirrors data across striped plexes. The striped plexes can be made up of different numbers of subdisks. In the example, the plexes are mirrors of each other, and each plex is striped across the same number of subdisks. Each striped plex can have a different number of columns and a different stripe unit size, and one plex could even be concatenated.

When you create a striped, mirrored volume that is less than 1 GB in size, a nonlayered mirrored volume is created by default. Nonlayered, mirrored layouts are recommended if you are using less than 1 GB of space, or if you are using a single drive for each copy of the data.
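A minimal creation sketch, assuming the same datadg disk group and four disks (names are illustrative):

# vxassist -g datadg make datavol 1500m layout=mirror-stripe ncol=2 datadg01 datadg02 datadg03 datadg04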


concat-mirror

(Slide diagram: the top-level volume comprises a concatenated plex, and the subvolumes are mirrored. A 3.5-GB top-level volume concatenates a 1.5-GB subvolume, mirrored across subdisks 1 and 2, and a 2-GB subvolume, mirrored across subdisks 3 and 4.)

concat-mirror: The top-level volume comprises one concatenated plex, and the component subdisks (subvolumes) are mirrored.

This volume layout contains a single plex made up of one or more concatenated subvolumes. Each subvolume comprises two concatenated plexes (mirrors) made up of one or more subdisks. If there are two subdisks in the top-level plex, then a second subvolume is created, which is used as the second concatenated subdisk of the plex. Additional subvolumes can be added and concatenated in the same manner.

In the VEA interface, the GUI term used for a layered, concatenated layout is Concatenated Pro. Concatenated Pro volumes are mirrored by default and therefore require more disks than unmirrored concatenated volumes. Concatenated Pro volumes require at least two disks. You cannot use a Concatenated Pro volume for a root or swap volume.


stripe-mirror

(Slide diagram: the top-level volume comprises a striped plex, and the subvolumes are mirrored. A 1.5-GB top-level volume stripes across two 750-MB subvolumes, one mirrored across subdisks 1 and 2 and the other mirrored across subdisks 3 and 4.)

stripe-mirror: The top-level volume comprises one striped plex, and the component subdisks (subvolumes) are mirrored.

This volume layout stripes data across mirrored volumes. The difference between stripe-mirror and concat-mirror is that the top-level plex is striped rather than concatenated.

In the VEA interface, the GUI term used for a layered, striped layout is Striped Pro. Striped Pro volumes are mirrored by default and therefore require more disks than unmirrored striped volumes. Striped Pro volumes require at least four disks. You cannot use Concatenated Pro or Striped Pro volumes for a root or swap volume.


Creating Layered Volumes: VEA

(Slide: select Actions>New Volume, specify volume properties, and choose a layout of Concatenated Pro or Striped Pro.)

Creating a Layered Volume: VEA

To create a layered volume:
1 Select a disk group, and select Actions>New Volume.
2 Complete the New Volume wizard by specifying standard volume properties, such as disk group name, volume name, and size, and options, such as mirroring, logging, disks, and file system. Select one of the two layered volume layout types:
Concatenated Pro: The Concatenated Pro layout refers to a concat-mirror volume.
Striped Pro: The Striped Pro layout refers to a stripe-mirror volume.
3 Click OK to complete the task.


Creating Layered Volumes: CLI

To create a layered volume from the command line:

vxassist -g diskgroup make volume length layout=type [other_attributes]

You can specify the following layered layout types:
layout=concat-mirror
layout=stripe-mirror

Note: To create simple (nonlayered) mirrored volumes, you can use the layout types:
layout=mirror-concat
layout=mirror-stripe

Creating a Layered Volume: CLI

To create a mirrored volume from the command line:

vxassist -g diskgroup make volume_name length layout=type [other_attributes]

In the syntax, you can specify any of the following layout types:
To create layered volumes: layout=concat-mirror or layout=stripe-mirror
To create simple mirrored volumes: layout=mirror-concat or layout=mirror-stripe

For striped volumes, you can specify other attributes, such as ncol=number_of_columns and stripeunit=size, as in the sketch that follows.
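For example, a sketch combining a layered layout with striping attributes (the 64k stripe unit is an illustrative value, not a recommendation):

# vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=4 stripeunit=64k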


Controlling Mirroring Behavior

VxVM operates according to built-in rules, called trigger points, to determine the level of mirroring. VxVM uses the layout type that you specify on the command line in combination with trigger point values to determine whether mirroring occurs at the volume, column, or subdisk level. To control the level of mirroring in a layout, you can alter the default values of the trigger points.

Note: In VEA, you can select the No layered volumes check box in the New Volume wizard to prevent VxVM from creating a layered volume.

Controlling VxVM Mirroring

When you create any mirrored volume from the command line, VxVM operates according to built-in rules, called trigger points, to determine how a volume is configured, whether or not a volume is layered, and at what level mirroring is performed. In general, nonlayered layouts are used for small volumes (less than 1 GB), and layered layouts are used for larger volumes. You can allow VxVM to select a mirrored layout based on these built-in rules, or you can override these rules. For more information on trigger points, see the vxassist(1m) manual page.

Note: In VEA, you can prevent VxVM from creating a layered volume by marking the No layered volumes check box in the New Volume wizard.
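For example, with layout=stripe,mirror the trigger points decide the mirroring level. Given the general rules above, a volume well under 1 GB would typically be created nonlayered, while a multigigabyte volume would typically be created as a layered stripe-mirror (datadg and the volume names are assumed for illustration):

# vxassist -g datadg make smallvol 500m layout=stripe,mirror ncol=2
# vxassist -g datadg make bigvol 5g layout=stripe,mirror ncol=2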


Default Mirroring Behavior

Striped Layouts
mirror-stripe: If you specify a mirror-stripe layout, the new volume is mirrored at the volume level, and the trigger points are ignored.
stripe,mirror: If you specify layout=stripe,mirror, the trigger points are applied to determine the level of mirroring.
stripe-mirror: If you specify a stripe-mirror layout, mirroring is handled at the column or subdisk level, depending on the trigger point attribute.

Concatenated Layouts
mirror: If you specify layout=mirror, the trigger points are applied to determine the level of mirroring.
mirror-concat: If you specify a mirror-concat layout, the new volume is mirrored at the volume level, and the trigger points are ignored.
concat-mirror: If you specify a concat-mirror layout, mirroring is handled at the subdisk level, and the trigger point attributes are applied.


Creating Layered Volumes: Examples

In this example, a layered stripe-mirror layout is created:

# vxassist -g datadg make datavol 10g layout=stripe-mirror

In this example, a layered concat-mirror layout is created:

# vxassist -g datadg make datavol 10g layout=concat-mirror


Viewing Layered Volumes

# vxprint -rth vol01

(Slide: the output lists the top-level volume and plex, followed by each subvolume with its second-level volume, plexes, and subdisks.)

Viewing a Layered Volume: VEA

To view the layout of a layered volume, you can use any of the methods for displaying volume information, including:
Object views in the main window
Disk View window
Volume View window
Volume to Disk Mapping window
Volume Layout window

Viewing a Layered Volume: CLI

To view the configuration of a layered volume from the command line, you use the vxprint command:

vxprint -rth volume_name

In the syntax:
The -r option ensures that subvolume configuration information for a layered volume is displayed.
The -t option prints single-line output records.
The -h option lists hierarchies below selected records.
volume_name is the name of the volume for which you want to display the configuration.


Example: Displaying a concat-mirror Configuration

With layered volumes, the underlying volumes are displayed in the same format as top-level volumes, but they are listed under the subvolume that maps to them. The following is an example of configuration information for a concat-mirror volume layout:

# vxprint -rth vol01
...
v  vol01         -          ENABLED  ACTIVE    16384000 SELECT   -        fsgen
pl vol01-03      vol01      ENABLED  ACTIVE    16384000 CONCAT   -        RW
sv vol01-S01     vol01-03   vol01-L01 1        8007120  0        2/2      ENA
v2 vol01-L01     -          ENABLED  ACTIVE    8007120  SELECT   -        fsgen
p2 vol01-P01     vol01-L01  ENABLED  ACTIVE    8007120  CONCAT   -        RW
s2 datadg05-02   vol01-P01  datadg05 0         8007120  0        c1t2d0   ENA
p2 vol01-P02     vol01-L01  ENABLED  ACTIVE    8007120  CONCAT   -        RW
s2 datadg03-02   vol01-P02  datadg03 0         8007120  0        c1t3d0   ENA
sv vol01-S02     vol01-03   vol01-L02 1        8376880  8007120  2/2      ENA
v2 vol01-L02     -          ENABLED  ACTIVE    8376880  SELECT   -        fsgen
p2 vol01-P03     vol01-L02  ENABLED  ACTIVE    8376880  CONCAT   -        RW
s2 datadg02-02   vol01-P03  datadg02 0         8376880  0        c1t6d0   ENA
p2 vol01-P04     vol01-L02  ENABLED  ACTIVE    8376880  CONCAT   -        RW
s2 datadg04-02   vol01-P04  datadg04 0         8376880  0        c1t5d0   ENA

In the output, the record types identify the subvolume levels:
v represents a top-level volume.
sv represents a top-level subvolume that maps to a subdisk in an underlying volume.
v2 represents an underlying volume.
p2 represents a plex associated with an underlying volume.
s2 represents a subdisk associated with an underlying volume.


Example: Displaying a stripe-mirror Configuration

This example shows configuration information for a stripe-mirror volume layout, with the same subvolume record types as in the previous example:

# vxprint -rth vol01
...
v  vol01         -          ENABLED  ACTIVE    3072000  SELECT   vol01-03 fsgen
pl vol01-03      vol01      ENABLED  ACTIVE    3072000  STRIPE   2/128    RW
sv vol01-S01     vol01-03   vol01-L01 1        1536000  0/0      2/2      ENA
v2 vol01-L01     -          ENABLED  ACTIVE    1536000  SELECT   -        fsgen
p2 vol01-P01     vol01-L01  ENABLED  ACTIVE    1536000  CONCAT   -        RW
s2 datadg05-02   vol01-P01  datadg05 0         1536000  0        c1t2d0   ENA
p2 vol01-P02     vol01-L01  ENABLED  ACTIVE    1536000  CONCAT   -        RW
s2 datadg06-02   vol01-P02  datadg06 0         1536000  0        c1t4d0   ENA
sv vol01-S02     vol01-03   vol01-L02 1        1536000  1/0      2/2      ENA
v2 vol01-L02     -          ENABLED  ACTIVE    1536000  SELECT   -        fsgen
p2 vol01-P03     vol01-L02  ENABLED  ACTIVE    1536000  CONCAT   -        RW
s2 datadg03-02   vol01-P03  datadg03 0         1536000  0        c1t3d0   ENA
p2 vol01-P04     vol01-L02  ENABLED  ACTIVE    1536000  CONCAT   -        RW
s2 datadg04-02   vol01-P04  datadg04 0         1536000  0        c1t5d0   ENA


Summary

You should now be able to:
Add a mirror to and remove a mirror from an existing volume.
Add a log to an existing volume.
Change the volume read policy for a mirrored volume.
Add a file system to an existing volume.
Allocate storage for a volume.
List the benefits of layered volumes.
Create layered volumes.

This lesson described how to configure volumes in VxVM: how to add and remove a mirror, how to add a log, and how to add a file system to a volume. It also covered methods for allocating storage for volumes, changing the volume read policy, and creating layered volumes.

Next Steps
In the next lesson, you learn how to perform common volume maintenance tasks.

Additional Resources
VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
VERITAS Volume Manager User's Guide - VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.


Lab 7: Configuring Volumes

Goal
This lab provides additional practice in configuring volume attributes. In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes. You also practice creating layered volumes.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Volume Maintenance


Introduction

Overview
This lesson describes how to perform and monitor volume maintenance tasks using VERITAS Volume Manager (VxVM), including online administration tasks such as resizing a volume, creating a volume snapshot, and changing the layout of a volume.

Importance
With VxVM, you can perform volume maintenance, such as changing the size and layout of a volume, without disrupting applications or file systems that are using the volume. A volume layout can be resized, reconfigured, monitored, and controlled while the volume is online and accessible to users.

Outline of Topics
Resizing a Volume
Creating a Volume Snapshot
Changing the Volume Layout
Managing Volume Tasks


Objectives

After completing this lesson, you will be able to:
Resize a volume while the volume remains online, by using VEA and from the command line.
Duplicate the contents of volumes by creating volume snapshots, by using VEA and from the command line.
Change the volume layout while the volume remains online, by using VEA and from the command line.
Manage volume maintenance tasks by monitoring and controlling task progress.


Resizing a Volume

To resize a volume, you can:
Specify a desired new volume size.
Add to or subtract from the current volume size.

Expanding a volume provides more space to users:
Disk space must be available.
VxVM assigns disk space, or you can specify disks.

Shrinking a volume enables you to use space elsewhere. VxVM returns the space to the free space pool.

If users require more space on a volume, you can increase the size of the volume. If a volume contains unused space that you need elsewhere, you can shrink the volume. You can resize a volume by using the VEA interface or from the command line. To resize a volume, you can specify either:
The desired new size of the volume, or
The amount of space to add to or subtract from the current volume size

Shrinking a Volume
When the volume size is reduced, the resulting extra space is returned to the free space pool.

Expanding a Volume
When the volume size is increased, sufficient disk space must be available in the disk group. When increasing the size of a volume, VxVM assigns the necessary new space from available disks. By default, VxVM uses space from any disk in the disk group, unless you specify particular disks.


Resizing a Volume with a File System

If a volume is resized, its file system must also be resized.
VxFS can be expanded or reduced while mounted.
UFS can be expanded, but not reduced.
For volumes with other types of data, ensure that the data manager application supports resizing.

Volumes and file systems are separate virtual objects. When a volume is resized, the size of the raw volume is changed. If a file system exists on the volume, the file system must also be resized. If a volume is expanded, its associated file system must also be expanded to be able to use the increased storage space. A VERITAS File System (VxFS) can be enlarged or reduced while mounted. A UNIX File System (UFS) can be expanded, but not reduced. When you resize a volume using VEA or the vxresize command, the file system is also resized.

Resizing Volumes with Other Types of Data
For volumes containing data other than file systems, such as raw database data, you must ensure that the data manager application supports resizing the data device with which it has been configured.



Resizing a Volume: Methods

You can use any of the following methods to resize a volume. These methods are detailed in the sections that follow.

VEA: Select a volume, select Actions>Resize Volume, then complete the Resize Volume dialog box and click OK.
CLI: vxassist with the options growto, growby, shrinkto, or shrinkby; or vxresize.


Resizing a Volume: VEA

(Slide: highlight a volume, and select Actions>Resize Volume. Specify the amount of space to add or subtract, or specify a new volume size. If desired, specify the disks to be used for the additional space.)

When you resize a volume using the VEA interface, if the volume contains a file system, the file system is also resized.
1 In the main window, select the volume to be resized.
2 In the Actions menu, select Resize Volume.
3 Complete the Resize Volume dialog box. To specify a new size, use one of the following:
Add by: To increase the volume size by a specific amount of space, use the Add by field to specify how much space should be added to the volume.
Subtract by: To decrease the volume size by a specific amount of space, use the Subtract by field to specify how much space should be removed from the volume.
New volume size: To specify a new volume size, type the size in the New volume size field.
Max Size: To determine the largest possible size, click Max Size.
To use specific disks for the additional space, select Manually select disks for use by this volume, and move the disks that you want to use into the Included field. You can also specify mirroring and striping options.
4 Click OK to complete the task.

Note: When you resize a volume, if a VERITAS file system (VxFS) is mounted on the volume, the file system is also resized. The file system is not resized if it is unmounted. If a UFS file system is mounted on the volume, the file system can be expanded, but not shrunk.

Resizing a Volume: CLI

Use vxassist or vxresize to expand or reduce a volume:
To a specific size
By a specified amount of space

Significant difference: vxassist resizes the volume, but not the file system; vxresize automatically resizes both the volume and the file system.

To resize a volume from the command line, you can use either the vxassist command or the vxresize command. Both commands can expand or reduce a volume to a specific size or by a specified amount of space, with one significant difference: vxresize automatically resizes a volume's file system; vxassist does not. When using vxassist, you must resize the file system using a separate command, as in the sketch that follows.

When you expand a volume, both commands automatically locate available disk space unless you designate specific disks to use. When you shrink a volume, unused space is returned to the free space pool of the disk group. When you resize a volume, you can specify the length of a new volume in sectors, kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the length (s, k, m, or g). If no unit is specified, the default unit is sectors.

Caution: Do not shrink a volume below the size of the file system. If you have a VxFS file system, you can shrink the file system and then shrink the volume. If you do not shrink the file system first, you risk unrecoverable data loss.
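For example, a hedged sketch of growing a volume with vxassist and then growing its mounted VxFS file system separately with fsadm; this assumes datavol in disk group datadg carries a VxFS file system mounted at /data and is being grown from 1 GB to 2 GB (4194304 sectors), since fsadm -b takes the new file system size, in sectors by default:

# vxassist -g datadg growby datavol 1g
# fsadm -F vxfs -b 4194304 -r /dev/vx/rdsk/datadg/datavol /data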


Resizing a Volume: vxassist

To resize a volume:

vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume size

growto: Increases the volume to a specified length
growby: Increases the volume by a specified amount
shrinkto: Reduces the volume to a specified length
shrinkby: Reduces the volume by a specified amount

Resizing Volumes with vxassist

To resize a volume using the vxassist command, you use the syntax:

vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume_name size

Caution: You cannot grow or shrink any volume associated with an encapsulated boot disk, such as rootvol, usr, var, opt, or swapvol. These volumes map to a physical underlying partition on the disk and must be contiguous. If you attempt to grow these volumes, the system could become unbootable if you need to revert back to slices to boot. Growing these volumes can also prevent a successful Solaris upgrade, and you might have to do a fresh install. Additionally, the upgrade_start script might fail.


Resizing a Volume: vxassist

Original volume size: 20 MB

1 # vxassist -g datadg growto datavol 40m
2 # vxassist -g datadg growby datavol 10m
3 # vxassist -g datadg shrinkto datavol 30m
4 # vxassist -g datadg shrinkby datavol 10m

(Slide diagram: the volume grows from 20 MB to 40 MB, then to 50 MB, then shrinks to 30 MB, and finally to 20 MB.)

Examples: Resizing Volumes with vxassist

The volume datavol is in the disk group datadg. The size of the volume is 20 MB. To extend datavol to 40 MB, you type:
# vxassist -g datadg growto datavol 40m

The size of the volume is now 40 MB. To extend datavol by an additional 10 MB, you type:
# vxassist -g datadg growby datavol 10m

The size of the volume is now 50 MB. To shrink datavol back to a length of 30 MB, you type:
# vxassist -g datadg shrinkto datavol 30m

The size of the volume is now 30 MB. To shrink datavol by an additional 10 MB, you type:
# vxassist -g datadg shrinkby datavol 10m

The size of the volume is returned to 20 MB. Note: Do not shrink a volume below the current size of the file system or database using the volume. Shrinking a volume can always be safely performed on empty volumes.


Resizing a Volume: vxresize

To resize a volume and its file system:

vxresize [-bsx] [-F fstype] -g diskgroup volume [+|-]new_length

new_length: The size to which you want to expand or shrink the volume
+new_length: The new length is added to the current length
-new_length: The new length is subtracted from the current length

Resizing Volumes with vxresize

You can also use the vxresize command to resize a volume. This command has the advantage of automatically resizing the file system as well as the volume. Only VxFS and UFS file systems can be resized with vxresize. The ability to expand or shrink a file system depends on the file system type and whether the file system is mounted or unmounted. The following table summarizes the resize operations that can be performed for VxFS and UFS:

File System Type   Mounted FS          Unmounted FS
VxFS               Expand and shrink   Not allowed
UFS                Expand only         Expand only

The syntax for the vxresize command is:

vxresize [-bsx] -F fstype -g diskgroup volume new_length

In the syntax, you specify the file system type, the disk group name, the volume name, and the new length of the volume. The new_length operand can begin with a plus sign (+) to indicate that the new length is added to the current volume length, or with a minus sign (-) to indicate that the new length is subtracted from the current volume length. Without the plus or minus sign, new_length indicates the size to which you want to expand or shrink the volume.


Other options include:
-b: Performs the resize operation in the background. The command returns quickly, but the resize operation is still in progress. Resizing very large volumes can take some time.
-s: Requires that the operation represent a decrease in the volume length; otherwise, the operation fails.
-x: Requires that the operation represent an increase in the volume length; otherwise, the operation fails.
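For example, a sketch that grows a volume and its VxFS file system by 2 GB in the background, with -x guarding against an accidental shrink (the disk group and volume names are assumed):

# vxresize -b -x -F vxfs -g datadg datavol +2g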


Resizing a Volume: vxresize

Original volume size: 10 MB

1 # vxresize -g mydg myvol 50m
2 # vxresize -g mydg myvol +10m
3 # vxresize -g mydg myvol 40m
4 # vxresize -g mydg myvol -10m

(Slide diagram: the volume grows from 10 MB to 50 MB, then to 60 MB, then shrinks to 40 MB, and finally to 30 MB.)

Examples: Resizing Volumes with vxresize

The volume myvol is in the disk group mydg. The size of the volume is 10 MB. To extend myvol to 50 MB, you type:
# vxresize -g mydg myvol 50m

The size of the volume is now 50 MB. To extend myvol by an additional 10 MB, you type:
# vxresize -g mydg myvol +10m

The size of the volume is now 60 MB. To shrink myvol back to a length of 40 MB, you type:
# vxresize -g mydg myvol 40m

To shrink myvol by an additional 10 MB, you type:


# vxresize -g mydg myvol -10m

The size of the volume is now 30 MB.


Creating a Volume Snapshot

(Slide diagram: in the snapstart phase, a temporary snapshot mirror is attached to datavol alongside its existing plexes. In the snapshot phase, the snapshot mirror is broken off into a new volume, snapvol, which serves as the backup snapshot.)

Creating a Snapshot Copy of a Volume

Creating a volume snapshot provides a method for backing up the data contained in a volume with minimal interruption to users. A volume snapshot is an exact copy, or temporary mirror, of a volume at a specific point in time. When you create a snapshot, you create a temporary mirror of an existing volume. This mirror is then detached from the volume and placed into a new volume, called the snapshot volume, that is used as the backup volume. The snapshot can then be backed up at a convenient time.

After you back up the snapshot, you have three options:
You can remove the snapshot volume.
You can reassociate the snapshot volume with its original volume.
You can permanently break the link between the snapshot and the original volume. This procedure is called dissociating a snapshot volume.


Creating a Volume Snapshot: Phases

Creating a volume snapshot has two phases:

1 The Snapstart Phase: The snapstart phase creates a snapshot mirror of the volume to be backed up. This phase may take a long time, depending on the size of the volume. The copy procedure used by VxVM during the snapstart phase is an atomic copy, which is similar to a full backup of the volume.

2 The Snapshot Phase: The snapshot phase detaches the mirror from the original volume and creates a new volume, called the snapshot volume. The snapshot volume is an exact copy of the original volume at the time the snapshot phase begins.

Note: A volume snapshot is an example of a technique also referred to as a third-mirror breakoff.


Creating a Snapshot: Methods

You can use either of the following methods to create a volume snapshot. These methods are detailed in the sections that follow.

VEA: Select a volume. Select Actions>Snap>Snap Start to create the snapshot mirror. Select Actions>Snap>Snap Shot to break off the mirror into a snapshot volume.

CLI: vxassist snapstart, followed by vxassist snapshot.



Creating a Snapshot: VEA

(Slide: 1. Select Actions>Snap>Snap Start, and select the disk to use in creating the snapshot mirror. 2. Select Actions>Snap>Snap Shot, and select the snapshot mirror to use in creating the snapshot volume.)

To create a snapshot copy of a volume:
1 In the main window, select the volume to be copied to a snapshot.
2 In the Actions menu, select Snap>Snap Start.
3 Complete the Snap Start Volume dialog box:
To place the snapshot mirror on specific disks, select the Manually select disks for use by this volume option, and select the disks that you want to use.
To enable the FastResync (FR) feature for the volume, select Enable FastResync. FR speeds up the resynchronization of mirrors in a volume. Note: FastResync requires a separate license.
4 Click OK to begin the creation of the snapshot mirror. This procedure may take some time. After the snapshot mirror is created, it is displayed in the main window in the Mirrors tab of the original volume. The snapshot mirror has a type of Snapshot and a status of Snap Ready.

Aborting or removing the snapshot mirror: If you decide that you do not want to break off the snapshot mirror into a separate snapshot volume, you can abort the snapshot (during mirror creation) or remove the snapshot mirror (after mirror creation). To abort or remove the snapshot mirror, highlight the original volume, and select Actions>Snap>Snap Abort.
5 After the snapshot mirror is created, you can break off the mirror into a separate snapshot volume. Highlight the original volume, and select Actions>Snap>Snap Shot.
6 In the Snap Shot Volume dialog box, verify the volume name, specify a name for the snapshot volume, and select the mirror from which you want to create the snapshot volume. By default, the snapshot volume name is SNAP-volume_name, where volume_name is the name of the original volume. To create a read-only snapshot volume, mark the Create readonly snapshot check box. Click OK to complete the task.
7 After the snapshot volume has been created, you can use the volume to back up your data at a convenient time.

Note: It is preferable to quiesce the application that is using the volume being snapped (for example, unmount a file system or shut down a database), so that a consistent image of the volume contents is obtained. If the file system is mounted read/write at the time of the snapshot, you need to run fsck on the snapshot volume before the copy of the file system that it contains can be mounted.


After Using a Snapshot: VEA

Remove the snapshot volume: Actions>Delete Volume
Merge the snapshot volume: Actions>Snap>Snap Back
Dissociate the snapshot volume: Actions>Snap>Snap Clear

Removing a Snapshot Volume: VEA

After you have backed up your data and determined that the snapshot volume is no longer needed, you can remove the snapshot volume. To avoid wasting space, remove the snapshot volume when your backup is complete. Removing a snapshot volume is the same as removing a regular volume: highlight the volume in the main window and select Actions>Delete Volume.

Reassociating a Snapshot Volume (Snapback): VEA

You can reassociate a snapshot copy of a volume with the original volume by using the snapback feature. When you reassociate, the snapshot plex is detached from the snapshot volume and attached to the original volume. Data is resynchronized so that the plexes are consistent. To reassociate a snapshot volume with its original volume:
1 Select the snapshot volume in the main window.
2 In the Actions menu, select Snap>Snap Back.
3 Complete the Snap Back Volume dialog box by specifying which data should be used in the resynchronization process:
Resynchronize using the original volume: By default, the data in the original plex is used for resynchronizing the merged volume.
Resynchronize using the snapshot: To replace the data in the original volume with the data from the snapshot volume, select the Resynchronize using the snapshot option.


4 Click OK to complete the task.

Dissociating a Snapshot Volume (Snapclear): VEA

To permanently break the association between a snapshot and its original volume, but maintain the snapshot as an independent volume, you can dissociate the snapshot volume by using the snapclear feature. Dissociating a snapshot is one method that you can use to keep a permanent image of a volume for storage. To dissociate a snapshot volume from its original volume:
1 In the main window, select the snapshot volume to be dissociated from its original volume.
2 In the Actions menu, select Snap>Snap Clear.
3 When prompted, click Yes to confirm the operation.


Creating a Snapshot: CLI

1. Create a snapshot mirror.

vxassist -g diskgroup [-b] snapstart volume

For example:
# vxassist -g datadg snapstart datavol

2. Create a snapshot volume.

vxassist -g diskgroup snapshot orig_volume new_volume

For example:
# vxassist -g datadg snapshot datavol backupvol

3. Use the snapshot volume to back up data.

You can create volume snapshots from the command line by using the vxassist command.

1 First, run vxassist snapstart to create a snapshot mirror on the volume to be backed up:

vxassist -g diskgroup [-b] snapstart volume_name

The vxassist snapstart task creates a write-only backup mirror, which is attached to and synchronized with the volume to be backed up. The mirror is used in the volume in the same way as any other mirror: it becomes part of the volume read policy, and all writes are also sent to it. The process runs until the mirror is created and has been synchronized. When synchronized with the volume, the backup mirror is ready to be used as a snapshot mirror. However, the mirror continues to be updated until it is detached during the actual snapshot phase of the procedure. The -b option runs the snapstart process in the background.

2 Next, run vxassist snapshot to create the snapshot volume. This task detaches the snapshot mirror from the original volume, creates a new volume, and attaches the snapshot mirror to the snapshot volume. The state of the snapshot is set to ACTIVE. To create the snapshot volume, you use the syntax:

vxassist -g diskgroup snapshot orig_volume new_volume


Note: If possible, create the snapshot volume at a time when users are accessing the volume as little as possible. It is preferable to quiesce the application that is using the volume being snapped (for example, unmount a file system or shut down a database), so that a consistent image of the volume contents is obtained. If the file system is mounted read/write at the time of the snapshot, you need to run fsck on the snapshot volume before the copy of the file system that it contains can be mounted. The snapshot volume reflects the original volume at the time you begin the snapshot phase. If the snapshot procedure is interrupted, the snapshot mirror is automatically removed when the volume is started.

3 After the snapshot volume is created, you can use the snapshot volume with backup utilities while the original volume continues to be available for applications and users.

Example: Creating a Snapshot Volume

The volume datavol exists in the disk group datadg. To create a snapshot mirror of datavol and run the process in the background, you type:
# vxassist -g datadg -b snapstart datavol

To create a snapshot volume called backupvol of the original volume datavol, you type:
# vxassist -g datadg snapshot datavol backupvol

You can then use the volume backupvol to copy data to tape or other backup media.
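Before backing up, the snapshot must be made mountable. The following is a minimal sketch of that sequence, assuming the original volume held a VxFS file system and using an illustrative mount point /backup; it follows the note above about running fsck on a snapshot taken while the file system was mounted read/write:
# fsck -F vxfs /dev/vx/rdsk/datadg/backupvol
# mkdir -p /backup
# mount -F vxfs /dev/vx/dsk/datadg/backupvol /backup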


More Snapshot Options: CLI


To ensure mirror synchronization before creating the snapshot volume, use the snapwait option after running snapstart:
# vxassist -g datadg -b snapstart datavol
# vxassist -g datadg snapwait datavol
# vxassist -g datadg snapshot datavol backupvol

To remove the snapshot mirror (before creating the snapshot volume) if you decide a snapshot volume is not needed:
# vxassist -g datadg snapabort datavol

The snapwait Option
To ensure that the snapstart mirror is synchronized before the vxassist snapstart command exits, you can use the vxassist snapwait option. When snapwait is complete, the snapshot option can be used. This command is usually used as part of a shell script and run prior to the snapshot version of the command:
vxassist -g diskgroup snapwait orig_volume_name

For example:
# vxassist -g datadg -b snapstart datavol
# vxassist -g datadg snapwait datavol
# vxassist -g datadg snapshot datavol backupvol

The end of the snapstart procedure is indicated by the new snapshot mirror changing its state to SNAPDONE. This change is tracked by the vxassist snapwait task, which waits until at least one of the mirrors changes its state to SNAPDONE. If the attach process fails, the snapshot mirror is removed, and its space is released.
The snapabort Option
To remove a snapshot mirror that has not been detached and moved to a snapshot volume, you use the vxassist snapabort option. This option is used when the administrator decides that a snapshot volume is not needed:
vxassist -g diskgroup snapabort orig_volume_name


After Using a Snapshot: CLI


To remove a snapshot volume:


vxassist -g diskgroup remove volume snap_volume

To reassociate a snapshot volume:


vxassist -g diskgroup snapback snap_volume

To reassociate using data from the replica:


vxassist -g diskgroup -o resyncfromreplica snapback snap_volume

To disassociate a snapshot volume:


vxassist -g diskgroup snapclear snap_volume

Removing a Snapshot Volume: CLI
When the snapshot volume backupvol is no longer needed, you can remove the volume using the vxassist remove volume command:
# vxassist -g datadg remove volume backupvol

Reassociating a Snapshot Volume (Snapback): CLI
A snapshot copy of a volume can be reassociated with the original volume; that is, the snapshot plex can be detached from the snapshot volume and reattached to the original volume. The data in the volume is resynchronized so that the plexes are consistent. To reassociate a snapshot volume:
# vxassist -g diskgroup snapback snapshot_volume
snapshot_volume is the name of the snapshot copy of the volume.

By default, the data in the original plex is used for the merged volume. To use the data copy from the snapshot volume instead, use the following syntax:
# vxassist -g diskgroup -o resyncfromreplica snapback snapshot_volume

Note: You must unmount the file system on the original volume prior to overwriting its data with that of the replica.
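For example, continuing the earlier snapshot scenario, to reattach the snapshot plex of backupvol to its original volume in the disk group datadg, you type:
# vxassist -g datadg snapback backupvol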


Dissociating a Snapshot Volume (Snapclear): CLI
The link between a snapshot and its original volume can be permanently broken so that the snapshot volume becomes an independent volume. To dissociate a snapshot from its original volume, use the syntax:
# vxassist -g diskgroup snapclear snapshot_volume
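For example, continuing the earlier scenario, to make backupvol an independent volume in the disk group datadg, you type:
# vxassist -g datadg snapclear backupvol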


Changing the Volume Layout


Online relayout: Change the volume layout or layout characteristics while the volume is online.

Examples:
Convert concatenated to stripe-mirror to achieve redundancy.
Convert RAID-5 to mirrored for better write performance.
Convert mirrored to RAID-5 to save space.
Change stripe unit size or add columns to achieve desired performance.

Changing the Volume Layout


What Is Online Relayout?
You may need to change the volume layout in order to change the redundancy or performance characteristics of an existing volume. The online relayout feature of VxVM enables you to change from one volume layout to another by invoking a single command. You can also modify the performance characteristics of a particular layout to reflect changes in your requirements. While relayout is in progress, data on the volume can be accessed without interruption.
Online Relayout Examples
Online relayout eliminates the need for creating a new volume in order to obtain a different volume layout. For example, by using a single command:
You can convert a simple concatenated volume to a stripe-mirror volume to achieve redundancy.
You can convert a RAID-5 volume to a mirrored volume for better write performance.
You can convert a mirrored volume to a RAID-5 volume to save space.
You can change stripe unit sizes or add columns to RAID-5 or striped volumes to achieve desired performance.
You can convert a mirrored concatenated plex to a striped mirrored plex.
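As a preview of the command syntax covered later in this lesson, each of these conversions is a single vxassist relayout invocation. For example, assuming a concatenated volume datavol in the disk group datadg, the following sketch converts it to RAID-5:
# vxassist -g datadg relayout datavol layout=raid5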


Supported Transformations
Use online relayout to change the volume or plex layout to or from:
Concatenated
Striped
RAID-5
Stripe-mirror
Concat-mirror
Also use online relayout to change the number of columns or stripe unit size for a RAID-5 or striped plex.

Supported Transformations
By using online relayout, you can change the layout of an entire volume or a specific plex. VxVM currently supports transformations to or from the following volume layouts:
Concatenated
Striped
RAID-5
Mirrored
Stripe-mirror (Striped Pro)
Concat-mirror (Concatenated Pro)
In addition, you can use the online relayout feature to:
Change the number of columns on a RAID-5 or striped plex.
Change the stripe unit size for a RAID-5 or striped plex.
Note: Online relayout can be used only with volumes created with the vxassist command or through the VEA interface.


How Does Relayout Work?


1 Data is copied a chunk at a time from the source subvolume to a temporary subvolume (scratch pad).
2 Data is returned from the temporary area to the new layout area.
By default:
If volume size is less than 50 MB, the temp area = volume size.
If volume size is 50 MB to 1 GB, the temp area = 50 MB.
If volume size is 1 GB or greater, the temp area = 1 GB.
The larger the temporary space, the faster the relayout, because larger pieces can be copied at one time.


How Does Online Relayout Work?
The transformation of data from one layout to another involves rearranging the data in the existing layout into the new layout. Data is removed from the source subvolume in portions and copied into a temporary subvolume, or scratch pad. The temporary storage space is taken from the free space in the disk group. Data redundancy is maintained by mirroring any temporary space used. The area in the source subvolume is then transformed to the new layout, and data saved in the temporary subvolume is written back to the new layout. This operation is repeated until all the storage and data in the source subvolume have been transformed to the new layout.
Read/write access to data is not interrupted during the transformation. If all of the plexes in the volume have identical layouts, VxVM changes all plexes to the new layout. If the volume contains plexes with different layouts, you must specify a target plex. VxVM changes the layout of the target plex and does not change the other plexes in the volume.
File systems mounted on the volumes do not need to be unmounted to perform online relayout, as long as online resizing operations can be performed on the file system. If the system fails during a transformation, data is not corrupted. The transformation continues after the system is restored, and read/write access is maintained.


Temporary Storage Space
VxVM determines the size of the temporary storage area, or you can specify a size through VEA or vxassist. Default sizes are as follows:
If the original volume size is less than 50 MB, the temporary storage area is the same size as the volume.
If the original volume is larger than 50 MB, but smaller than 1 GB, the temporary storage area is 50 MB.
If the original volume is larger than 1 GB, the temporary storage area is 1 GB.
Specifying a larger temporary space size speeds up the layout change process, because larger pieces of data are copied at a time. If the specified temporary space size is too small, VxVM uses a larger size.


Online Relayout Notes


You can reverse online relayout at any time.
Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies.
If volume length increases during relayout, VxVM resizes the file system using vxresize.
Relayout does not change log plexes.
You cannot:


Create a snapshot during relayout.
Change the number of mirrors during relayout.
Perform multiple relayouts at the same time.
Perform relayout on a volume with a sparse plex.

Notes on Online Relayout
Reversing online relayout: You can reverse the online relayout process at any time, but the data may not be returned to the exact previous storage location. Any existing transformation in the volume should be stopped before performing a reversal.
Volume length: Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If the volume length changes during online relayout, VxVM uses vxresize to shrink or grow a file system mounted on the volume.
Log plexes: When you change the layout of a volume, the log plexes are not changed. To change the layout of a log plex, you should remove and then re-create the log plex.
Volume snapshots: You cannot create a snapshot of a volume when there is an online relayout operation running on the volume.
Number of mirrors: During a transformation, you cannot change the number of mirrors in a volume.
Multiple relayouts: A volume cannot undergo multiple relayouts at the same time.
Sparse plexes: Online relayout cannot be used to change the layout of a volume with a sparse plex.


Changing the Layout: Methods


VEA
Select a volume.
Select Actions>Change Layout.
Complete the Change Volume Layout dialog box.


CLI
vxassist relayout
vxassist convert


Changing the Volume Layout: Methods
You can use any of the following methods to change the layout of a volume. These methods are detailed in the sections that follow.
VEA: Select a volume, select Actions>Change Layout, complete the Change Volume Layout dialog box, and click OK.
CLI: vxassist relayout or vxassist convert


Changing the Layout: VEA


Highlight a volume, and select Actions>Change Layout.

Select a new volume layout.

Set relayout options.


Changing the Volume Layout: VEA
To change the volume layout:
1 In the main window, select the volume to be changed to a different layout.
2 In the Actions menu, select Change Layout.
3 Complete the Change Volume Layout dialog box:
Volume Name: Specify the volume that you want to change.
Layout: Select the new volume layout and specify layout details as necessary.
Options: To retain the original volume size when the volume layout changes, mark the Retain volume size at completion check box. To specify the size of the pieces of data that are copied to temporary space during the volume relayout, type a size in the Temp space size field. To specify additional disk space to be used for the new volume layout (if needed), specify a disk in the Disk(s) field or browse to select a disk. To specify the temporary disk space to be used during the volume layout change, specify a disk in the Temp disk(s) field or browse to select a disk. If the volume contains plexes with different layouts, specify the plex to be changed to the new layout in the Target plex field.
4 Click OK to begin the relayout task. When prompted, confirm that you want to change the layout.


Changing the Layout: VEA


Relayout Status Monitor Window (showing status information and relayout controls)


5 The Relayout Status Monitor window is displayed. This window provides information and options regarding the progress of the relayout operation:
Volume Name: The name of the volume that is undergoing relayout
Initial Layout: The original layout of the volume
Desired Layout: The new layout for the volume
Status: The status of the relayout task
% Complete: The progress of the relayout task
The Relayout Status Monitor window also contains options that enable you to control the relayout process:
Pause: To temporarily stop the relayout operation, click Pause.
Abort: To cancel the relayout operation, click Abort.
Continue: To resume a paused or aborted operation, click Continue.
Reverse: To undo the layout changes and return the volume to its original layout, click Reverse.


Changing the Layout: CLI


vxassist relayout
Used for nonlayered relayout operations
Used for changing layout characteristics, such as stripe width and number of columns
vxassist convert
Used to change the resilience level of a volume
Changes nonlayered volumes to layered volumes, and vice versa

Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a non-layered mirrored layout. Use vxassist convert to convert the resulting layered volume into a nonlayered volume.

Changing the Volume Layout: CLI
From the command line, online relayout is initiated using the vxassist command.
The vxassist relayout command creates the necessary infrastructure and storage needed to perform the layout transformation. Use this option for all nonlayered transformations, including changing layout characteristics.
The vxassist convert command is used to change the resilience level of a volume; that is, to convert a volume from nonlayered to layered, or vice versa. Use this option only when layered volumes are involved in the transformation.
The vxassist relayout operation involves the copying of data at the disk level in order to change the structure of the volume. The vxassist convert operation does not copy data; it only changes the way the data is referenced. This operation specifically switches between mirror-concat and concat-mirror layouts or between mirror-stripe and stripe-mirror layouts. You cannot use this command to change the number of stripes or stripe unit width, or to change to other layout types.
Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a nonlayered mirrored layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to convert the resulting layered mirrored volume into a nonlayered mirrored volume.


vxassist relayout


To perform most online relayout operations:


vxassist -g diskgroup relayout volume|plex layout=layout ncol=[+|-]ncol stripeunit=size [tmpsize=tmpsize]
volume|plex: Name of object to be converted
layout: Desired new layout
ncol: Number of columns in new layout
+ncol: Adds a number of columns to original layout
-ncol: Subtracts a number of columns from original layout
stripeunit=size: Stripe width of new layout
tmpsize: Size of scratch pad used in relayout


Default settings exist in /etc/default/vxassist.


The vxassist relayout Command
You use the vxassist relayout command to perform most online relayout operations:
vxassist -g diskgroup relayout volume_name|plex_name layout=layout ncol=[+|-]ncol stripeunit=size [tmpsize=tmpsize]

In the syntax:
volume_name|plex_name specifies the volume or plex to be converted.
layout specifies the new layout desired.
ncol specifies the number of columns in the new layout. By adding the plus sign (+) or minus sign (-) to the number of columns, you specify a number of columns to be added to or subtracted from the original volume or plex.
stripeunit=size specifies the stripe width of the new layout. The default unit is sectors. You can specify a different unit by appending a k, m, or g to represent kilobytes, megabytes, or gigabytes, respectively.
tmpsize specifies the size of the scratch space used in the relayout. You can override the default values by specifying a value for this parameter.
Note: When changing to a striped layout, you should always specify the number of columns, or the operation may fail with the following error:
vxvm:vxassist: ERROR: Cannot allocate space for 51200 block volume
vxvm:vxassist: ERROR: Relayout operation aborted.


vxassist relayout


To change to a striped layout:


# vxassist -g datadg relayout datavol layout=stripe ncol=2
The default number of columns depends on free disks. The default stripe unit size is 128 sectors (64K).

To add a column to striped volume datavol:


# vxassist -g datadg relayout datavol ncol=+1

To remove a column from datavol:


# vxassist -g datadg relayout datavol ncol=-1

To change stripe unit size and number of columns:


# vxassist -g datadg relayout datavol stripeunit=128k ncol=5

Changing to a Striped Layout: CLI
The concatenated volume datavol exists in the disk group datadg. To change datavol to a striped volume with two columns and a stripe unit size of 128 sectors:
# vxassist -g datadg relayout datavol layout=stripe ncol=2

Changing Column and Stripe Characteristics: CLI
To add a column to the striped volume datavol:
# vxassist -g datadg relayout datavol ncol=+1

To remove a column from the volume datavol:


# vxassist -g datadg relayout datavol ncol=-1

To change the number of columns in the striped volume datavol to be four columns:
# vxassist -g datadg relayout datavol ncol=4

To change the stripe width of the volume datavol from the default 128 sectors (64K) to 128K, and change the number of columns to five:
# vxassist -g datadg relayout datavol stripeunit=128k ncol=5


vxassist relayout


To change mirrored layouts to RAID-5:
You must specify which plex to change. All other plexes are removed.
If a mirrored layout is changed to a layout other than RAID-5, unchanged plexes are not removed.
Specify the plex to be converted (instead of the volume):
# vxassist -g diskgroup relayout plex layout=raid5 [options]


Changing to a RAID-5 Layout: CLI
To change the concatenated volume payvol to a RAID-5 layout with four columns:
# vxassist -g hrdg relayout payvol layout=raid5 ncol=4

Any layout can be changed to RAID-5 if sufficient disk space and disks exist in the disk group. If the ncol and stripeunit options are not specified, the default characteristics are used. Default values for RAID-5 layouts are three columns and a stripe unit size of 32 sectors (16K).
Note: When using vxassist to change the layout of a volume to RAID-5, VxVM may place the RAID-5 log on the same disk as a column, for example, when there is no other free space available. To place the log on a different disk, you can remove the log and then add the log to the location of your choice.
Changing Mirrored Layouts to RAID-5
If you convert a mirrored volume to RAID-5, you must specify which plex is to be converted. All other plexes are removed when the conversion has finished, releasing their space for other purposes. If you convert a mirrored volume to a layout other than RAID-5, the unconverted plexes are not removed. Specify the plex to be converted by naming it in place of a volume:
# vxassist relayout plex layout=raid5 [options]


vxassist convert


Use vxassist convert to convert:
mirror-stripe to stripe-mirror
stripe-mirror to mirror-stripe
mirror-concat to concat-mirror
concat-mirror to mirror-concat
To convert the striped volume datavol to a layered stripe-mirror layout:
# vxassist -g datadg convert datavol layout=stripe-mirror


Converting to a Layered Volume: CLI
To change the resilience level of a volume; that is, to convert a nonlayered volume to a layered volume, or vice versa, you use the vxassist convert option. Available conversion operations include:
mirror-stripe to stripe-mirror
stripe-mirror to mirror-stripe
mirror-concat to concat-mirror
concat-mirror to mirror-concat
The syntax for vxassist convert is:
vxassist -g diskgroup convert volume_name|plex_name layout=layout

Example: Converting to a Layered Volume
To convert the striped volume datavol in the datadg disk group to a layered stripe-mirror layout:
# vxassist -g datadg convert datavol layout=stripe-mirror


Managing Volume Tasks


You can monitor, pause, continue, abort, or reverse the online relayout process.
VEA: Relayout Status Monitor window, Task History window, Command log file
CLI: vxtask, vxrelayout


Managing Volume Tasks


Monitoring and Controlling Online Relayout
Online relayout may take a long time, depending on the volume size and other factors. After you start a relayout operation, you can monitor its progress and pause, abort, continue, or reverse the process. Monitoring and controlling online relayout operations can be performed through the VEA interface or from the command line.
Managing Volume Tasks: Methods
You can use any of the following methods to monitor and control volume maintenance operations. These methods are detailed in the sections that follow.
VEA: Relayout Status Monitor window, Task History window, Command log file
CLI: vxtask, vxrelayout


Managing Volume Tasks: VEA


Relayout Status Monitor Window
Displays automatically when you start relayout
Enables you to view progress, pause, abort, continue, or reverse the relayout task
Is also accessible from the Volume Properties window
Task History Window
Displays information about the current-session tasks
Can be accessed by clicking the Tasks tab at the bottom of the main window
Enables you to right-click a task to abort, pause, resume, or throttle a task in progress
Command Log File
Contains history of current- and previous-session tasks
Is located in /var/vx/isis/command.log

Managing Volume Tasks: VEA
Relayout Status Monitor Window
When you start a relayout operation, the Relayout Status Monitor is displayed automatically. Through this window, you can view the progress of the relayout task and also pause, abort, continue, or reverse the relayout task. You can also access the Relayout Status Monitor through the Volume Properties window.
Task History Window
The Task History window displays a list of tasks performed in the current session and includes the name of the operation performed, target object, host machine, start time, status, and progress. To display the Task History window, click the Tasks tab at the bottom of the main window. When you right-click a task in the list and select Properties, the Task Properties window is displayed. In this window, you can view the underlying commands executed to perform the task.
Command Log File
The command log file, located in /var/vx/isis/command.log, contains a history of VEA tasks performed in the current session and in previous sessions. The file contains task descriptions and properties, such as date, command, output, and exit code. All sessions since the initial VEA session are recorded. The log file is not self-limiting and should therefore be initialized periodically to prevent excessive use of disk space.


Managing Volume Tasks: CLI


What is a task?
A task is a long-term operation, such as online relayout, that is in progress on the system.
Task ID is a unique number assigned to a single task.
Task tag is a string assigned to a task or tasks by the administrator to simplify task management.
For most utilities, you specify a task tag using: -t tag

Use the vxtask command to:


Display task information.
Pause, continue, and abort tasks.
Modify the progress rate of a task.

Managing Volume Tasks: CLI
To monitor and control volume maintenance operations from the command line, you use the vxtask and vxrelayout commands.
The vxtask Command
The vxtask command enables you to monitor VxVM tasks and modify task states. Using vxtask, you can:
Display task information.
Pause, continue, and abort tasks.
Modify the rate of progress for a task.
What is a task? A VxVM task is a long-term operation, such as an online relayout operation, that is in progress on the system.
What is a task identifier? When you start a task, VxVM assigns a unique number, called a task identifier, that is used to specifically identify the task.
What is a task tag? The administrator can also assign a task tag to a task to simplify administration. A task tag is a string specified by the administrator in the command that initiates the task. For most utilities, the task tag is specified using the -t tag option. A task tag can be associated with multiple tasks and is inherited by any child tasks. For example, when initiating a relayout with the vxassist command, the administrator can assign the task tag convertop1 using the -t tag option:
# vxassist -t convertop1 relayout datavol layout=raid5


vxtask list


To display information about tasks:


vxtask [-ahlpr] list [task_id|task_tag]
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT myvol
(The slide annotates the output: the VxVM-assigned task ID, the parent ID, a description of the task, its state of Running (R), Paused (P), or Aborting (A), the percentage of the task complete, the starting, ending, and current offset, and the affected VxVM object.)

Displaying Task Information with vxtask
To display information about tasks, such as relayout or resynchronization processes, you use the vxtask list command:
vxtask [-ahlpr] list [task_id|task_tag]

Without any options, vxtask list prints a one-line summary for each task running on the system.
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT datavol

Information in the output includes:
TASKID: The task identifier assigned to the task by VxVM
PTID: The ID of the parent task, if any. If the task must be completed before a higher-level task is completed, the higher-level task is called the parent task.
TYPE/STATE: The task type and state. The type is a description of the work being performed, such as RELAYOUT. The state is a single letter representing one of three states:
R: Running
P: Paused
A: Aborting


PCT: The percentage of the operation that has been completed to this point
PROGRESS: The starting, ending, and current offset for the operation, separated by slashes, a description of the task, and names of objects that are affected


vxtask list Options


To display task information in long format: # vxtask -l list
To display a hierarchical listing of parent/child tasks: # vxtask -h list
To limit output to paused tasks: # vxtask -p list
To limit output to running tasks: # vxtask -r list
To limit output to aborted tasks: # vxtask -a list
To limit output to tasks with a specific task ID or task tag: # vxtask list convertop1


Options for vxtask list
The -l option displays all available information for a task in long format and spans multiple lines. If more than one task is printed, the output for different tasks is separated by a single blank line.
# vxtask -l list

The -h option prints tasks hierarchically with child tasks following the parent task:
# vxtask -h list

The -p option restricts the output to tasks in the paused state:


# vxtask -p list

The -r option restricts the output to tasks in the running state:


# vxtask -r list

The -a option restricts the output to tasks in the aborted state:


# vxtask -a list

To display only the tasks with a specific task tag or task identifier, you add the task tag or identifier at the end of the command. For example, to display all tasks with a tag of convertop1:
# vxtask list convertop1


vxtask monitor


To provide a continuously updated list of tasks running on the system, use vxtask monitor:
vxtask [-c count] [-ln] [-t time] [-w interval] monitor [task_id|task_tag]
-l: Displays task information in long format
-n: Displays information for tasks that are newly registered while the program is running
-c count: Prints count sets of task information and then exits
-t time: Exits program after time seconds
-w interval: Prints waiting... after interval seconds with no activity

When a task is completed, the STATE is displayed as EXITED.



Monitoring a Task with vxtask
To provide a continuously updated listing of tasks running on the system, you use the vxtask monitor command. (The vxtask list output represents a point in time and is not continuously updated.) With vxtask monitor, you can track the progress of a task on an ongoing basis:
# vxtask [-c count] [-ln] [-t time] [-w interval] monitor [task_id|task_tag]

By default, vxtask monitor prints a one-line summary for each task running on the system.
# vxtask monitor
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT datavol

The output is the same as for vxtask list, but changes as information about the task changes. When a task is completed, the STATE is displayed as EXITED.
Options for vxtask monitor
The -l option displays the list in long format.
The -n option causes the program to also monitor newly registered tasks while the program is running.
The -c count option causes the program to print count sets of task information and then exit.


The -t time option causes the program to exit after time seconds.
The -w interval option causes the string waiting... to be printed when interval seconds have passed with no output activity.
You can restrict results by specifying a task tag or task identifier at the end of the command.
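For example, the following illustrative command, built from the documented options, monitors tasks carrying the tag convertop1 and exits after 60 seconds:
# vxtask -t 60 monitor convertop1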


vxtask abort|pause|resume


To abort, pause, or resume a task:


vxtask abort|pause|resume task_id|task_tag

To pause the task with the task ID 198:


# vxtask pause 198

To resume the task with the task ID 198:


# vxtask resume 198

To abort the task with the task tag convertop1:


# vxtask abort convertop1

Controlling Tasks with vxtask
You can abort, pause, or resume a task by using the vxtask command:
vxtask abort|pause|resume task_id|task_tag

In the syntax:
abort stops a task.
pause suspends a running task.
resume restarts a paused task.
You specify the task ID or task tag to identify the task.
Using pause, abort, and resume
For example, you can pause a task when the system is under heavy contention between the sequential I/O of the synchronization process and the applications trying to access the volume. pause allows for an indefinite amount of time for an application to complete before using resume to continue the process.
abort is often used when reversing a process. For example, if you start a process and then decide that you do not want to continue, you reverse the process. When the process returns to 0 percent, you use abort to stop the task.

Note: Once you abort or pause a relayout task, you must at some point either resume or reverse it.


Examples: Controlling Tasks with vxtask
To pause the task with the task ID 198:
# vxtask pause 198

To resume the task with the task ID 198:


# vxtask resume 198

To abort the task with the task tag convertop1:


# vxtask abort convertop1


vxrelayout


The vxrelayout command can also be used to display the status of, reverse, or start a relayout operation:
vxrelayout -g diskgroup status|reverse|start volume
Note: You cannot stop a relayout with vxrelayout. Only the vxtask command can stop a relayout operation.

# vxrelayout -g datadg status datavol


STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.


The vxrelayout Command
The vxrelayout command can also be used to display the status of relayout operations and to control relayout tasks.
vxrelayout -g diskgroup status|reverse|start volume_name

In the syntax:
The status option displays the status of an ongoing or discontinued layout conversion.
The reverse option reverses a discontinued layout conversion. Before using this option, the relayout operation must be stopped using vxtask abort.
The start option continues a discontinued layout conversion. Before using this option, the relayout operation must have been stopped using vxtask abort.
Example: Displaying Task Status with vxrelayout
To display information about the relayout operation being performed on the volume datavol, which exists in the datadg disk group:
# vxrelayout -g datadg status datavol
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.


The output displays the characteristics of both the source and destination layouts (including the layout type, number of columns, and stripe width), the status of the operation, and the percentage completed. In the example, the output indicates that an increase from five to six columns for a striped volume is more than halfway completed.
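For example, to back out of the column addition shown above, you would first stop the relayout task and then reverse it. The following is a sketch that assumes the relayout was assigned task ID 198:
# vxtask abort 198
# vxrelayout -g datadg reverse datavol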


Controlling Task Progress


To control the I/O rate for mirror copy operations from the command line, use vxrelayout options:
-o slow=iodelay
Use this option to reduce the system performance impact of copy operations by setting a number of milliseconds to delay copy operations. The process runs faster without this option.

-o iosize=size
Use this option to perform copy operations in regions with the length specified by size. Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume.

Controlling the Task Progress Rate
vxrelayout Options
VxVM provides additional options that you can use with the vxrelayout command to pass usage-type-specific options to an operation. These options can be used to control the I/O rate for mirror copy operations by speeding up or slowing down resynchronization times.
-o slow=iodelay

This option reduces the system performance impact of copy operations. Copy operations are usually a set of short copy operations on small regions of the volume (normally from 16K to 128K). This option inserts a delay between the recovery of each such region. A specific delay can be specified with iodelay as a number of milliseconds. The process runs faster when you do not set this option.
-o iosize=size

This option performs copy operations in regions with the length specified by size, which is a standard VxVM length number. Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume. The default I/O size is typically 32K.
Caution: Be careful when using these options to speed up operations, because other system processes may slow down. It is always acceptable to increase the slow option to enable more system resources to be used for other operations.
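As a sketch of how these options might be combined with the start operation described earlier, the following hypothetical command continues a discontinued relayout on datavol while inserting a 100-millisecond delay between copy operations (the placement of -o before the operation keyword is an assumption based on the syntax shown above):
# vxrelayout -g datadg -o slow=100 start datavol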


Controlling Task Progress: VEA


Right-click a task in the Task History window, and select Throttle Task.

Set the throttling value in the Throttle Task dialog box.


Slowing a Task with vxtask
You can also set the slow attribute in the vxtask command by using the syntax:
vxtask [-i task_id] set slow=value
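For example, following the syntax above, the following illustrative command slows the task with ID 198 by setting a 100-millisecond delay:
# vxtask -i 198 set slow=100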

Throttling a Task with VEA
You can reduce the priority of any task that is time-consuming. Right-click the task in the Task History window, and select Throttle Task. In the Throttle Task dialog box, use the slider to set a throttling value. The larger the throttling value, the slower the task is performed.


Summary
You should now be able to:
Resize a volume while the volume remains online.
Duplicate the contents of volumes by creating volume snapshots.
Change the volume layout while the volume remains online.
Manage volume maintenance tasks with VEA and from the command line.


Summary
This lesson described how to perform and monitor volume maintenance tasks using VERITAS Volume Manager (VxVM), including online administration tasks such as resizing a volume, creating volume snapshots, and changing the layout of a volume.
Next Steps
The next lesson covers basic administrative commands used in setting up a VERITAS file system.
Additional Resources
VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
VERITAS Volume Manager User's Guide - VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.


Lab 8
Lab 8: Volume Maintenance
In this lab, you resize volumes, create and manage volume snapshots, and change volume layouts. Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 8: Volume Maintenance


Goal
In this lab, you resize volumes, create and manage volume snapshots, and change volume layouts.
To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Setting Up a File System

Overview
(Course roadmap: Introduction - Virtual Objects, Installation, Interfaces; Disk and Volume Administration - Managing Disks, Managing Disk Groups, Creating Volumes, Configuring Volumes, Volume Maintenance; File System Administration - File System Setup, VxFS Administration, Defragmentation, Intent Logging; Recovery and Troubleshooting - Recovery Architecture, Disk Problems, Plex Problems, Boot Disk Mirroring, Boot Disk Recovery)

Introduction
Overview
This lesson describes the different types of file systems and provides general guidelines for using file system commands. This lesson describes how to use common file system commands to create, set up, and monitor a file system.
Importance
Before you can take advantage of the online administration features of VERITAS File System, you need to know how to set up a file system in a way that meets the needs of your environment. This lesson describes how to create a VERITAS file system and identifies some of the options that you can set at the time of creation.
Outline of Topics
File System Types
Using VERITAS File System Commands
Creating a New File System
Setting File System Properties
Mounting a File System
Mounting a File System Automatically
Unmounting a File System
Identifying File System Type
Identifying Free Space
Maintaining File System Consistency

Objectives
After completing this lesson, you will be able to:
Describe file system types.
List guidelines for issuing file system commands.
Create a file system by using mkfs.
Set file system properties by using mkfs options.
Mount a file system by using mount.
Mount a file system at boot time.
Unmount a file system by using umount.
Identify file system type by using fstyp.
Identify free disk space by using df.
Maintain file system consistency by using fsck.

Objectives
After completing this lesson, you will be able to:
Describe file system types.
List guidelines for issuing file system commands.
Create a file system using the mkfs command.
Set file system properties using mkfs command options.
Mount a file system using the mount command.
Mount a file system at boot time by editing the vfstab file.
Unmount a file system by using the umount command.
Identify file system type by using the fstyp command.
Identify free disk space by using the df command.
Maintain file system consistency by using the fsck command.


File System Types


(Diagram: File system interfaces (API/CLI) sit above the type-independent Virtual File System layer, which dispatches to type-dependent file systems such as UFS, VxFS, HSFS, NFS, and PROCFS, which in turn access the underlying storage devices.)

File System Types


Types of File Systems
VERITAS File System is one of many different types of file systems that are available for providing file system services. When you use file system administrative commands, you specify the type of file system in the command. This enables you to access VERITAS File System-specific versions of standard file system commands. Different types of file systems coexist in a layered structure within a computer system.
Type-Independent File Systems
A type-independent file system is a file system that provides a common interface for interacting with different types of file systems. In Solaris, the type-independent file system is called Virtual File System (VFS).
Type-Dependent File Systems
A type-dependent file system is a file system that has a specific association with a particular storage media device, network, or memory space. Examples of type-dependent file systems include:
UNIX File System (UFS): UFS is the default disk-based file system for Solaris.
VERITAS File System (VxFS): VxFS is a disk-based file system designed to provide high performance, availability, data integrity, and integrated online administration.


High Sierra File System (HSFS): HSFS was designed as the first CD-ROM file system and is an example of a read-only file system.
Network File System (NFS): NFS is the default distributed file system for Solaris, which means that it resides on one system and can be shared and accessed by other systems across a network.
Process File System (PROCFS): PROCFS provides an access point or simple reference to processes and resides in system memory.

Data Flow Through File Systems


When an application makes a call to the Solaris operating system, the call first passes through the type-independent file system, VFS, where it is converted into a vnode that identifies which file block needs to be retrieved. The vnode is then passed to the appropriate type-dependent file system. The type-dependent file system handles the I/O request by locating and retrieving the requested file from its own memory space or from the underlying storage system. The file is then sent back through VFS to be returned to the calling application.


Using VxFS Commands


VxFS can be used as the basis for any file system except for /, /usr, /var, and /opt, which must have UFS-based file systems. VxFS-specific commands are stored in:
/opt/VRTSvxfs/sbin
/usr/lib/fs/vxfs
/etc/fs/vxfs

Specify all three directories in the PATH environment variable to access all of the commands.

Using VERITAS File System Commands


Using VxFS As an Alternate to UFS
You can generally use a VERITAS file system as an alternative to UFS, except for the root and /usr, /var, and /opt directories. Root and /usr are mounted read-only in the boot process, before the VxFS driver is loaded, and must be UFS-based file systems. Also, the VxFS driver requires dynamic libraries available from /usr.

Location of VxFS Commands


You can administer a VERITAS file system from the command line by using VxFS-specific commands. VxFS-specific commands are stored in:
/opt/VRTSvxfs/sbin: Contains VERITAS-specific commands
/usr/lib/fs/vxfs: Contains VxFS-type specific switchout commands
/etc/fs/vxfs: Contains the VERITAS mount command and some QuickLog commands required to mount file systems
Specify these directories in the PATH environment variable in order to access the commands, as shown in the sketch below.
Note: The directory /opt/VRTS/bin contains symbolic links to the VERITAS-specific commands installed in the directories listed above.
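A minimal Bourne shell sketch of the PATH setting (typically placed in root's profile; the ordering of the appended directories is illustrative):
# PATH=$PATH:/opt/VRTSvxfs/sbin:/usr/lib/fs/vxfs:/etc/fs/vxfs
# export PATH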


VxFS Command Syntax


VxFS uses standard file system management command syntax:
command [-F type] [generic_options] [-o specific_options] [special|mount_point]

Use the switchout -F vxfs to access VxFS-specific versions of standard commands.
Use -o to add VxFS-specific options.
Without -F vxfs, the file system type is taken from the default specified in /etc/default/fs.
To use VxFS as your default, change /etc/default/fs to contain vxfs.

General File System Command Syntax


VERITAS File System uses standard file system management command syntax:
command [-F type] [generic_options] [-o specific_options] [special | mount_point]
In the syntax, you first specify the standard command, such as mkfs, mount, umount, or fstyp. To access VxFS-specific versions, or wrappers, of standard commands, you use the VFS switchout mechanism -F followed by the file system type, vxfs. The -F vxfs option directs the system to search /opt/VRTSvxfs/sbin, /etc/fs/vxfs, and then /usr/lib/fs/vxfs for VxFS-specific versions of commands. Generic options are options that are common to most file system types. You access VxFS-specific command options using -o followed by specific options. To complete the command, you specify the mount point or special device file to identify the file system.
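For example, the following command follows this pattern, combining the -F vxfs switchout with a VxFS-specific -o option (delaylog) on the mount command; the mount point /data is illustrative, and the volume is the one used in this course's examples:
# mount -F vxfs -o delaylog /dev/vx/dsk/datadg/datavol /data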

Using VxFS Commands by Default


If you do not use the switchout mechanism -F vxfs, then the file system type is taken from the default specified in the /etc/default/fs file. If you want VERITAS File System to be your default file system type, then you change the /etc/default/fs file to contain vxfs.
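On Solaris, /etc/default/fs typically contains a single entry of the form LOCAL=ufs; a sketch of the change, assuming that stock file format, is to edit the line to read:
LOCAL=vxfs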


VxFS Commands
The following table lists all of the VxFS command line interface commands. Online manual pages for these commands are installed in /opt/VRTS/man in the appropriate directories. Many of these commands are covered in detail throughout this training.
/opt/VRTS/man/man1:
cp_vxfs.1, cpio_vxfs.1, getext.1, ls_vxfs.1, mv_vxfs.1, qioadmin.1, qiomkfile.1, qiostat.1, setext.1, vxlicinst.1, vxlicrep.1, vxlictest.1
/opt/VRTS/man/man1m:
cfscluster.1m, cfsdgadm.1m, cfsmntadm.1m, cfsmount.1m, cfsumount.1m, df_vxfs.1m, ff_vxfs.1m, fsadm_vxfs.1m, fscat_vxfs.1m, fsck_vxfs.1m, fsckptadm.1m, fsclustadm.1m, fsdb_vxfs.1m, fstyp_vxfs.1m, glmconfig.1m, mkfs_vxfs.1m, mount_vxfs.1m, ncheck_vxfs.1m, qlogadm.1m, qlogattach.1m, qlogck.1m, qlogclustadm.1m, qlogdb.1m, qlogdetach.1m, qlogdisable.1m, qlogenable.1m, qlogmk.1m, qlogprint.1m, qlogrec.1m, qlogrm.1m, qlogstat.1m, qlogtrace.1m, umount_vxfs.1m, vxdump.1m, vxedquota.1m, vxfsconvert.1m, vxfsstat.1m, vxlicense.1m, vxquot.1m, vxquota.1m, vxquotaoff.1m, vxquotaon.1m, vxrepquota.1m, vxrestore.1m, vxtunefs.1m, vxupgrade.1m
/opt/VRTS/man/man4:
fs_vxfs.4, inode_vxfs.4, qlog_config.4, tunefstab.4
/opt/VRTS/man/man7:
qlog.7, vxfsio.7

Notes:
The qio- commands have functionality that is only available with the VERITAS Quick I/O for Databases feature.
The qlog- commands have functionality that is only available with the VERITAS QuickLog feature.
The cfs-, fsclustadm, glmconfig, and qlogclustadm commands have functionality that is only available with the VERITAS Cluster File System feature.


Administering VxFS Using VEA


You can use the VEA GUI to perform standard file system commands.

(VEA screen shot: the file system view lists, for each file system, its Mount Point, Device, File System Type, whether it is Mounted, its Size, its Free Space, and whether it is a Cluster Mount.)

Administering a File System Using VEA


You can also administer VxFS file systems through the VERITAS Enterprise Administrator (VEA) graphical user interface. Options in the VEA interface enable you to perform many file system administration tasks, including creating, mounting, and setting properties for a file system.


Creating a File System


To create a file system:
mkfs [-F vxfs] [generic_options] [-o specific_options] special [size]

Examples:
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mkfs -F vxfs /dev/rdsk/c1t0d0s0

In VEA, select Actions>File System>New File System. In the New File System dialog box, specify the file system type and other options.

Creating a New File System


The mkfs Command
To create a VERITAS file system, you use the standard file system command mkfs. The mkfs command creates a file system by writing to a special character device file. The special character device can be a raw disk device or a VxVM volume. The mkfs command builds a file system with a root directory and a lost+found directory. The syntax for using the mkfs command is:
# mkfs [-F vxfs] [generic_options] [-o specific_options] special [size]

In the syntax, you specify the command, followed by the file system type, and any generic options common to most other file systems. Using the -o option, you can add VxFS-specific options. The special argument specifies the character (or raw) device or the VxVM volume character device node. The size argument specifies the number of 512-byte sectors in the file system. If size is not specified, the mkfs command determines the size of the special device and constructs a file system equal in size to the volume within which it is created. If you prefer to specify a size in a unit other than sectors, you can append the number by adding a k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. If you separate the appended letter by a space, you must enclose the number and letter in quotation marks.
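For example, following these rules, the illustrative command below creates a 2-gigabyte file system on datavol; equivalently, the size could be given as 4194304 (sectors), or as "2 g" with the quotation marks required by the embedded space:
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol 2g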


Example: Creating a File System


1. Initialize the target device:
# vxassist -g datadg make datavol 1g

2. Create the file system using mkfs:


# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
version 5 layout
2097152 sectors, 1048576 blocks of size 1024, log size 16384 blocks
unlimited inodes, largefiles not supported
1048576 data blocks, 1031864 free data blocks
32 allocation units of 32768 blocks, 32768 data blocks


Steps to Create a New File System
1 Initialize the target device using an appropriate method. If you have added a new kind of disk controller, which requires a new driver, you must run drvconfig. If you have added a new disk, you must run disks, then run format to create a disk slice. If you are using a logical device such as a VxVM volume, then you can use vxassist or VEA to initialize the volume. The VxVM disks must be initialized, and the disk group must exist before you use vxassist.
2 Create the VERITAS file system using the mkfs command.
Example: Creating a File System on a VxVM Volume
1 Initialize the target device. This example initializes the VxVM volume datavol in the disk group datadg. The size is specified as 1 gigabyte.
# vxassist -g datadg make datavol 1g
2 Create the VERITAS file system. The following mkfs command creates a VERITAS file system on the VxVM volume datavol.
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
version 5 layout
2097152 sectors, 1048576 blocks of size 1024, log size 16384 blocks
unlimited inodes, largefiles not supported
1048576 data blocks, 1031864 free data blocks
32 allocation units of 32768 blocks, 32768 data blocks

Setting mkfs Options


mkfs [-F vxfs] [generic_options] [-o specific_options] special [size]

N: Provides information only
largefiles: Supports files > 2 gigabytes (or > 8 million files)
version: Specifies layout version
bsize: Sets logical block size
logsize: Sets size of logging area

Setting File System Properties


Using mkfs Command Options
You can set a variety of file system properties when you create a VERITAS file system by adding VxFS-specific options to the mkfs command. To add these options, you type -o followed by the specific options:
mkfs [-F vxfs] [generic_options] [-o specific_options] special [size]

Some of the specific options supported by the VxFS-specific mkfs command include:
Option       Description
N            Reports the same structural information about the file system as if it had actually been created, without actually creating the file system
largefiles   Enables the creation of files 2 gigabytes or larger and the use of more than 8 million inodes in a file system
version      Specifies the VxFS file system layout version number
bsize        Sets the logical block size in bytes for files on the file system
logsize      Specifies the number of blocks to allocate for the logging area

For a complete list of options, see the mkfs_vxfs manual page.


Checking VxFS Structure



To check the structure of a file system:


# mkfs -F vxfs -o N /dev/vx/rdsk/...

Does not actually create the file system
Option not available in VEA


Checking VxFS Structure


To check the structure of a VERITAS file system without writing to the device, you use the -o N option. This option reports the same structural information about the file system as if the file system had actually been created, but does not create it.
Example: Checking VxFS Structure
To display the information needed to create a file system on the volume datavol, without actually creating the file system, you type:
# mkfs -F vxfs -o N /dev/vx/rdsk/datadg/datavol
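Because -o N reports the same structural information that mkfs would produce, the output for the 1-GB datavol volume from the earlier example matches what mkfs printed when the file system was actually created:
version 5 layout
2097152 sectors, 1048576 blocks of size 1024, log size 16384 blocks
unlimited inodes, largefiles not supported
1048576 data blocks, 1031864 free data blocks
32 allocation units of 32768 blocks, 32768 data blocks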


Supporting Large Files


To enable support for files larger than 2 GB:

# mkfs -F vxfs -o largefiles /dev/vx/rdsk/...


Supported on Solaris 2.6 and above

Valid only for version 4 or 5 file system layouts
Enables use of more than 8 million inodes (each file is associated with an inode)
Default: nolargefiles

In VEA, mark the Support large file size check box in the New File System Details dialog box.


Enabling Large File Support


To enable support for files larger than two gigabytes, or for more than eight million inodes in the file system, you add the -o largefiles option when you create the file system. This option controls the largefiles flag for the file system and is valid only for the Version 4 or 5 file system layouts. If largefiles is specified, the bit is set, and files two gigabytes or larger can be created. If nolargefiles is specified, the bit is cleared, and files created on the file system are limited to less than two gigabytes. The default is nolargefiles.
If you do not set the largefiles flag at file system creation and decide later that you need large files, you can use the -o largefiles option of the fsadm command to enable large file support. See the fsadm_vxfs(1m) manual page for more information.
Note: Large files are supported on Solaris versions 2.6 and later. When implementing large file capability, system administration utilities such as backup may not operate correctly if they are not large-file aware.
Example: Enabling Large File Support
To create a file system on the volume datavol that allows files larger than two gigabytes, you type:
# mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/datavol
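As noted above, if you later need large files on a file system created without the flag, fsadm can set the flag on the mounted file system. A minimal sketch, assuming the file system is mounted at /mydata:
# fsadm -F vxfs -o largefiles /mydata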


Note: If you specify the largefiles option with the mount command, this option does not turn largefiles capability on and off, but can be used to verify whether a file system is largefiles-capable. If nolargefiles is specified and the mount succeeds, the file system does not contain files two gigabytes or larger, and such files cannot be created. If largefiles is specified and the mount succeeds, the file system can contain files two gigabytes or larger.
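For example, to verify that no large files exist on the file system (a sketch using the same datavol volume), you could attempt:
# mount -F vxfs -o nolargefiles /dev/vx/dsk/datadg/datavol /mydata
If this mount succeeds, the file system contains no files two gigabytes or larger, and such files cannot be created while it is mounted this way.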


Specifying Layout Version



To specify a particular layout version when creating a file system:


# mkfs -F vxfs -o version=4 /dev/vx/rdsk/...
version refers to the VxFS file system layout.

Valid values are 4 and 5. By default, a version 5 layout is created.


Specifying a File System Layout Version


To specify a particular file system layout version to be used when making the file system, you use the -o version=n option, where n is the VxFS file system layout version number. Valid values are 4 and 5. When no option is specified, the default is file system layout Version 5.
The Version 4 layout enables extents to be variable in size, enables support for large files, and adds typed extents to the VxFS architecture. Version 4 supports files and file systems up to one terabyte in size.
The Version 5 layout enables the creation of file systems up to 32 terabytes in size. Files can be a maximum of two terabytes. File systems larger than 1 terabyte must be created on a VxVM volume and require an 8K block size.
Note: With VxFS 3.5 and later, VxFS file systems with earlier layout versions (1, 2, and 3) can no longer be created.
Example: Specifying Layout Version
Suppose that you have a legacy system that requires the use of a VERITAS file system with a Version 4 file system layout. To create a file system with a Version 4 layout, you type:
# mkfs -F vxfs -o version=4 /dev/vx/rdsk/...


Setting Block Size


To specify a block size for files on the file system:

# mkfs -F vxfs -o bsize=2048 /dev/vx/rdsk/...


Default block size is 1024 bytes (1K).
Default block size is larger for large file systems (> 4 TB).
Block size cannot be changed after creation.
In most cases, the default block size is best.
Resizing the file system does not change the block size.

In VEA, you can select a block size in the New File System dialog box.


Setting Block Size


To set the block size for files on the file system, you use the -o bsize=n option, where n is the block size in bytes for files on the file system. Block size represents the smallest amount of disk space allocated to a file and must be a power of two selected from the range 1024 to 8192. To create a file system with a block size of 2048 bytes, you type:
# mkfs -F vxfs -o bsize=2048 /dev/vx/rdsk/...

Default Block Size


If you do not specify a block size when you create a file system, the default block size is 1024 bytes (1K). The default block size is larger for file systems greater than 4 TB.

Considerations for Setting Block Size


Overall file system performance can be improved or degraded by changing the block size. In most cases, you do not need to specify a block size when creating a file system. However, for large file systems with relatively few files, you may want to experiment with larger block sizes. Resizing the file system does not change the block size. Therefore, you typically set a larger than usual block size if you expect to extend the file system in the near future. Determining an appropriate block size involves a trade-off between memory consumption and wasted disk space.
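For example, for a very large file system that you expect to hold mostly large files, you might select the maximum block size (the volume name here is illustrative):
# mkfs -F vxfs -o bsize=8192 /dev/vx/rdsk/datadg/bigvol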


Setting Log Size



To specify the number of file system blocks used for a logging area:
# mkfs -F vxfs -o logsize=2048 /dev/vx/rdsk/...
Default log size is 16384 file system blocks.
Default is sufficient for most workloads.
Log size cannot be changed after creation.


In VEA, you can specify a log size in the New File System Details dialog box.


Setting Log Size


To allocate a number of file system blocks for the activity logging area, you use the -o logsize=n option, where n is the number of file system blocks. The activity logging area, called the intent log, contains a record of changes to be made to the structure of the file system.
Default Log Size
When you create a file system with mkfs, VxFS uses a default log size of 16384 file system blocks, which is sufficient for most workloads. The log size cannot be changed after the file system is created. To avoid wasting space, the default log size is smaller for small file systems, as shown in the following table:
File System Size           Default Intent Log Size
512 MB and greater         16384 file system blocks
Between 8 MB and 512 MB    1024 file system blocks
Less than 8 MB             256 file system blocks

Minimum and Maximum Log Sizes
The minimum log size is the number of file system blocks that make the log no less than 256K. The maximum log size is the number of file system blocks that make the log no greater than 16384K.
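For example, with the default 1K block size, the minimum log size works out to 256 blocks (256K) and the maximum to 16384 blocks (16384K); with an 8K block size, the same limits are 32 blocks and 2048 blocks.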


Example: Setting Log Size
To create a file system with a log size of 2048 file system blocks, you type:
# mkfs -F vxfs -o logsize=2048 /dev/vx/rdsk/...

Selecting an Appropriate Log Size


The best way to select an appropriate log size is to test representative system loads against various sizes and select the size that results in the fastest performance.
When to Use a Large Log Size
A large log provides better performance on metadata-intensive workloads. For example, larger log sizes can be beneficial for NFS-intensive workloads or for applications that perform intensive writes and require more space to hold transactions.
Note: The larger the log size, the longer the file system recovery time.
When to Use a Small Log Size
A small log uses less space on the disk and leaves more room for file data. For example, a log size smaller than the default may be appropriate for a small floppy device. On small systems, ensure that the log size is not greater than half the available swap space.
Log Size and VERITAS QuickLog
When you use VERITAS QuickLog, the QuickLog device size can easily be changed at any time during the use of a file system.


Mounting a File System


To mount a file system:
mount [-F vxfs] [generic_options] [-r] [-o specific_options] special mount_point

Examples:
# mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata
# mount -F vxfs /dev/dsk/c1t0d0s0 /mydata

In VEA, highlight an unmounted file system and select Actions>Mount File System. Complete the Mount File System dialog box.

Mounting a File System


The mount Command
After creating a VERITAS file system on a raw device, you use the block device to mount the file system. To mount a VERITAS file system, you use the standard file system mount command:
mount [-F vxfs] [generic_options] [-r] [-o specific_options] special mount_point

In the syntax, you specify the mount command followed by the file system type. You can add generic mount options as well as VxFS-specific mount options. To mount the file system as read-only, you use the -r option. The special argument identifies the file system by its block device or VxVM volume block device node. The mount_point is the directory on which to mount the file system. The mount point becomes the name of the root of the newly mounted file system. If the directory used as the mount point contains files or subdirectories, they are inaccessible until the file system is unmounted.
Example: Mounting a File System
To mount a VERITAS file system on the volume datavol at the mount point /mydata, you type:
# mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata
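To mount the same file system read-only, you add the -r option:
# mount -F vxfs -r /dev/vx/dsk/datadg/datavol /mydata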


Basic Mount Options


To display the file system type and mount options for all mounted file systems:
# mount -v

To display a list of mounted file systems in the vfstab format:


# mount -p

To mount all file systems in the vfstab file:


# mount -a

Displaying Mounted File Systems


You can use the mount command to display a list of currently mounted file systems. By keeping track of which file systems are mounted and which are not, you can avoid trying to access unmounted file systems. To see the status of mounted file systems, type:
# mount -v
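A line of output might resemble the following (the exact format and options vary by system; this line is illustrative only):
/mydata on /dev/vx/dsk/datadg/datavol read/write/setuid/delaylog/largefiles on Thu May 23 10:09:58 2002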

This shows the file system type and mount options for all mounted file systems. The -v option specifies verbose mode. To display a list of mounted file systems in the /etc/vfstab format, you use the command:
# mount -p

Mounting All File Systems


To mount all file systems listed in the /etc/vfstab file, you use the -a option:
# mount -a


The vfstab File


Add an entry to the /etc/vfstab file to automatically mount a file system at boot time. In the vfstab file, you specify:
Device to mount:    /dev/dsk/c0t6d0s0
Device to fsck:     /dev/rdsk/c0t6d0s0
Mount point:        /ext
File system type:   vxfs
fsck pass:          1
Mount at boot:      yes
Mount options:      -

In VEA, select Add to file system table and Mount at boot in the New File System dialog box.

Mounting a File System Automatically


The vfstab File
The mount command checks the /etc/vfstab file for parameters to use. For example, if you enter mount /ext, the rest of the information needed for the mount command is read from the entry in the /etc/vfstab file. If you do not supply the file system type using the -F vxfs option, the mount command searches the file /etc/vfstab for a file system and file system type that match the special file or mount point provided. If no matching file system type is found, the mount command uses the default file system type. The following is an example of a typical vfstab file. A VERITAS file system is displayed in the last line of the file.
#device             device              mount     FS     fsck   mount     mount
#to mount           to fsck             point     type   pass   at boot   options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2    /usr      ufs    1      yes       -
/proc               -                   /proc     proc   -      no        -
fd                  -                   /dev/fd   fd     -      no        -
swap                -                   /tmp      tmpfs  -      yes       -
/dev/dsk/c0t3d0s0   /dev/rdsk/c0t3d0s0  /         ufs    1      no        -
/dev/dsk/c0t3d0s1   -                   -         swap   -      no        -
/dev/dsk/c0t6d0s2   /dev/rdsk/c0t6d0s2  /ext      vxfs   1      yes       -


Adding an Entry to the vfstab File


You can automatically mount a VERITAS file system at boot time by adding an entry for the file system in the /etc/vfstab file. To add an entry to the /etc/vfstab file, you specify the following:
Name of the special block device to mount
Name of the special character device used by fsck
Mount point
File system type (vxfs)
Number of the fsck pass (The fsck pass number determines the level of file system checking that occurs at boot time or whether the file system is checked at all. A hyphen indicates that the file system is not checked.)
Whether to mount the file system at boot time (yes or no)
Mount options
If you use VEA to create the file system, you are asked whether the file system is required to be mounted at boot time.
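For example, an entry for the VERITAS file system on the VxVM volume datavol used in earlier examples might look like the following (the fsck pass of 1 follows the earlier slide example; a hyphen would skip checking):
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /mydata vxfs 1 yes -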


Unmounting a File System


To unmount a file system:
umount special|mount_point
Examples:
# umount /mydata
# umount /dev/dsk/c1t0d0s0
To unmount all file systems except those required by the operating system:
# umount -a
To force an unmount (Caution):
# umount -o force /mydata



In VEA, highlight a mounted file system and select Actions>Unmount File System.


Unmounting a File System


The umount Command
To unmount a currently mounted file system, you use the standard file system command umount:
umount special|mount_point

In the syntax, you specify the umount command followed by the special device or mount point on which the file system resides. You do not need to specify the file system type. The type of a mounted file system can be determined automatically. For example, to unmount the VERITAS file system located at /mydata, you type:
# umount /mydata

Unmounting All File Systems


To unmount all file systems, except the ones required by the operating system, you use the -a option with the umount command. With this option, the umount command attempts to unmount all file systems except /, /usr, /usr/kvm, /var, /proc, /dev/fd, and /tmp. For example, to unmount all mounted file systems, you type:
# umount -a


Forcing an Unmount
Beginning with VxFS 3.4, you can perform forced unmounts of VERITAS file systems by using the option -o force with the umount command. A forced unmount can be useful in situations such as high availability environments, where a mounted file system could prevent timely failover. Any active process with I/O operations pending on an unmounted file system receives an I/O error. To perform a forced unmount of the VERITAS file system mounted at /mydata, you type:
# umount -o force /mydata

Caution: This command can cause data loss.


Identifying File System Type


To identify the file system type:

fstyp [-v] special


Example:

# fstyp -v /dev/vx/dsk/datadg/datavol
vxfs
magic a501fcf5 version 5 ctime Thu May 23 10:09:58 2002
logstart 0 logend 0
bsize 1024 size 512000 dsize 512000 ninode 0 nau 0
defiextsize 0 ilbsize 0 immedlen 96 ndaddr 10
aufirst 0 emap 0 imap 0 iextop 0 istart 0
bstart 0 femap 0 fimap 0 fiextop 0 fistart 0 fbstart 0
nindir 2048 aulen 32768 auimlen 0 auemlen 8
...

In VEA, right-click a file system in the object tree, and select Properties. The file system type is displayed in the File System Properties window.


Identifying File System Type


The fstyp Command
If you do not know the file system type of a particular file system, you can determine the file system type by using the fstyp command. You can use the fstyp command to describe either a mounted or unmounted file system. To determine the type of file system on a disk partition, you use the following syntax:
fstyp [-v] special

In the syntax, you specify the command followed by the name of the device. You can use the -v option to specify verbose mode. Verbose mode displays the super-block fields, the number of free blocks and inodes, and the number of free extents by size. The output displayed when using the -v option varies slightly for each disk layout.

Example: Displaying File System Type


To find out what kind of file system is on the device /dev/dsk/c0t6d0s0, you type:
# fstyp /dev/dsk/c0t6d0s0
vxfs

The output indicates that the file system type is vxfs.


Example: Verbose Mode


To find out what kind of file system is on the volume datavol and to display additional information about the file system, you type:
# fstyp -v /dev/vx/dsk/datadg/datavol
vxfs
magic a501fcf5 version 5 ctime Thu May 23 10:09:58 2002
logstart 0 logend 0
bsize 1024 size 512000 dsize 512000 ninode 0 nau 0
defiextsize 0 ilbsize 0 immedlen 96 ndaddr 10
aufirst 0 emap 0 imap 0 iextop 0 istart 0
bstart 0 femap 0 fimap 0 fiextop 0 fistart 0 fbstart 0
nindir 2048 aulen 32768 auimlen 0 auemlen 8
auilen 0 aupad 0 aublocks 32768 maxtier 15
inopb 4 inopau 0 ndiripau 0 iaddrlen 8 bshift 10
inoshift 2 bmask fffffc00 boffmask 3ff checksum e20ebb28
oltext1 33 oltext2 1282 oltsize 1 checksum2 62b
free 510763 ifree 0
efree 1 1 0 1 0 1 2 1 0 1 0 1 2 1 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0

The output displays:
Superblock fields
Number of free blocks and inodes
Number of free extents by size
The output for -v differs slightly depending on the disk layout. For a Version 2 disk layout, the number of free inodes is always zero. For a Version 4 disk layout, fields such as logstart, logend, and nau are also zero. The number of allocation units can be determined from the file system size field and the aulen field. For Version 4 disk layouts, all allocation units are the same size (as shown by aulen) except for the last allocation unit, which can be smaller.


Identifying Free Space


To identify free space:
df [-F vxfs] [generic_options] [-o s] [directory|special]
Example:
# df -F vxfs /mydata
/mydata (/dev/dsk/c1t11d0s0): 2094438 blocks 261804 files
The output shows the number of free disk blocks and the number of free inodes (files).
In VEA, right-click a file system, and select Properties to display free space and usage information.

Identifying Free Space


The df Command
To report the number of free disk blocks and inodes for a VxFS file system, you use the df command. The df command displays the number of free blocks and free inodes in a file system or directory by examining the counts kept in the superblocks. Extents smaller than 8K may not be usable for all types of allocation, so the df command does not count free blocks in extents below 8K when reporting the total number of free blocks. In VERITAS File System versions 2.0 and above, inodes are dynamically allocated from a pool of free blocks. In this case, the number of free inodes and blocks reported by df is an estimate based on the number of free 8K (or larger) extents and the current ratio of allocated inodes to allocated blocks. Allocating additional blocks can therefore decrease the count of free inodes, and vice versa.

Syntax for the df Command


The syntax for the df command is as follows:
df [-F vxfs] [generic_options] [-o s] [directory|special]

In the syntax, you specify the command followed by the file system type. Generic options are those options supported by the generic UNIX df command. The -o s option is specific to VxFS. You can use this option to print the number of free extents of each size.
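For example, to print the number of free extents of each size for the file system mounted at /mydata, you could type:
# df -F vxfs -o s /mydata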


To complete the command, you specify the special device name (for example, /dev/dsk/c0t1d0s5) or mount point directory name (for example, /mydata). If you specify a directory name, the report presents information for the device that contains the directory.

Generic Options
The following table describes some of the generic options that you can use with the df command. For complete descriptions of the options, see the df_vxfs(1m) and df(1m) manual pages.
Option    Description
-a        Reports on all file systems
-b        Prints the total number of kilobytes free
-e        Prints only the number of files free
-g        Prints detailed information about a file system
-k        Prints one line of information for each specified file system
-l        Reports only on local file systems
-n        Prints a list of mounted file system types
-o        Specifies file system type-specific options
-t        Prints full listings with totals
-V        Echoes the complete set of file system-specific command lines, but does not execute them (Used to verify and validate the command line)

Example: Displaying Free Space


Display the number of free disk blocks and inodes for the VERITAS file system mounted at /mydata.
# df -F vxfs /mydata
/mydata (/dev/dsk/c1t11d0s0): 2094438 blocks 261804 files

The number of free disk blocks and inodes (files) is displayed. The command displays information for the device /dev/dsk/c1t11d0s0 that contains a VERITAS file system mounted at /mydata.


Maintaining Consistency
To check file system consistency:
fsck [-F vxfs] [generic_options] [-y|-Y] [-n|-N] [-o specific_options] special

Example:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol

By default, VxFS fsck replays the intent log, rather than doing a full structural recovery.
In VEA, highlight a file system and select Actions>Check File System.

Maintaining File System Consistency


The fsck Command
You use the VxFS-specific version of the fsck command to check the consistency of, and repair, a VERITAS file system. VxFS uses a feature called intent logging to record pending file system updates in an intent log, and by default the fsck utility replays the intent log instead of performing a full structural file system check. Replaying the intent log is usually sufficient to set the file system state to CLEAN. You can also use the fsck utility to perform a full structural recovery in the unlikely event that the log is unusable. The syntax for the fsck command is:
fsck [-F vxfs] [generic_options] [-y|-Y] [-n|-N] [-o full,nolog] special

Example: Checking VxFS Consistency


To check file system consistency by using the intent log for the VERITAS file system on the volume datavol, you type:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
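In the unlikely event that the intent log is unusable, the syntax above shows the -o full,nolog options for a full structural check. As a sketch on the same volume, answering yes to all prompts:
# fsck -F vxfs -o full,nolog -y /dev/vx/rdsk/datadg/datavol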


Summary
You should now be able to:
Describe file system types.
List guidelines for issuing file system commands.
Create a file system by using mkfs.
Set file system properties by using mkfs options.
Mount a file system by using mount.
Mount a file system at boot time.
Unmount a file system by using umount.
Identify file system type by using fstyp.
Identify free disk space by using df.
Maintain file system consistency by using fsck.

Summary
This lesson described the different types of file systems and provided general guidelines for using file system administrative commands. This lesson described how to use common file system commands to perform administrative tasks such as creating and mounting a file system, identifying the file system type, and identifying free space.

Next Steps
After learning how to use basic file system administration commands, you are ready to learn how to perform additional administrative duties such as resizing a file system, backing up and restoring a file system, and creating a file system snapshot.

Additional Resource
VERITAS File System System Administrator's Guide
This guide describes VERITAS File System concepts, how to use various utilities, and how to perform backup procedures.


Lab 9: Setting Up a File System
This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line. Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 9: Setting Up a File System


Goal
This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line.

To Begin This Lab


To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Lesson 10
Online File System Administration

Overview
(Course map graphic: This lesson is part of the File System Administration unit, following File System Setup; the course also covers Introduction, Disk and Volume Administration, and Recovery and Troubleshooting units.)

Introduction
Overview
This lesson describes online administration features of VERITAS File System. Methods for resizing, backing up, and restoring a file system are covered, as well as procedures for creating a snapshot file system.
Importance
The online administration features of VERITAS File System enable you to perform a variety of administrative tasks while minimizing user downtime. By learning how to resize a VERITAS file system, you can ensure that a file system can handle changes in workload over time. The backup and restore utilities and file system snapshots help you to prevent data loss in the case of system failure.
Outline of Topics
Resizing a File System
Backing Up a File System
Restoring a File System
Creating a Snapshot File System
Managing Snapshot File Systems



Objectives
After completing this lesson, you will be able to:
Resize a file system.
Back up a file system by using vxdump.
Restore a file system by using vxrestore.
Create a snapshot file system.
Manage snapshot file systems.


Objectives
After completing this lesson, you will be able to:
Resize a file system.
Back up a file system by using the vxdump command.
Restore a file system by using the vxrestore command.
Create a snapshot file system.
Manage snapshot file systems.


Resizing a File System


Why expand a file system?
To accommodate an increase in workload
To use the full space available on a volume
Why shrink a file system?
To adjust to a decrease in workload
To use the space elsewhere on the server


Resizing a File System


File System Size
When you create a VERITAS file system using the mkfs command, you can specify a particular size for the file system or use the default size. The default size is the size of the special raw device on which the file system is created. Over time, as the use of the file system changes, the file system may become too small or too large.
You can resize a VERITAS file system while the file system remains mounted. You may need to resize a file system to accommodate a change in use, for example, when there is an increased need for space in the file system. You may also need to resize a file system as part of a general reorganization of disk usage, for example, when a large file system is subdivided into several smaller file systems.
Traditional File System Resizing
Traditionally, if a file system became too small, with no more data space or inodes remaining, the system administrator addressed the problem by moving some or all of its contents to another file system. Alternatively, the administrator backed up, repartitioned, and re-created the file system, and then restored the data. If a file system became too large, the system administrator would try to reclaim the unused space by offloading the contents of the file system and rebuilding it to a preferable size.
Both of these cases resulted in downtime for users, because the solutions required unmounting the file system and blocking user access during modification.

Resizing VxFS
You can resize a VERITAS file system using fsadm or vxresize.
Resizing can be done online without interrupting user access.
With vxresize, you can resize the volume at the same time as the file system.
You can expand a file system if the underlying device is expandable.
(Slide graphic: a file system expanding from blocks a through l to a through p, and shrinking to a through h.)

Resizing a VERITAS File System
You can expand or shrink a VERITAS file system without unmounting the file system or interrupting user productivity. However, if you want to expand a file system, the underlying device on which it is mounted must be expandable.
1 Before you resize a VERITAS file system, verify the available free space on the underlying device.
For file systems mounted on partitions, use prtvtoc or format to check the size of the disk partitions. You can expand a disk partition if there is free space on the disk immediately after the disk partition. A disk partition may always be reduced in size.
For file systems mounted on volumes, use vxprint to check the size of VxVM volumes or vxdg to check available free space in a disk group. You can expand a VxVM volume if there is free space available on any disk within the disk group. A VxVM volume may always be reduced in size.
2 Resize the file system by using the fsadm command or the vxresize command. If you use the vxresize command, the volume and file system are resized at the same time. If you use the fsadm command, you must resize the underlying volume in a separate step.
3 Verify that the file system was resized by using the df command.


Expanding with fsadm


fsadm -F vxfs [-b newsize] [-r rawdev] mount_point
Example: Expand the file system /datavol from 512,000 sectors to 1,024,000 sectors.
1. Verify the free space on the underlying device:
# vxdg -g datadg free
2. Expand the volume using vxassist:
# vxassist -g datadg growto datavol 1024000
3. Expand the file system using fsadm:
# fsadm -F vxfs -b 1024000 -r /dev/vx/rdsk/datadg/datavol /datavol
4. Verify that the file system was resized by using df:
# df -k /datavol

The fsadm Command
You resize a VERITAS file system by using the fsadm command. The fsadm command performs a variety of online administration functions on VERITAS file systems, including resizing, extent reorganization, directory reorganization, and querying or changing the largefiles flag. The fsadm command operates on file systems mounted for read/write access. The syntax for the fsadm command is as follows:
fsadm [-F vxfs] [-b newsize] [-r rawdev] mount_point
In the syntax, the -F vxfs option specifies the file system type. The newsize argument is the size to which the file system increases or decreases, specified in units of 512-byte blocks, called sectors. The -r rawdev option specifies the path of the raw device if there is no entry in the /etc/vfstab file and fsadm cannot determine the raw device. The mount_point argument specifies the mount point of the file system.

Using fsadm to resize a file system does not automatically resize the underlying volume. When you expand a file system, the underlying device must be large enough to contain the new larger file system. When you shrink a file system, unused space is released at the end of the underlying device, which can be a VxVM volume or disk partition. You can then resize the device, but be careful not to make the device smaller than the size of the file system.


Shrinking with fsadm


fsadm -F vxfs [-b newsize] [-r rawdev] mount_point
Example: Shrink the file system /datavol from 1,024,000 sectors back down to 512,000 sectors.
1. Shrink the file system:
# fsadm -F vxfs -b 512000 -r /dev/vx/rdsk/datadg/datavol /datavol
2. Shrink the underlying volume using vxassist:
# vxassist -g datadg shrinkto datavol 512000


Example: Expanding a VERITAS File System Using fsadm
Expand the size of the file system mounted at /datavol from 512,000 sectors to 1,024,000 sectors. The volume datavol exists in the disk group datadg.
1 Verify the available free space on the underlying device by using vxdg:
# vxdg -g datadg free
2 Expand the volume using the vxassist command:
# vxassist -g datadg growto datavol 1024000
3 Expand the file system using the fsadm command:
# fsadm -F vxfs -b 1024000 -r /dev/vx/rdsk/datadg/datavol /datavol
4 Verify that the file system was resized by using the df command:
# df -k /datavol
Example: Shrinking a VERITAS File System Using fsadm
Shrink the size of the file system mounted at /datavol back down to 512,000 sectors.
1 Shrink the file system by using the fsadm command:
# fsadm -F vxfs -b 512000 -r /dev/vx/rdsk/datadg/datavol /datavol

2 After you shrink the file system, you can shrink the underlying volume using the vxassist command:
# vxassist -g datadg shrinkto datavol 512000


Resizing with vxresize


/usr/lib/vxvm/bin/vxresize [-bsx] [-F vxfs] [-g diskgroup] volume new_length
vxresize is a VERITAS Volume Manager (VxVM) command. When you resize a volume, the file system is automatically resized.
To expand the file system on the volume datavol from 2 GB to 5 GB:
# vxresize -F vxfs -g datadg datavol 5g
To shrink the file system from 5 GB to 4 GB:
# vxresize -F vxfs -g datadg datavol 4g
In VEA, to resize a file system, highlight a file system and select Actions>Resize File System. The underlying command is vxresize.

The vxresize Command If you are running VERITAS Volume Manager (VxVM), you can use the vxresize command to expand or shrink a volume containing a file system. When you resize a VxVM volume using the vxresize command, the file system is automatically resized at the same time. Only VxFS and UFS file systems can be resized using the vxresize command. The syntax for the vxresize command is as follows:
/usr/lib/vxvm/bin/vxresize [-bsx] [-F vxfs] [-g diskgroup] [-t tasktag] volume new_length [media_name]

In the syntax, you use the -b option to perform the resize operation in the background. The command returns without waiting for the completion of the resize, but the resize is in progress. You use the vxprint command to determine when the operation completes. You specify the file system type as VxFS by using the standard -F vxfs option. The -g diskgroup option limits the operation of the command to the given disk group, as specified by a disk group ID or disk group name. You use the -s option to require that the operation represents a decrease in the volume length, or you can use the -x option to require that the operation represents an increase in the volume length. If the operation specified does not match the decrease or increase flag, then the operation fails. If you want to track the progress of the operation, you can mark the operation with a task tag using the -t tasktag option of the vxresize command.


You specify the volume that you want to resize, followed by the new length of the volume. The default unit of the new length is sectors, unless specified otherwise. The new length can begin with a plus sign (+) or minus sign (-) to indicate that the new length is added to or subtracted from the current volume length.
The media_name operand names disks to use for allocating new space for a volume. These arguments can be a simple name for a disk media record, or they can be of the form media_name,offset to specify an offset within the named disk. If an offset is specified, regions from that offset to the end of the disk are considered candidates for allocation.
Example: Expanding a Volume and File System Using vxresize
Suppose that you want to expand the volume and file system mounted on the device /dev/vx/dsk/datadg/datavol from 2 GB to 5 GB.
1 Verify the free space available in the VxVM disk group:
# vxdg -g datadg free
To verify free space available on a disk, use format or prtvtoc to check whether the partition can be extended.
2 Expand the volume and file system by using the vxresize command:
# vxresize -F vxfs -g datadg datavol 5g
3 Verify that the file system was resized by using the df command:
# df -k /datavol
Example: Shrinking a Volume and File System Using vxresize
Using the same example as above, suppose that you decide to shrink the 5 GB volume datavol back down to 4 GB. The syntax of the vxresize command is the same:
# vxresize -F vxfs -g datadg datavol 4g
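The relative form and the media_name operand described above can be combined. As an illustrative sketch (disk01 is a hypothetical disk media name in the datadg disk group), the following would grow the volume and file system by 1 GB, allocating the new space from disk01:
# vxresize -F vxfs -g datadg datavol +1g disk01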


Troubleshooting Tips
Avoid trying to resize a file system that:
Is experiencing a time of high activity
Has a mounted snapshot file system
Needs a full file system check (fsck)
Is nearly 100% full, fragmented, or both


Troubleshooting Tips: Resizing a File System
When resizing a file system, avoid the following common errors:
Resizing a file system that is very busy: Although resizing a file system requires that the file system be mounted, the file system freezes when the actual resizing occurs. Freezing temporarily prevents new access to the file system, but waits for pending I/Os to complete. You should attempt to resize a file system during a time when the file system is under less of a load.
Resizing a file system that has a mounted snapshot file system: If a snapshot file system is mounted on the file system being resized, the resize fails. File systems that have snapshots mounted on them cannot be resized.
Resizing a corrupt file system: A file system that has experienced structural damage and is marked for full fsck cannot be resized. If the resize fails due to structural damage, you must unmount the file system, perform a fsck, remount the file system, and try the resize again.
Resizing a file system that is nearly 100 percent full: The resize operation needs space to expand a file system, and if a file system is nearly 100 percent full, an error is returned. When increasing the size of a file system, you must first extend the size of the internal structural files. If the file system is full or almost full, this may not be possible. To address this problem, try one or more of the following options:
Increase the size by a smaller amount first.
Defragment the file system.
Move some files temporarily to another file system.


VxFS Utilities
VxFS backup utilities include:
vxdump
vxrestore

Options and parameters are similar to UFS backup utilities:


ufsdump
ufsrestore


Backing Up a File System


VxFS Backup and Restore Utilities When backing up files in a VERITAS file system, you can use standard file utilities such as tar and cpio, as well as the VxFS-specific utilities vxdump and vxrestore. The options and parameters for vxdump and vxrestore are similar to the standard UFS commands ufsdump and ufsrestore.


The vxdump Command


vxdump [options] mount_point
To dump the file system /fsorig to a file:
# vxdump -0 -f /backup1/firstdump /fsorig
(-0 specifies dump level 0; -f sends the dump to the named file; /fsorig is the mount point.)

To dump the file system to a tape: # vxdump -0 -f /dev/rmt/0 /fsorig


The vxdump Command
You can use the vxdump command to implement incremental file system dump levels. The vxdump command copies to magnetic tape or to a file all the files in the VERITAS file system that changed after a particular date. This information is derived from the files /etc/dumpdates and /etc/vfstab.
Note: It is recommended that you use vxdump only on a quiescent file system (either unmounted or mounted read-only). If you attempt to back up a file system that is mounted read/write in multiuser mode, it is possible to miss data or metadata held in memory and not yet flushed to disk, resulting in a corrupt backup. File system snapshots (discussed in detail later) provide a method for safely accessing a read-only view of an active mounted file system.
The vxdump command establishes a checkpoint at the start of each tape volume. If for any reason writing a volume fails, vxdump, with operator permission, restarts from the checkpoint after the old tape is rewound and removed and a new tape is mounted. The syntax for the vxdump command is:
vxdump [-clntuwW] [-number] [-b blocking_factor] [-B records] [-d density] [-f filename] [-s size] [-t tracks] [-T time] mount_point|special

You specify the vxdump command, followed by options, and the mount point of the file system to dump.


You can also use the traditional command line style:


vxdump [option [arguments. . .] file_system]

Using this syntax, you specify the options followed by arguments for those options. The first argument goes with the first option that takes an argument, the second argument goes with the second option that takes an argument, and so on.
The vxdump Options
If no arguments are specified, the default options are -9u. The default tape device is /dev/rmt/0m. The following table summarizes the vxdump options and their uses. For more information on these options, see the vxdump(1m) manual page.
Option                Use
-number               Indicates a dump level in the range 0 to 9 (The option -0 dumps the entire file system.)
-b blocking_factor    Sets a blocking factor (Default is 63.)
-B records            Sets the number of logical records per volume (The vxdump logical record size is 1024 bytes.)
-c                    Uses a cartridge
-d density            Sets tape density, which is used to calculate the amount of tape used per reel
-f filename           Dumps to a specific file
-l                    Ensures that autoloading tape drives have time to load a new tape if the end of the tape is reached
-n                    Notifies all users in the group operator whenever vxdump requires operator attention
-s size               Specifies size of the dump tape in feet
-t tracks             Specifies number of tracks for a cartridge (Default is 9.)
-T date               Specifies a starting time for the dump to override the time determined from the /etc/dumpdates file
-u                    Writes dump start dates to the file /etc/dumpdates
-w                    Prints file systems that need to be dumped
-W                    Prints recent dump dates and levels


Example: Dumping to a File
To dump the file system mounted at /fsorig to a specific file:
# vxdump -0 -f /backup1/firstdump /fsorig

Example: Dumping to a Tape
To dump the entire file system mounted at /fsorig onto a tape:
# vxdump -0 -f /dev/rmt/0 /fsorig

Using traditional syntax and specifying the tape size in logical records:
# vxdump 0Bf 2097152 /dev/rmt/0 /fsorig

The argument 2097152 goes with the option letter B, because it is the first specified option that requires an argument. The argument /dev/rmt/0 goes with the option f, because it is the second option that requires an argument.


The vxrestore Command


vxrestore [options] mount_point
To restore the file system from a file:
# vxrestore -vrf /backup1/firstdump /fsorig
(-v specifies verbose mode; -r restores into the current directory; -f restores from the named file; /fsorig is the mount point of the restored file system.)

To restore the file system from tape:


# vxrestore -vx /fsorig
(-x extracts named files from the tape.)

Restoring a File System


The vxrestore Command
You can use the vxrestore command to restore files previously copied to tape by the vxdump command. The options used with the vxrestore command are similar to those used with the ufsrestore command. The current version of vxrestore can read dumps produced by older versions of vxdump.
The vxrestore command can also restore files to a file system of a type other than VxFS. However, if the file system type does not support extent attributes, the extent attributes are not restored. Also, if the dump tape contains files larger than 2 GB, and the file system being restored to does not support files larger than 2 GB, the file is truncated to 2 GB.
The syntax for the vxrestore command is as follows:
vxrestore [-himrRtvxy] [-s number] [-b block_size] [-e opt] [-f file] [filename . . .] mount_point

In the syntax, you specify the command, followed by a variety of restore options, and one or more filename arguments specifying the files to restore. The mount_point is the mount point of the restored file system. The vxrestore command can also be used in the traditional command line style:
vxrestore key [filename. . .]


The vxrestore Options
The following table summarizes the vxrestore options and their uses. For more information on these options, see the vxrestore(1m) manual page.
Option       Use
-b           Specifies tape block size in kilobytes
-e           Handles extent attribute information
-f           Specifies the name of the archive other than the default
-h           Extracts the directory rather than the files
-i           Enables interactive interface
-m           Extracts by inode numbers rather than filename
-r           Restores into current directory
-R           Resumes a full restore
-s number    Specifies the dump file number
-t           Lists names of files if they occur on the tape
-v           Specifies verbose output
-x           Extracts named files from the tape
-y           Continues operation despite errors

Example: Restoring from a File
To restore a VERITAS file system from a file using the mount point /fsorig:
# vxrestore -vrf /backup1/firstdump /fsorig

Example: Restoring from a Tape
To restore a VERITAS file system using the mount point /fsorig by extracting the named files from the tape:
# vxrestore -vx /fsorig
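The table above also lists -i for an interactive interface. As a sketch, assuming the dump is on the default tape device, you could browse and select files interactively with:
# vxrestore -i -f /dev/rmt/0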

Note: vxrestore places the file restoresymtab in the current directory of the file system to pass information between incremental vxrestore passes. You can remove this file when the last incremental tape is restored.


Troubleshooting Tips
Regardless of the type of file system onto which you are restoring the data:
A ufsdump always requires a ufsrestore to restore the data.
A vxdump always requires a vxrestore to restore the data.


Troubleshooting Tips: Using vxdump and vxrestore
Problem: Unable to Restore Data from UFS to VxFS Using vxrestore
You receive an error when you try to use vxrestore to restore data from a ufsdump backup onto a new VxFS file system.
Solution
You must use ufsrestore to restore data from a ufsdump backup, regardless of the type of file system onto which it is being restored. Similarly, you must use vxrestore to restore data from a vxdump backup, even if you are restoring it onto a UFS file system.


Backing Up a File System


A traditional file system backup:
Prohibits user access during the process
Requires twice the disk space for mirroring
Results in inconsistent file contents and metadata if the backup is lengthy

A VERITAS file system backup:


Performs online backups using snapshot file systems
Works with standard utilities (tar, cpio) and VxFS-specific utilities (vxdump, vxrestore)

Creating a Snapshot File System


Backing Up a VERITAS File System
Backing up a file system on a regular basis is an essential administrative task that ensures that you can recover your data in the event of disk failure, system failure, accidental deletion, or corruption of files. VERITAS File System offers VxFS-specific versions of traditional backup utilities, in addition to snapshot file system technology, to enable you to back up your file system while it remains online.
Traditional File System Backups
During a traditional UFS backup, user access is prohibited while a physical copy of the files is created on another disk. This process results in user downtime and requires that you have twice the disk space when backing up an entire file system. During a long backup, file system metadata and file contents can become inconsistent if data changes during the backup.
Note: vxdump also cannot be used safely on a file system mounted read/write. Even data in the file system buffers that has not yet been flushed to disk before the vxdump operation begins may be missed. VxFS file system snapshots solve the problem of performing backups on file systems that cannot be taken offline for extended periods of time.


How a Snapshot Works


(Slide graphic: At 9 a.m., a snapshot of the file system is mounted; the snapshot is empty. At 10 a.m., blocks b and k change, so their original contents are copied to the snapshot. At 11 a.m., block b changes again, but the snapshot is not changed. A vxdump backup then reads the point-in-time image from the snapshot.)

What Is a Snapshot File System?
A snapshot file system is an image of a mounted file system that is an exact read-only copy of the file system at a certain point in time. When you create a snapshot file system, the original file system is referred to as the snapped file system, while the copy is called the snapshot. The snapshot is a consistent view of the snapped file system at the point in time the snapshot is made.
What Does a Snapshot File System Contain?
A snapshot file system acts as a database before-image log. When blocks are changed in the original file system, the original version of the blocks is copied to the snapshot. Subsequent changes to a changed block are not copied to the snapshot area again. The space allocated to the snapshot file system only needs to be large enough to contain the changed blocks. A snapshot file system does not need to be large enough to contain an entire second copy of the original file system. Therefore, a snapshot file system is an exact image of the original file system without the cost of duplicate disk space.
How Is a Snapshot File System Used in a Backup?
By using snapshot file systems with standard backup and restore commands, you can back up your VERITAS file systems while they remain online. When the snapshot is read, data that has not changed is read from the original file system. Changed data is read directly from the snapshot.


Snapshot Disk Structure


Superblock: Contains logistical information about the snapshot
Bitmap: Indicates whether a block in the original file system has changed
Blockmap: Provides the address of the data block in the snapshot that has a copy of the original file system block
Data blocks: Contain copies of the original contents of changed file system blocks

Snapshot File System Disk Structure
The disk structure of a snapshot file system consists of a superblock, bitmap, blockmap, and data blocks.
The superblock is similar to the superblock of a normal VxFS file system. It contains logistical information about the snapshot.
The bitmap contains one bit for every block on the snapped file system. Initially, all bitmap entries are zero. A set bit indicates that the appropriate block was copied from the snapped file system to the snapshot. In this case, the appropriate position in the blockmap references the copied block.
The blockmap contains one entry for each block on the snapped file system. Initially, all entries are zero. When a block is copied from the snapped file system to the snapshot, the appropriate entry in the blockmap is changed to contain the block number on the snapshot file system that holds the data from the snapped file system.
The data blocks used by the snapshot file system contain data copied from the snapped file system, starting from the front of the data block area.
Mounting a Snapshot File System
When you mount an empty disk slice as a snapshot of a currently mounted file system, the bitmap, blockmap, and superblock are initialized, and then the currently mounted file system is frozen. Next, the snapshot is enabled and mounted, and the snapped file system is thawed. This process takes only a few seconds. The snapshot is displayed as an exact image of the snapped file system at the time the snapshot was made.
Data Copied to Snapshot
Initially, the snapshot file system satisfies read requests by simply finding the data on the snapped file system and returning it to the requesting process. When a change occurs in block n of the snapped file system, the old data is read and copied to the snapshot before the snapped file system is updated. The bitmap entry for block n is changed from 0 to 1 (indicating that the data for block n can now be found on the snapshot file system). The blockmap entry for block n is changed from 0 to the block number on the snapshot file system containing the old data.

Reading a Snapshot
A subsequent read request for block n on the snapshot file system is satisfied by checking the bitmap entry for block n and reading the data from the indicated block on the snapshot file system, rather than from block n on the snapped file system. Subsequent writes to block n on the snapped file system do not result in additional copies to the snapshot file system, because the old data only needs to be saved once. All updates to the snapped file system for inodes, directories, data in files, extent maps, and so on, are handled in this fashion so that the snapshot can present a consistent view of all file system structures for the snapped file system for the time when the snapshot was created. As data blocks are changed on the snapped file system, the snapshot gradually fills with data copied from the snapped file system.

Creating a Snapshot File System
To create a snapshot file system, you use the -o snapof option of the mount command. There is no mkfs step involved. The syntax is as follows:

mount [-F vxfs] -o snapof=source[,snapsize=size] destination snap_mount_point

In the -o snapof option:
- The source is the special device name or mount point of the file system to copy.
- The snapsize option is required only if the device being mounted does not identify the device size in its disk label or if you want to select a size that is smaller than the entire device. The snapshot size is the size in sectors of the snapshot file system being mounted. The snapshot must be large enough to hold all of the data that changes on the snapped file system while the snapshot is mounted.
- The destination is the name of the special device on which to create the snapshot, and the snap_mount_point is where to mount the snapshot. The snapshot mount point must exist before you enter this command.

In VEA, you can also create a snapshot by highlighting a file system and selecting Actions>Snapshot>Create.

Example: Creating a Snapshot File System
A VERITAS file system is located on the device /dev/dsk/c0t6d0s2. To create a snapshot of this file system on /dev/dsk/c0t5d0s2 that is 32,768 sectors in size and mount it at /snapmount, you type:

# mkdir /snapmount
# mount -F vxfs -o snapof=/dev/dsk/c0t6d0s2,snapsize=32768 /dev/dsk/c0t5d0s2 /snapmount

You can also take a snapshot of a file system that resides on a VxVM volume. For example:

# mount -F vxfs -o snapof=/dev/vx/dsk/datadg/uservol /dev/vx/dsk/datadg/snapvol /snapmount
Using a Snapshot File System for Backup
After creating a snapshot file system, you can back up the file system from the snapshot while the snapped file system remains online. Any program that uses the standard UNIX file system API (for example, open, close, read, and write) should be able to access a full file system image by using the snapshot.

You can back up and restore selected files using utilities such as tar, cpio, vxdump, and vxrestore, or using commercial backup and restore products such as VERITAS NetBackup. Backup programs that function using the standard file system tree (such as cpio) can be used without modification on a snapshot file system, because the snapshot presents the same data as the snapped file system. Backup programs that access the disk structures of a VxFS file system (such as vxdump) make suitable modifications in their behavior so that their operation on a snapshot file system is indistinguishable from that on a normal file system.

When performing a binary dump of a file system using the dd command, you must use the fscat command to read the combined file system structure. Without fscat, the result is a binary dump of only the snapshot volume. The fscat command translates a snapshot file system back to raw file system information to obtain a raw image of the entire file system. This raw image is identical to that obtained by performing a dd of the disk device containing the snapped file system at the exact moment the snapshot was created.

When the backup is complete and you no longer need the snapshot, you unmount the snapshot file system.
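The following is a sketch of a binary dump through a snapshot, assuming the hypothetical snapshot volume /dev/vx/dsk/datadg/snapvol and that fscat, as described above, writes the reconstructed raw image to standard output:

# fscat /dev/vx/dsk/datadg/snapvol | dd of=/dev/rmt/0 bs=128k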
Backing Up a Snapshot File System
To back up a snapshot file system using the vxdump command, you specify the command, followed by vxdump command options, and the mount point of the snapshot file system:
vxdump [options] snap_mount_point

Example: Backing Up a Snapshot File System
To back up the VxFS snapshot file system mounted at /snapmount to the tape drive with the device name /dev/rmt/0, you type:

# vxdump -cf /dev/rmt/0 /snapmount

In this example, the -c option indicates a cartridge tape, and the -f option specifies the device to which the dump is sent.
Restoring from a Snapshot File System Backup
After backing up a file system from a snapshot, you can restore it using the vxrestore command. First, you create and mount an empty file system. Then, to restore the file system, you specify the vxrestore command, followed by command options, and the mount point of the restored file system:
vxrestore [options] mount_point
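A minimal restore session, assuming a hypothetical empty volume restvol in the disk group datadg and the default tape device (the -v and -x options follow the verbose-extract form used in this course):

# mkfs -F vxfs /dev/vx/rdsk/datadg/restvol
# mount -F vxfs /dev/vx/dsk/datadg/restvol /restore
# vxrestore -vx /restore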

Additional Examples
Here are some typical examples of backing up a 300,000 block file system named /home (which exists on disk /dev/dsk/c0t0d0s7) using a snapshot file system on /dev/dsk/c0t1d0s1 with a snapshot mount point of /backup/home.

To back up files changed within the last week using cpio:
# mount -F vxfs -o snapof=/dev/dsk/c0t0d0s7,snapsize=100000 /dev/dsk/c0t1d0s1 /backup/home
# cd /backup
# find home -ctime -7 -depth -print | cpio -oc > /dev/rmt/0
# umount /backup/home

To perform a full backup of /dev/dsk/c0t0d0s7 using vxdump and use dd to control the blocking of output onto the tape device:
# vxdump f - /dev/rdsk/c0t0d0s7 | dd bs=128k > /dev/rmt/0

To perform a level 3 backup of /dev/dsk/c0t0d0s7 and extract the files that have changed into the current directory:
# vxdump 3f - /dev/rdsk/c0t0d0s7 | vxrestore -xf -

To perform a full backup of a snapshot file system:
# mount -F vxfs -o snapof=/dev/dsk/c0t0d0s7,snapsize=100000 /dev/dsk/c0t1d0s1 /backup/home
# vxdump f - /dev/rdsk/c0t1d0s1 | dd bs=128k > /dev/rmt/0

The vxdump program determines whether /dev/rdsk/c0t1d0s1 is a snapshot mounted as /backup/home and performs the appropriate work to get the snapshot data through the mount point.

Managing Snapshot File Systems

Selecting Snapshot File System Size
The amount of disk space required for the snapshot depends on the rate of change of the snapped file system and the amount of time the snapshot is maintained. A snapshot file system is disabled if it runs out of blocks to hold copied data, and all further access to the snapshot file system fails. The failure of a snapshot file system does not affect the snapped file system.

A snapshot file system must be able to hold any blocks on the snapped file system that can be written to while the snapshot file system exists. If every file in the snapped file system were rewritten, the snapshot would require enough blocks to hold a copy of every block on the snapped file system, plus additional blocks for the data structures that make up the snapshot, or approximately 101 percent of the snapped file system size. Most file systems do not change at such an extreme rate.

A snapshot file system must be at least five percent of the size of the snapped file system. During a period of low activity when the system is relatively inactive (for example, on nights and weekends), the snapshot only needs to contain five to six percent of the blocks of the snapped file system. During a period of higher activity, the snapshot of an average file system might require 15 to 20 percent of the blocks of the snapped file system. These percentages tend to be lower for larger file systems and higher for smaller ones.
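As a worked example (illustrative figures only): for a 10 GB snapped file system backed up during an idle weekend window, planning for six percent of the blocks gives 0.06 x 10 GB, or roughly 600 MB, so a 640 MB snapshot device provides a margin. For a daytime backup of the same file system under heavy activity, planning for 20 percent gives 2 GB.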

Unmounting a Snapshot File System
A snapshot file system is always read-only and exists only as long as it and the file system that has been snapped are mounted. To unmount a snapped file system, you must first unmount any corresponding snapshots. A snapshot file system ceases to exist when unmounted. If remounted, the snapshot initializes and is available as a new snapshot.

Multiple Snapshots of One File System
You can have multiple snapshots of a single file system made at different times. However, it is not possible to make a snapshot of a snapshot. If multiple snapshots of the same snapped file system exist, writes are slower, because each snapshot must record the original data. Only the initial write to a block suffers this penalty. Subsequent writes to the same block do not change what is in the snapshot.

Performance of Snapshot File Systems
Snapshot file systems maximize the performance of the snapshot at the expense of writes to the snapped file system. Reads from a snapshot file system typically perform at nearly the throughput of reads from a normal VxFS file system, allowing backups to proceed at the full speed of the VxFS file system. The performance of reads from the snapped file system should not be affected. Writes to the snapped file system, however, typically average two to three times longer than without a snapshot, because the initial write to a data block now requires a read of the old data, a write of the data to the snapshot, and finally, the write of the new data to the snapped file system. Reads from the snapshot file system are impacted if the snapped file system is busy, because the snapshot reads are slowed by all of the disk I/O associated with the snapped file system. The overall impact of the snapshot depends on the read-to-write ratio of an application and the mixing of the I/O operations. For example, Oracle running an OLTP workload on a snapped file system was measured at about 15 to 20 percent slower than a file system that was not snapped.
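For example, two snapshots of the same file system taken on successive days might be mounted as follows (a sketch, using hypothetical volumes snapvol1 and snapvol2 in the disk group datadg, with /data as the snapped file system):

# mount -F vxfs -o snapof=/data /dev/vx/dsk/datadg/snapvol1 /snap.monday
# mount -F vxfs -o snapof=/data /dev/vx/dsk/datadg/snapvol2 /snap.tuesday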


Troubleshooting Tips: Snapshot File Systems

Problem: Snapshot File System Runs Out of Space During Backup
If a snapshot file system runs out of space during a backup, it is disabled. The snapshot file system may have been left mounted for too long by mistake, it may have been allocated too little disk space, or the primary file system may have had an unexpected burst of activity.

Solution
Ensure that the snapshot file system has the correct amount of space and determine the activity level on the primary file system. If the primary file system was unusually busy, rerun the backup. If the primary file system is no busier than normal, reschedule the backup to a time when the primary file system is relatively idle, or increase the amount of disk space allocated to the snapshot file system.
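For example, to retry a failed backup with a larger snapshot (a sketch, reusing the hypothetical names from earlier examples; the snapsize value is illustrative):

# umount /snapmount
# mount -F vxfs -o snapof=/data,snapsize=204800 /dev/vx/dsk/datadg/snapvol /snapmount
# vxdump -cf /dev/rmt/0 /snapmount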

Summary
You should now be able to:
- Resize a file system.
- Back up a file system by using vxdump.
- Restore a file system by using vxrestore.
- Create a snapshot file system.
- Manage snapshot file systems.

This lesson described the online administration features of VERITAS File System. Methods for resizing, backing up, and restoring a file system were covered, including procedures for creating a snapshot file system.

Next Steps
The next lesson describes how VERITAS File System handles one of the most common performance problems of any file system: fragmentation.

Additional Resource
VERITAS File System System Administrator's Guide: This guide describes VERITAS File System concepts, how to use various utilities, and how to perform backup procedures.

Lab 10: Online File System Administration
In this lab, you investigate online administration tasks, including:
- Resizing a file system
- Backing up and restoring a file system
- Creating a snapshot file system

Goal
This lab enables you to investigate and practice online administration tasks. In this lab, you resize a file system using fsadm, back up and restore a file system using vxdump and vxrestore, and create and use a snapshot file system.

To Begin This Lab
To begin the lab, go to Appendix A, "Lab Exercises." Lab solutions are contained in Appendix B, "Lab Solutions."

Lesson 11: Defragmenting a File System


Introduction

Overview
This lesson describes the online defragmentation utilities available with VERITAS File System (VxFS). This lesson also provides an overview of the VxFS file system layout versions and the file system structural components, which are designed to help minimize fragmentation. Procedures for upgrading the file system layout and for converting a UFS file system to VxFS are also covered.

Importance
Fragmentation is a problem common to all file systems. Traditional UNIX file systems require that you take the file system offline to reorganize, or defragment, it. VERITAS File System enables you to defragment a file system while it remains online and available to users.

Outline of Topics
- Extent-Based Allocation
- VxFS File System Layout Options
- Upgrading the File System Layout
- File System Structure
- Converting UFS to VxFS
- Fragmentation
- Monitoring Fragmentation
- Defragmenting a File System

Objectives
After completing this lesson, you will be able to:
- Describe VxFS extent-based allocation.
- List features of the VxFS file system layout versions.
- Upgrade the file system layout by using the vxupgrade command.
- Identify structural components of VxFS.
- Convert a UFS file system to VxFS.
- Define two types of fragmentation.
- Run fragmentation reports by using the fsadm command.
- Defragment a file system by using the fsadm command.


Extent-Based Allocation

Comparing VxFS with Traditional UNIX Allocation Policies
Both VxFS and traditional UNIX file systems, such as UFS, implement variations of the indexed allocation method, using index tables to store location information about the blocks used for files. However, VxFS allocation is extent-based, while UFS allocation is block-based.
- Block-based allocation: File systems that use block-based allocation assign disk space to a file one block at a time.
- Extent-based allocation: File systems that use extent-based allocation assign disk space in groups of contiguous blocks, called extents.

UFS Block-Based Allocation
UFS allocates space for files one block at a time. When allocating space to a file, UFS uses the next rotationally adjacent block until the file is stored.

Block Clustering
UFS can perform at a level similar to an extent-based file system on sequential I/O by using a technique called block clustering. In UFS, the maxcontig file system tunable parameter can be used to cluster reads and writes together into groups of multiple blocks. Through block clustering, writes are delayed so that several small writes are processed as one large write. Sequential read requests can be processed as one large read through read-ahead techniques. Block clustering becomes less effective as the file system fills and free blocks must be allocated wherever they can be found in the free block map.

Metadata Overhead
Block-based allocation requires extra disk I/O to write file system block structure information, or metadata. Metadata is always written synchronously to disk, which can significantly slow overall file system performance.

Fragmentation
Over time, block-based allocation produces a fragmented file system with random file access.

VxFS Extent-Based Allocation
VERITAS File System selects a contiguous range of file system blocks, called an extent, for inclusion in a file. The number of blocks in an extent varies and is based on either the I/O pattern of the application or explicit requests by the user or programmer. Extent-based allocation enables larger I/O operations to be passed to the underlying drivers. VxFS attempts to allocate each file in one extent of blocks. If this is not possible, VxFS attempts to allocate all extents for a file close to each other.

Inodes
Each file is associated with an index block, called an inode. In an inode, an extent is represented as an address-length pair, which identifies the starting block address and the length of the extent in logical blocks. This enables the file system to directly access any block of the file.

Extent Size
VxFS automatically selects an extent size by using a default allocation policy that is based on the size of I/O write requests. The default allocation policy attempts to balance two goals:
- Optimum I/O performance through large allocations
- Minimal file system fragmentation through allocation from space available in the file system that best fits the data

The first extent allocated is large enough for the first write to the file. Typically, the first extent is the smallest power of 2 that is larger than the size of the first write, with a minimum extent allocation of 8K. Additional extents are progressively larger, doubling the size of the file with each new extent. This method reduces the total number of extents used by a single file. There is no restriction on the size of an extent. When a file needs to expand to a size larger than the extent size, the operating system allocates another extent of disk blocks, and the inode is updated to include a pointer to the first block of the new extent along with its size.


Benefits of Extent-Based Allocation
Benefits of extent-based allocation include:
- Good performance: By grouping multiple blocks into large writes, extent-based allocation is faster than block-at-a-time operations. Note: Random I/O does not benefit as much, because the I/O sizes are generally small. To perform a random read of a file, the file system must look up the block address for each desired block, which is similar to block-based allocation.
- Less metadata overhead: Metadata is written when a file is created, but subsequent writes within an extent do not require additional metadata writes. Therefore, a file with only a few very large extents requires only a small amount of metadata. Also, to read all blocks in an extent sequentially, the file system must only read the starting block number and the length of the extent, resulting in very little sequential read overhead.

Extent-based allocation can address files of any supported size up to 20 GB directly and efficiently. Also, large files can be accessed with fewer pointers and less indirection than with block-based allocation.

Note: Improper extent sizes can reduce the performance benefits, as follows:
- If the extent size is too small, the system loses some performance benefits and acts more like an indexed allocation system.
- If the extent size is too large, the file system contains allocated disk space that is not actually in use, which is wasted space.


VxFS File System Layout Options

File System Layout
The placement of file system structures and the organization of user data on disk is referred to as the file system layout. The evolution of VERITAS File System has included five different file system layout versions. Each version has become increasingly complex to support greater scalability for large files and to minimize file system fragmentation.

VERITAS File System Layout Versions
The versions of the VERITAS file system layout are:
- Version 1: The Version 1 layout was the original layout for VxFS release 1.x. This version introduced intent logging, extent allocation, and unlimited inodes.
- Version 2: The Version 2 layout was introduced with VxFS release 2.x. This version added dynamic inode allocation and support for access control lists and quotas to the Version 1 layout.
- Version 3: The Version 3 layout is specific to the HP-UX operating system.
- Version 4: The Version 4 layout was introduced with VxFS release 3.2.x. This layout added large file support and the ability for extents to span allocation units.
- Version 5: The Version 5 layout was introduced with VxFS release 3.5. This layout enables the creation of file systems up to 32 TB in size. Files can be a maximum of 2 TB. File systems larger than 1 TB must be created on a VERITAS Volume Manager volume and on a 64-bit kernel operating system.


Upgrading the File System Layout

Upgrading the Layout
For better performance, you should use file system layout Version 5 for all new file systems. By default, any new file system that you create using VxFS 3.5 or later has file system layout Version 5. You can upgrade an existing file system that has an earlier file system layout to Version 5 by using the vxupgrade command. The upgrade does not require an unmount and can be performed online. VERITAS recommends upgrading Version 1 and Version 2 layouts to Version 5. In future releases, the Version 1 and Version 2 file system layouts will not be supported.

Performing Online Upgrades
Only a privileged user can upgrade the file system layout. After you upgrade to a later layout version, you cannot downgrade to an earlier layout version while the file system is online. You must perform the layout upgrade in stages when using the vxupgrade command; you cannot upgrade Version 1 and Version 2 file systems directly to Version 5. For example, you must upgrade from Version 1 to Version 2, then from Version 2 to Version 4, and finally from Version 4 to Version 5.

The vxupgrade Command
To upgrade the VxFS file system layout, you use the vxupgrade command. The vxupgrade command only operates on file systems mounted for read/write access. The syntax for the command is:

vxupgrade [-n new_version] [-o noquota] [-r rawdev] mount_point

In the syntax, you use the -n option to specify the new file system layout version number to which you are upgrading. The new version can be 2, 4, or 5. By default, vxupgrade -n 2 creates a file system layout with quotas. You can add the -o noquota option to create a Version 2 file system layout without quotas. This is the same file system layout supported by VxFS 2.x releases prior to VxFS 2.3. Before upgrading to a Version 2 layout, you should remove or rename the quotas file in the root directory.

The -r rawdev option specifies the path of the raw device. You use this option when vxupgrade cannot determine which raw device corresponds to the mount point, for example, when /etc/mnttab is corrupted. You complete the command by specifying the mount point that identifies the mounted VxFS file system.

Using the vxupgrade Command
A VxFS file system with the Version 2 file system layout is mounted at /mnt. To upgrade this file system to the Version 5 layout, you execute the following sequence of commands:
# vxupgrade -n 4 /mnt
# vxupgrade -n 5 /mnt

If you attempt to upgrade directly from file system layout Version 2 to Version 5, you receive an error:
# vxupgrade -n 5 /mnt
ux: vxfs vxupgrade: ERROR: /dev/vx/rdsk/datadg/datavol: current version is 2 with quotas
Can only upgrade to 4

Displaying the File System Layout Version
You can use the vxupgrade command without the -n option to display the file system layout version number of a file system. To display the file system layout version number of a VERITAS file system mounted at /mnt, you type:

# vxupgrade /mnt
/mnt: vxfs file system version 2 layout with quotas

In the output, the current file system layout version is displayed.


How Does vxupgrade Work?
The upgrade process follows this sequence of events:
1. The vxupgrade command creates a lock file in /lost+found/.fsadm. The lock file blocks any use of the fsadm utility on the file system during the vxupgrade procedure.
2. The file system is frozen.
3. New file system structures are allocated and initialized.
4. The file system thaws, and the inodes are released.
5. The lock file in /lost+found/.fsadm is removed.

This process does not keep the file system frozen for more than a few seconds.


File System Structure

UFS Structure
A UFS file system is divided into one or more cylinder groups. The UFS cylinder group is designed to reduce the effects of fragmentation. Files are generally allocated space within a single cylinder group, which means that an inode and the data blocks it references are within reasonable proximity. If an additional address block is required, it is located in another cylinder group, and all the data blocks referenced by that address block are within that same cylinder group.

A cylinder group can contain five components:
- Bootblock: The bootblock stores procedures used in booting the system.
- Superblock: The superblock stores detailed information about the file system.
- Cylinder group map: The cylinder group map tracks free and used blocks and fragments.
- Inodes: Inodes contain information about files.
- Storage blocks: Storage blocks, also called data blocks, contain the actual data for each file.

In a typical UFS file system, the first cylinder group (cylinder group 0) contains the bootblock, the superblock, inodes, and data blocks. All other cylinder groups (cylinder group 1, cylinder group 2, and so on) contain inodes, data blocks, and a replica of the superblock. Superblock replicas are offset by a different amount within each cylinder group.
VxFS Structural Components
The structure of a VERITAS file system is complex, and only the main structures are presented in this topic. For more information about structural components, see the VERITAS File System System Administrator's Guide. VxFS layout Versions 4 and 5 include the following structural components:
- Allocation units
- Structural files

Allocation Units
With VxFS layout Versions 4 and 5, the entire file system space is divided into fixed-size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks. A file system with a block size of 1K has an AU size of 32 MB, and for a block size of 8K, the AU size is 256 MB. An exception is the last allocation unit in the file system, which occupies whatever space remains at the end of the file system. An allocation unit is roughly equivalent to the cylinder group in UFS.

Structural Files
All structural information about the file system is contained in files within a structural fileset. With the exception of the superblock, which has a known location, structural files are not stored in a fixed location. The object location table (OLT) is used to keep track of the locations of the other structural files.

Earlier VxFS layout versions placed structural information in fixed locations within allocation units. When structural information is separated from the allocation units, expansion of the file system simply requires extending the appropriate structural files. This design also removes the extent size restrictions of layout Versions 1 and 2 by enabling extents to span allocation units. The structural files in the VxFS Version 4 and 5 file system layouts are:

- Object Location Table File: Contains the object location table (OLT), which is used to locate the other structural files.
- Label File: Encapsulates the superblock and superblock replicas. The superblock contains fundamental information about the file system, such as file system type, size, layout, and available resources. The location of the primary superblock is known. The label file can locate superblock copies if there is structural damage to the file system.
- Device File: Records device information, such as volume length and volume label, and contains pointers to other structural files.
- Fileset Header File: Holds information on a per-fileset basis, which may include the inode of the fileset's inode list file, the maximum number of inodes allowed, an indication of whether the file system supports large files, and the inode number of the quotas file if the fileset supports quotas.
- Inode List File: Contains the inode lists for the fileset. Increasing the number of inodes involves increasing the size of this file after expanding the inode allocation unit file.
- Inode Allocation Unit File: Holds the free inode map, the extended operations map, and a summary of inode resources.
- Log File: Maps the blocks used by the file system intent log. (The intent log is a record of current activity used to guarantee file system integrity in the event of system failure.)
- Extent Allocation Unit State File: Indicates the allocation state of each AU by defining whether each AU is free, allocated as a whole (no bitmaps allocated), or expanded.
- Extent Allocation Unit Summary File: Contains the AU summary for each allocation unit, which holds the number of free extents of each size. (The summary for an extent is created only when an allocation unit is expanded for use.)
- Free Extent Map File: Contains the free extent maps for each of the allocation units.
- Quotas Files: If the file system supports quotas, a quotas file is used to track the resources allocated to each user.

Converting UFS to VxFS
To take advantage of the VxFS structural components designed to help minimize fragmentation, you can convert existing UFS file systems to VERITAS file systems by using the vxfsconvert utility. This utility is available with VxFS version 3.4 and later.

Note: The vxfsconvert utility also supports the conversion of HFS file systems to VxFS file systems on HP-UX.

What Block Sizes Can Be Converted?
The utility supports the conversion of all file system block sizes, except for file systems with a fragment size of 512 bytes. After a file system is converted to VxFS, its block size is the value of the fragment size before conversion.

How Much Free Space Is Required?
vxfsconvert requires sufficient disk space to convert the existing metadata to VxFS metadata. Free space must be available within the file system, or immediately after the end of the file system, and on the same device or volume on which the file system resides. The free space required by vxfsconvert is approximately 12 to 15 percent of the total file system size, depending on the number and size of directories, the number and size of files, and the number of allocated inodes.

How Long Does the Conversion Take?
Running vxfsconvert takes approximately two to three times longer than running the file system-specific fsck on UFS. Running vxfsconvert on the raw device is almost always faster than running it on a block device.


The vxfsconvert Command
The syntax for the /opt/VRTSvxfs/sbin/vxfsconvert command is:

vxfsconvert [-s size] [-efnNvyY] special

special is the character (raw) disk device. Conversion options include:
- -e: Estimates the amount of space required to complete the conversion. The file system is not converted to VxFS, and the file system remains clean. In general, free space is overestimated.
- -f: Displays the list of supported file system types. Currently, UFS on Solaris and HFS on HP-UX are supported.
- -n|N or -y|Y: Assumes a No (n|N) or Yes (y|Y) response to all questions asked by the vxfsconvert process.
- -s size: Specifies the amount of available disk space beyond the end of the file system that can be used for the conversion process. size is in kilobytes. If -s is not specified, vxfsconvert uses free blocks from within the previous file system layout to complete the conversion. If -s is specified, vxfsconvert uses the space past the current end of the file system. If the device is a volume, you can use vxassist to expand the volume to create space for using -s. If the device is a raw partition, you can use -s only if there is extra space on the partition past the end of the file system.
- -v: Shows the conversion progress for every inode converted.
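For example, a hypothetical space check followed by a conversion that uses extra volume space (the device names are illustrative; 100 MB equals 102,400 KB for the -s option):

# vxfsconvert -e /dev/vx/rdsk/datadg/datavol
# vxassist -g datadg growby datavol 100m
# vxfsconvert -s 102400 -y /dev/vx/rdsk/datadg/datavol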


UFS to VxFS Conversion Process
Follow these steps to ensure a successful conversion:
1. Ensure that VxFS 3.2 or higher is installed on your system. vxfsconvert creates a Version 4 file system layout.
2. Clean and unmount the file system that you want to convert. vxfsconvert cannot convert a mounted or dirty file system.
3. Perform a full backup of the file system before starting the conversion process.
4. Ensure that free space is available for the conversion. For example, to check the available free space on the device /dev/vx/dsk/datadg/datavol and the amount of free space required for the conversion:
# df -k /dev/vx/dsk/datadg/datavol
# vxfsconvert -e /dev/vx/rdsk/datadg/datavol
The -e option checks for required free space but does not perform the conversion. When the file system is on a VERITAS Volume Manager (VxVM) volume, if you need to add space for the conversion, you can enlarge the volume by adding to the end of the device. For example:
# vxassist -g datadg growby datavol 100m
Without VxVM, the space required to complete the conversion comes directly from the free space available on the UFS file system. After the conversion, the space becomes part of the converted file system.

5. To convert the file system, run the vxfsconvert command. For example, to convert the file system on the volume datavol in the disk group datadg:
# vxfsconvert /dev/vx/rdsk/datadg/datavol
6. After the file system is converted, run fsck on the converted file system. During pass 4 of the VxFS-specific fsck, several error messages are displayed, because vxfsconvert does not create all metadata files. You must respond yes to all of these messages so that fsck can complete the conversion process. For example:
# fsck -F vxfs -y -o full /dev/vx/rdsk/datadg/datavol
super-block indicates that intent logging was disabled
cannot perform log replay
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
fileset 1 au 0 imap incorrect - fix (ynq)y
fileset 1 au 0 iemap incorrect - fix (ynq)y
fileset 999 au 0 imap incorrect - fix (ynq)y
fileset 999 au 0 iemap incorrect - fix (ynq)y
corrupted CUT entries, clear? (ynq)y
au 0 emap incorrect - fix? (ynq)y
au 0 summary incorrect - fix? (ynq)y
au 1 emap incorrect - fix? (ynq)y
au 1 summary incorrect - fix? (ynq)y
au 1 state file incorrect - fix? (ynq)y
fileset 1 iau 0 summary incorrect - fix? (ynq)y
fileset 999 iau 0 summary incorrect - fix? (ynq)y
free block count incorrect 0 expected 48878 fix? (ynq)y
free extent vector incorrect fix? (ynq)y
OK to clear log? (ynq)y
set state to CLEAN? (ynq)y
7. The final step in the conversion process is to mount and reorganize the file system. For example, to mount the file system at the mount point /mnt and then reorganize it in the background:
# mount -F vxfs /dev/vx/dsk/datadg/datavol /mnt
# fsadm -ed /mnt &
8. The vxfsconvert command creates a Version 4 file system layout. After converting the file system from UFS to VxFS, you can upgrade the file system layout to Version 5 by using the vxupgrade command:
# vxupgrade -n 5 /mnt


FAQs About vxfsconvert

What Is Converted?
The vxfsconvert utility converts:
- Regular files
- Directories
- Symbolic links
- Character devices
- Block devices
- Sockets
- Named pipes

What Is Not Converted?
The vxfsconvert utility does not convert:
- Access control lists (ACLs)
- Quotas

What If the Conversion Fails?
If the conversion fails (for example, due to an I/O failure), run fsck to return to the original file system:
# fsck -F ufs /dev/vx/rdsk/device_name

How Does the Conversion Process Work?
To convert a file system, vxfsconvert performs these tasks:
1. vxfsconvert examines the superblock to make sure it is marked CLEAN.
2. Based on information in the file system superblock, vxfsconvert sets up the VxFS metadata. This includes initializing all metadata required by the VxFS Version 4 disk layout (for example, the OLT, the log, and the structural fileset).
3. vxfsconvert reads and converts each inode in the file system to a VxFS inode.
4. For every regular file inode, vxfsconvert allocates and initializes enough extent data to map all of the file's data blocks. This translates only the representation of the file's data blocks from the old format to that of VxFS. It never copies or relocates user data blocks.
5. For every UFS directory inode, vxfsconvert allocates sufficient disk space to hold all the VxFS directory entries. vxfsconvert converts each directory entry to a VxFS directory entry and writes all converted directory blocks.
6. vxfsconvert then converts all symbolic link, character special, block special, FIFO, and socket inodes to VxFS.
Note: Up to this point, all metadata of the original file system is intact, and the conversion process can be stopped.
7. vxfsconvert replaces the original superblock with the VxFS superblock and clears any alternate superblocks written by the original file system. After the superblock is overwritten, the original file system is no longer accessible and is fully converted to VxFS.


Fragmentation

What Is Fragmentation?
In a VERITAS file system, when free resources are initially allocated to files, they are aligned in the most efficient order possible to provide optimal performance. On an active file system, the original order is lost over time as files are created, removed, and resized. As space is allocated to and deallocated from files, the available free space becomes broken up into fragments, and space has to be assigned to files in smaller and smaller extents. This process is known as fragmentation. Fragmentation leads to degraded performance and availability. The degree of fragmentation depends on file system usage and activity patterns.

Controlling Fragmentation
Allocation units in VxFS and cylinder groups in UFS are both designed to help minimize and control fragmentation. However, over time both file systems eventually become fragmented. In UFS, cylinder group allocation policies reduce the likelihood of data fragmentation by placing inodes and data blocks in close proximity. This strategy reduces, but does not eliminate, fragmentation. To eliminate fragmentation in UFS, you must back up and reload the file system, which results in user downtime.

VxFS provides online reporting and optimization utilities that enable you to monitor and defragment a mounted file system. These utilities are accessible through the file system administration command, fsadm. Using the fsadm command, you can track and eliminate fragmentation without interrupting user access to the file system, as shown in the preview below.

Types of Fragmentation
VxFS addresses two types of fragmentation:
- Directory fragmentation: As files are created and removed, gaps are left in directory inodes. This is known as directory fragmentation. Directory fragmentation causes directory lookups to become slower.
- Extent fragmentation: As files are created and removed, the free extent map for an allocation unit changes from having one large free area to having many smaller free areas. Extent fragmentation occurs when files cannot be allocated in contiguous chunks and more extents must be referenced to access a file. In a case of extreme fragmentation, a file system may have free space, none of which can be allocated.


Monitoring Fragmentation

Running Fragmentation Reports
You can monitor fragmentation in a VERITAS file system by running reports that describe fragmentation levels. You use the fsadm command to run reports on both directory and extent fragmentation. The df command, which reports on file system free space, also provides information useful in monitoring fragmentation.
- The fsadm -D command reports on directory fragmentation.
- The fsadm -E command reports on extent fragmentation.
- The df -F vxfs -o s command prints the number of free extents of each size.


Running the Directory Fragmentation Report
To obtain a directory fragmentation report, you use the -D option of the fsadm command:

fsadm -D mount_point

In the syntax, you specify the fsadm -D command and the mount point that identifies the file system.

Example: Reporting on Directory Fragmentation
# fsadm -D /mnt1
       Dirs      Total    Immed   Immeds   Dirs to   Blocks to
       Searched  Blocks   Dirs    to Add   Reduce    Reduce
total  486       99       388     6        6         6


Interpreting the Report
Output for the fsadm -D command contains the following columns of information:
- Dirs Searched: Total number of directories. A directory is associated with the extent allocation unit containing the extent in which the directory's inode is located.
- Total Blocks: Total number of blocks used by directory extents.
- Immed Dirs: Number of directories that are immediate, that is, the directory data is in the inode itself, as opposed to being in an extent. Immediate directories save space and speed up pathname resolution.
- Immeds to Add: Number of directories that currently have a data extent, but that could be reduced in size and contained entirely in the inode.
- Dirs to Reduce: Number of directories for which one or more blocks could be freed if the entries in the directories are compressed to make the free space in the directory contiguous.
- Blocks to Reduce: Number of blocks that could be freed if the entries in the directory are compressed.

The directories that fragment are usually those with the most activity. A small number of fragmented directories can account for a large percentage of the name lookups in the file system. If the total in the Dirs to Reduce column is substantial, you can improve the performance of pathname resolution through defragmentation, as shown below.
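For example, to compress the fragmented directories reported above (the -d option requests directory reorganization, which is covered in more detail later in this lesson):

# fsadm -d /mnt1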



Running the Extent Fragmentation Report
To obtain an extent fragmentation report, you use the -E option of the fsadm command:

fsadm -E [-l largesize] mount_point

In the syntax, you specify the fsadm -E command followed by the mount point that identifies the file system. By default, the largesize value is 64 blocks. This means that the extent fragmentation report considers extents of 64 blocks or larger to be immovable; that is, reallocating and consolidating these extents does not improve performance. You can specify a different largesize value by using the -l option, as in the example after this paragraph.
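For example, to report on /home while treating only extents of 128 blocks or larger as immovable (a hypothetical threshold):

# fsadm -E -l 128 /home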


Example: Reporting on Extent Fragmentation

# fsadm -E /home
Extent Fragmentation Report
 Total Files  Average File Blks  Average # Extents  Total Free Blks
         939                 11                  2           245280
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 8.35
% Free blocks in extents smaller than  8 blks: 4.16
% blks allocated to extents 64 blks or larger: 45.81
 Free Extents By Size
     1:    356      2:    292      4:    271      8:    181
    16:     23     32:      7     64:      3    128:      1
   256:      1    512:      0   1024:      1   2048:      1
  4096:      2   8192:      2  16384:      1  32768:      2
 . . .
Interpreting the Report
Output for the fsadm -E command contains the following information:

Total Files                Total number of files that have data extents.
Average File Blks          Average number of blocks belonging to all files.
Average # Extents          Average number of extents used by files in the file system.
Total Free Blks            Total number of free blocks in the file system.
Blocks used for indirects  Number of blocks used for indirect address extents.
Percentages                Percentage of free extents smaller than 64 blocks; percentage of free extents smaller than 8 blocks; and percentage of blocks that are part of extents 64 blocks or larger. (Files with a single small extent are not included in the last calculation. This number is generally large on file systems that contain many large files and small on file systems that contain many small files.)
Free Extents By Size       Total free extents of each size, up to a maximum size of the number of data blocks in an allocation unit.


Interpreting Fragmentation

Percentage                                                      Unfragmented  Badly Fragmented
% of free space in extents of less than 64 blocks               < 5%          > 50%
% of free space in extents of less than 8 blocks                < 1%          > 5%
% of total file system size in extents of more than 64 blocks   > 5%          < 5%
Guidelines for Interpreting Fragmentation Data
In general, for optimum performance, the percentage of free space in a file system should not fall below 10 percent. A file system with 10 percent or more free space has less fragmentation and better extent allocation. The simplest way to determine the degree of fragmentation is to view the percentages in the extent fragmentation report and follow these guidelines:
An unfragmented file system has one or more of the following characteristics:
- Less than five percent of free space in extents of less than 64 blocks in length
- Less than one percent of free space in extents of less than eight blocks in length
- More than five percent of the total file system size available as free extents in lengths of 64 or more blocks
A badly fragmented file system has one or more of the following characteristics:
- More than 50 percent of free space used by small extents of less than 64 blocks in length
- A large number of small extents that are free (Generally, a fragmented file system has greater than five percent of free space in extents of less than 8 blocks in length.)
- Less than five percent of the total file system size available in large extents, which are defined as free extents in lengths of 64 or more blocks


Percentage                                                                       Unfragmented  Badly Fragmented
Percentage of free space in extents of less than 64 blocks in length             < 5%          > 50%
Percentage of free space in extents of less than 8 blocks in length              < 1%          > 5%
Percentage of total file system size in extents of length 64 blocks or greater   > 5%          < 5%

Example: Fragmented File System
The following extent fragmentation report shows a fragmented file system.

Extent Fragmentation Report
 Total Files  Average File Blks  Average # Extents  Total Free Blks
         939                 11                  2           245280
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 10.81
% Free blocks in extents smaller than  8 blks: 8.16
% blks allocated to extents 64 blks or larger: 44.81
 . . .


Defragmenting a File System

Syntax:
fsadm [-d] [-D] [-e] [-E] [-s] [-v] [-l largesize] [-a days] [-t time] [-p passes] [-r rawdev] mount_point

-d  Reorganize directories
-D  Report on directories
-e  Reorganize extents
-E  Report on extents
-a  Aged files
-t  Time to run reorganization
-p  Number of passes to run
-s  Summarize activity
-v  Verbose reporting
-l  Size of large files

Defragmenting a File System


VxFS Defragmentation
You can use the online administration utility fsadm to defragment, or reorganize, file system directories and extents. The fsadm utility defragments a file system mounted for read/write access by:
- Removing unused space from directories
- Making all small files contiguous
- Consolidating free blocks for file system use
Only a privileged user can reorganize a file system.
The fsadm Command
The syntax for the fsadm command is:
fsadm [-d] [-D] [-e] [-E] [-s] [-v] [-l largesize] [-a days] [-t time] [-p passes] [-r rawdev] mount_point

In the syntax, you specify the fsadm command, followed by options specifying the type and amount of defragmentation to perform. You complete the command by specifying the mount point or raw device to identify the file system.


The options available are:

-d      Reorganizes directories. Directory entries are reordered to place subdirectory entries first, then all other entries in decreasing order of time of last access. The directory is also compacted to remove free space.
-a      Used in conjunction with the -d option to consider files not accessed within the specified number of days as aged files. Aged files are moved to the end of the directory. The default is 14 days.
-e      Reorganizes extents. Files are reorganized to have the minimum number of extents.
-D, -E  Produce reports on directory and extent fragmentation, respectively.
-v      Specifies verbose mode and reports reorganization activity. You can use the -v option to examine the amount of work performed by fsadm. You can adjust the frequency of reorganization based on the rate of file system fragmentation.
-l      Specifies the size of a file that is considered large. The default is 64 blocks. Extent reorganization tries to group large files into large extents of at least 64 blocks.
-t      Specifies a maximum length of time to run, in seconds.
-p      Specifies a maximum number of passes to run. By default, fsadm runs five passes.
-s      Prints a summary of activity at the end of each pass.
-r      Specifies the pathname of the raw device to read to determine file layout and fragmentation. This option is used when fsadm cannot determine the raw device.
Notes on fsadm Options
If you specify both -d and -e, directory reorganization is always completed before extent reorganization. If you use the -D and -E options with the -d and -e options, the fragmentation reports are produced both before and after the reorganization, as in the combined invocation sketched below. The -t and -p options control the amount of work performed by fsadm, either in a specified time or by a number of passes. By default, fsadm runs five passes. If both -t and -p are specified, fsadm exits if either of the terminating conditions is reached.
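As an illustration of combining these options, the following sketch (assuming a file system mounted at /mnt1, as in the other examples in this lesson) reorganizes directories and then extents, printing fragmentation reports before and after the work and a summary at the end of each pass:

# fsadm -d -e -D -E -s /mnt1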


Defragmenting Extents
During extent reorganization:
- Small files are made contiguous.
- Large files are built from large extents.
- Small, recent files are moved near the inodes.
- Large, old files are moved to the end of the AU.
- Free space is clustered in the center.

Example: fsadm -e -E -s /mnt1

Defragmenting Extents
Defragmenting extents, called extent reorganization, can improve performance. During extent reorganization:
- Small files (less than 64K) are made into one contiguous extent.
- Large files are built from large extents.
- Small and recently used (less than 14 days) files are moved near the inode area.
- Large or old files (more than 14 days since last access) are moved to the end of the allocation unit.
- Free space is clustered in the center of the data area.
Extent reorganization is performed on all inodes in the file system. Each pass through the inodes moves the file system closer to optimal organization.
Duration of Defragmentation
The time it takes to complete extent reorganization varies, depending on the degree of fragmentation, disk speed, and the number of inodes in the file system. In general, extent reorganization takes approximately one minute for every 100 megabytes of disk space.
Example: Defragmenting Extents
In the example, the fsadm command is used to reorganize extents in a file system mounted at /mnt1. Fragmentation reports are run both before and after the reorganization, and summary statistics are printed at the end of each pass.


# fsadm -e -E -s /mnt1
Extent Fragmentation Report
 Total Files  Average File Blks  Average # Extents  Total Free Blks
         939                 11                  2           245280
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 8.35
% Free blocks in extents smaller than  8 blks: 4.16
% blks allocated to extents 64 blks or larger: 45.81
 Free Extents By Size
     1:    356      2:    292      4:    271      8:    181
    16:     23     32:      7     64:      3    128:      1
   256:      1    512:      0   1024:      1   2048:      1
  4096:      2   8192:      2  16384:      1  32768:      2

Pass 1 Statistics
        Extents   Reallocations  Ioctls             Errors
        Searched  Attempted      Issued   FileBusy  NoSpace  Total
total   2016      1473           789      0         0        0

Pass 2 Statistics
        Extents   Reallocations  Ioctls             Errors
        Searched  Attempted      Issued   FileBusy  NoSpace  Total
total   1836      0              0        0         0        0

Extent Fragmentation Report
 Total Files  Average File Blks  Average # Extents  Total Free Blks
         939                 11                  1           245280
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 0.46
% Free blocks in extents smaller than  8 blks: 0.03
% blks allocated to extents 64 blks or larger: 45.53
 Free Extents By Size
     1:     10      2:      1      4:      1      8:      4
    16:      3     32:      4     64:      3    128:      3
   256:      3    512:      4   1024:      4   2048:      2
  4096:      3   8192:      1  16384:      1  32768:      2


In the example, the default five passes were scheduled, but the reorganization finished in two passes. The columns in the Pass Statistics reports contain the following information:

Extents Searched         Total number of extents examined.
Reallocations Attempted  Total number of consolidations or merging of extents performed.
Ioctls Issued            Total number of reorganization request calls made during the pass. (This corresponds closely to the number of files that are being operated on in that pass, because most files can be reorganized with a single ioctl. More than one extent may be consolidated in one operation.)
FileBusy                 Total number of reorganization requests that failed because the file was active during reorganization. (This column is located under the heading Errors.)
NoSpace                  Total number of reorganization requests that failed because an extent presumed free was allocated during the reorganization. (This column is located under the heading Errors.)
Total                    Total number of errors encountered during the reorganization, which may include errors that were not included with FileBusy or NoSpace. (This column is located under the heading Errors.)


Defragmenting Directories
During directory reorganization:
- Valid entries are moved to the front.
- Free space is grouped at the end.
- Directories are packed into the inode area.
- Directories are placed before other files.
- Entries are sorted by access time.

Example: fsadm -d -D /mnt1

Defragmenting Directories
Defragmenting directories, called directory reorganization, is not nearly as critical as extent reorganization, but regular directory reorganization improves performance. Directories are reorganized through compression and sorting. During directory reorganization:
- Valid entries are moved to the front of the directory.
- Free space is grouped at the end of the directory.
- Directories and symbolic links are packed into the inode immediate area.
- Directories and symbolic links are placed before other files.
- Entries are sorted by the time of last access.
Example: Defragmenting Directories
In the example, the fsadm command is used to reorganize directories in a file system mounted at /mnt1. Fragmentation reports are run both before and after the reorganization.


# fsadm -d -D /mnt1
Directory Fragmentation Report
        Dirs      Total   Immed  Immeds  Dirs to  Blocks to
        Searched  Blocks  Dirs   to Add  Reduce   Reduce
total   1365      1512    4      0       1        3149

Directory Reorganization Statistics (pass 1 of 2)
          Dirs      Dirs     Total   Failed  Blocks   Blocks   Immeds
          Searched  Changed  Ioctls  Ioctls  Reduced  Changed  Added
fset 999  1361      1        128     0       3        253      0
total     1361      1        128     0       3        253      0

Directory Reorganization Statistics (pass 2 of 2)
          Dirs      Dirs     Total   Failed  Blocks   Blocks   Immeds
          Searched  Changed  Ioctls  Ioctls  Reduced  Changed  Added
fset 999  1361      1        120     0       2        253      0
total     1361      1        120     0       2        253      0

Directory Fragmentation Report
        Dirs      Total   Immed  Immeds  Dirs to  Blocks to
        Searched  Blocks  Dirs   to Add  Reduce   Reduce
total   1365      1504    4      0       1        2760


The columns in the Directory Reorganization Statistics reports contain the following information:

Dirs Searched   Number of directories searched. (Only directories with data extents are reorganized. Immediate directories are skipped.)
Dirs Changed    Number of directories for which a change was made.
Total Ioctls    Total number of VX_DIRSORT ioctls performed. (Reorganization of directory extents is performed using this ioctl.)
Failed Ioctls   Number of requests that failed. (The reason for failure is usually that the directory being reorganized is active. A few failures are typical in most file systems. If the -v option is used, all ioctl calls and status returns are recorded.)
Blocks Reduced  Total number of directory blocks freed by compressing entries.
Blocks Changed  Total number of directory blocks updated while sorting and compressing entries.
Immeds Added    Total number of directories with data extents that were compressed into immediate directories.


Scheduling Defragmentation
The frequency of defragmentation depends on usage, activity patterns, and importance of performance. Run defragmentation on demand or as a cron job, when the file system is relatively idle:
- Daily or weekly for frequently used file systems
- Monthly for infrequently used file systems
Adjust defragmentation intervals based on before and after reports.
To run directory and extent reorganization from within the VEA GUI, highlight a file system and select Actions>Defrag File System.

Scheduling Defragmentation
The best way to ensure that fragmentation does not become a problem is to defragment the file system on a regular basis. The frequency of defragmentation depends on file system usage, activity patterns, and the importance of file system performance. In general, follow these guidelines:
- Schedule defragmentation during a time when the file system is relatively idle.
- For frequently used file systems, schedule defragmentation daily or weekly.
- For infrequently used file systems, schedule defragmentation at least monthly.
- Full file systems tend to fragment and are difficult to defragment. You should consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select what you think is an appropriate interval for running extent reorganization and run the fragmentation reports both before and after the reorganization. If the degree of fragmentation is approaching the bad fragmentation figures, then the interval between fsadm runs should be reduced. If the degree of fragmentation is low, then the interval between fsadm runs can be increased. You should schedule directory reorganization for file systems when the extent reorganization is scheduled.
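For an on-demand run that must fit a known idle window, the -t option caps the run time. This sketch, assuming the /home file system used elsewhere in this lesson, limits extent reorganization to one hour:

# fsadm -e -E -s -t 3600 /home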


Scheduling Defragmentation as a cron Job
The fsadm utility can run on demand and can be scheduled regularly as a cron job. The following is a sample script that is run periodically at 3:00 a.m. from cron for a number of file systems:

outfile=/usr/spool/fsadm/out.`/bin/date +%m%d`
for i in /home /home2 /project /db
do
    /bin/echo "Reorganizing $i"
    /bin/timex /opt/VRTSvxfs/sbin/fsadm -e -E -s -t 3600 $i
    /bin/timex /opt/VRTSvxfs/sbin/fsadm -s -d -D -t 3600 $i
done > $outfile 2>&1
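Assuming the script above is saved as an executable file such as /usr/local/bin/fsadm_reorg.sh (a hypothetical path), the matching root crontab entry to start it at 3:00 a.m. every day would look like this sketch:

0 3 * * * /usr/local/bin/fsadm_reorg.sh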

Note: The -t option is used to specify a maximum length of time for the fsadm utility to run (in seconds).
Defragmenting a File System: VEA
You can defragment a file system from within the VERITAS Enterprise Administrator (VEA) graphical user interface. To defragment a file system:
1 Highlight the file system to be defragmented, and select Actions>Defrag File System.
2 When prompted, click Yes to confirm that you want to defragment the file system.
Directory and extent reorganization is performed. You can view the underlying command in the Task Properties window.

The defragmentation process can take some time. You receive an alert when the process is complete.


Summary
You should now be able to:
- Describe VxFS extent-based allocation.
- List features of the four VxFS file system layout versions.
- Upgrade the file system layout by using vxupgrade.
- Identify structural components of VxFS.
- Convert a UFS file system to VxFS.
- Define two types of fragmentation.
- Run fragmentation reports by using fsadm.
- Defragment a file system by using fsadm.

Summary
This lesson described the online defragmentation utilities available with VERITAS File System (VxFS). This lesson also provided an overview of the VxFS file system layout versions and file system structural components, which are designed to help minimize fragmentation.
Next Steps
While extent management and defragmentation enhance file system performance, the VxFS intent log helps to preserve file system integrity. The next lesson describes the role of the intent log in a VERITAS file system.
Additional Resource
VERITAS File System System Administrator's Guide
This guide describes VERITAS File System concepts, how to use various utilities, and how to perform backup procedures.


Lab 11
Lab 11: Defragmenting a File System
In this lab, you practice converting a UFS file system to VxFS, and you monitor and defragment a file system by using the fsadm command.
Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 11: Defragmenting a File System


Goal
In this lab, you practice converting a UFS file system to VxFS, and you monitor and defragment a file system by using the fsadm command.
To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Lesson 12: Intent Logging


Introduction
Overview
VERITAS File System (VxFS) uses a feature called intent logging to ensure file system integrity in the event of system failure. This lesson describes the role of the intent log in maintaining file system consistency. Guidelines for selecting an intent log size and for controlling the behavior of the intent log through mount options are also covered.
Importance
With traditional UNIX file systems, recovery of a file system in the event of system failure requires a lengthy examination of all file system metadata structures. Through the intent logging feature, recovery of a VERITAS file system can be performed in a fraction of the time.
Outline of Topics
- Role of the Intent Log
- Maintaining File System Consistency
- Selecting an Intent Log Size
- Controlling Logging Behavior
- Improving Performance Through Logging Options


Objectives
After completing this lesson, you will be able to:
- Describe the role of the intent log in VxFS.
- Maintain file system consistency by using fsck.
- Identify guidelines for selecting intent log size.
- Control logging behavior by using mount options.
- Identify guidelines for selecting logging options.


Objectives
After completing this lesson, you will be able to:
- Describe the role of the intent log in a VERITAS file system.
- Maintain file system consistency by using the fsck command.
- Identify guidelines for selecting an intent log size to maximize file system performance.
- Control logging behavior by using mount command options.
- Identify guidelines for selecting mount options for logging to maximize file system performance.


Intent Log
(Diagram: a disk containing the intent log, allocation units, and structural files, holding both data and metadata.)
1 The intent log records pending file system changes before metadata is changed.
2 After the intent log is written, other file system updates are made.
Role of the Intent Log


What Is Intent Logging?
VERITAS File System provides fast file system recovery after a system failure by using a tracking feature called intent logging, or journaling. Intent logging is the process by which intended changes to file system metadata are written to a log before changes are made to the file system structure. The intent log records pending changes to the file system structure and writes the log records to disk in advance of the changes to the file system. Once the intent log has been written, the other updates to the file system can be written in any order. In the event of a system failure, the VxFS fsck utility replays the intent log to nullify or complete file system operations that were active when the system failed.
Traditional File System Recovery
A file system may be left in an inconsistent state after a system failure. Recovery of structural consistency requires examination of file system metadata structures. Traditionally, the length of time taken for recovery was proportional to the size of the file system. Traditional UNIX file systems use the fsck utility for file system recovery. For large disk configurations, running fsck is a time-consuming process that checks the entire file system structure, verifies that all structures are intact, and corrects any inconsistencies.


Intent Log Replay
(Diagram: after a crash, VxFS fsck replays the intent log against the disk.)
If the system crashes, the intent log is replayed by VxFS fsck. Full structural recovery is not necessary.

VxFS Intent Log Replay
The VxFS version of the fsck utility performs an intent log replay to recover a file system without completing a full structural check of the entire file system. The time required for log replay is proportional to the log size, not the file system size. Therefore, the file system can be recovered and mounted only seconds after a system failure. The intent log recovery feature is not readily apparent to the user or the system administrator, and the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file system structure if the disk suffers a hardware failure. Such situations may require a complete system check using the VxFS fsck utility.
The importance of logging has been increasingly recognized in other robust file systems. UFS for Solaris 2.6 and earlier releases does not support logging without Solstice Disk Suite (SDS). Starting with Solaris 7, UFS also supports metadata logging.


Intent Log Contents
The intent log:
- Is a circular activity log with a default size of 1024 blocks
- Records changes to file system structure, not changes to file data
- Contains encoded information about data structures that need to be updated, such as:
  - Free extent map updates
  - Directory block updates
  - Inode modifications for directory and file changes
  - Inode map updates


What Does the Intent Log Contain?
The VxFS intent log records changes to the file system structure. File data changes are not normally logged. Log records are encoded for compactness. An update to the file system structure, or a transaction, is divided into separate subfunctions for each data structure that needs to be updated. For example, the creation of a file that expands the directory in which the file is contained produces a transaction consisting of the following subfunctions:
- A free extent map update for the allocation of the new directory block
- A directory block update
- An inode modification for the directory size change
- An inode modification for the new file
- A free inode map update for the allocation of the new file
Preventing Intent Log Changes from Being Overwritten
The intent log is a circular activity log with a default size of 1024 blocks. To prevent the intent log from wrapping and transactions from being overwritten, VxFS uses the extended inode operations map to keep track of inodes on which operations remain pending for too long to reside in the intent log. This map is updated to identify inodes that have extended operations to be completed. This map allows the fsck utility to quickly identify which inodes had extended operations pending at the time of a system failure. The length of the extended inode operations map is 2K for file systems with 1K or 2K block sizes and is equal to the block size for file systems with larger block sizes.

Maintaining Consistency
By default, VxFS fsck replays the intent log, rather than performing a full structural recovery.
fsck [-F vxfs] [options] [-o options] special
Generic options include:
-m    Check, but do not repair, the file system.
-n|N  Respond no to all prompts.
-y|Y  Respond yes to all prompts.
VxFS-specific options include:
-o full   Perform a full file system check.
-o nolog  Do not replay the log.
-o p      Perform parallel log replay.

Maintaining File System Consistency


The fsck Command
You use the VxFS-specific version of the fsck command to check the consistency of and repair a VERITAS file system. Because VERITAS file systems record pending file system updates in an intent log, the fsck utility replays the intent log by default instead of performing a full structural file system check. Using the intent log is usually sufficient to set the file system state to CLEAN. You can also use the fsck utility to perform a full structural recovery in the unlikely event that the log is unusable. The syntax for the fsck command is:
fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] [-o full,nolog] [-o p] special

In the syntax, you specify the command and the file system type. You can add generic options supported by the generic fsck command, and you can add VxFS-specific options.
special specifies one or more special character devices, for example, /dev/rdsk/c1t0d0s5. If multiple devices are specified, each device is checked sequentially unless the -o p option is also specified, in which case the devices are checked in parallel.


Generic Options
For a complete list of generic options, see the fsck(1m) manual page. Some of the generic options include:

-m    Checks, but does not repair, a file system before mounting.
-n|N  Assumes a response of no to all prompts by fsck. (This option does not open the file system for writing, does not replay the intent log, and performs a full file system check.)
-V    Echoes the expanded command line but does not execute the command.
-y|Y  Assumes a response of yes to all prompts by fsck. (If the file system requires a full file system check after the log replay, or if the nolog suboption causes the log replay to be skipped and the file system is not clean, then a full file system check is performed.)

VxFS-Specific Options
You specify VxFS-specific options using -o. You can use any combination of the suboptions in a comma-separated list. VxFS-specific options for use with the fsck command include:

-o full   Performs a full file system check. By default, VxFS performs an intent log replay only. You use the -o full option to perform a full file system check. If the file system detects damage or the log replay operation detects damage, an indication that a complete check is required is placed in the super-block.
-o nolog  Does not perform log replay. You can use this option if the log area becomes physically damaged. Note: This option is supported in Solaris 8, update 2 and later.
-o p      Allows parallel log replay for several VxFS file systems. Each message from fsck is prefixed with the device name to identify the device. This suboption does not perform a full file system check in parallel; that is still done sequentially on each device, even when multiple devices are specified. This option is compatible only with the -y|Y option (that is, a noninteractive full file system check), in which case a log replay is done in parallel on all specified devices. A sequential full file system check is performed on devices where needed. The number of devices that can be checked in parallel is determined by the amount of physical memory in the system. One instance of fsck on a single device can consume up to a maximum of 32 megabytes of memory.


VxFS fsck Examples
To check file system consistency by using the intent log for the VxFS file system on the volume datavol:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
To perform a full check without using the intent log:
# fsck -F vxfs -o full,nolog /dev/vx/rdsk/datadg/datavol
To check two file systems in parallel using the intent log:
# fsck -F vxfs -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5
To perform a file system check using the VEA GUI, highlight an unmounted file system, and select Actions>File System>Check File System.

VxFS fsck Example: Using the Intent Log To check file system consistency by using the intent log for the VERITAS file system on the volume datavol, you type:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol

VxFS fsck Example: Without Using the Intent Log To perform a full file system check without using the intent log for the VERITAS file system on the volume datavol, you type:
# fsck -F vxfs -o full,nolog /dev/vx/rdsk/datadg/datavol

VxFS fsck Example: Parallel Log Replay To check two file systems in parallel using the intent log:
# fsck -F vxfs -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5

Checking Consistency Using VEA
You can also check file system consistency and repair a file system, if needed, by using VERITAS Enterprise Administrator (VEA):
1 Select the volume containing the file system to be checked. The file system must not be mounted.
2 Select Actions>File System>Check File System.
3 Click Yes in the Check File System dialog box to begin the file system check.


VxFS fsck Output
Output of fsck:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
log replay in progress
replay complete - marking super-block as CLEAN

If the file system is already clean:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
file system is clean - log replay is not required

Other messages and errors appear in standard output.

Output of the fsck Command
In most cases, fsck prints the message:
log replay in progress
replay complete - marking super-block as CLEAN
If the file system is already clean, fsck prints the message:
file system is clean - log replay is not required
If fsck prints any other messages, a full structural check is needed. All error messages that relate to the contents of a file system produced during a log replay are displayed in the standard output. If a full check is performed, errors are displayed in the standard output.

Notes on Running fsck
- If a structural flaw is detected during the intent log replay, the full fsck flag is set on the file system without operator interaction.
- Large files (over two gigabytes) are supported on Solaris 2.6 systems and above. If fsck encounters a large file on an older OS version, it stops without completing the file system check.
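To check whether a file system needs repair at all before mounting it, the generic -m option described above can be combined with the VxFS type. This sketch reuses the datavol device from the examples in this lesson:

# fsck -F vxfs -m /dev/vx/rdsk/datadg/datavol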


Intent Log Size
- Intent log size is selected at the creation of a file system and cannot be changed.
- Default log size is 16384 blocks.
- To select a different size, use the mkfs option: -o logsize=n
- Larger log sizes may improve performance for intensive synchronous writes, but may increase:
  - Recovery time
  - Memory requirements
  - Log maintenance time

Selecting an Intent Log Size


Default Intent Log Size
The intent log size is chosen when a file system is created and cannot be subsequently changed. By default, the mkfs utility uses a default intent log size of 16384 blocks, which is sufficient for most workloads. If the file system is smaller than 512 MB, the default log size is reduced to avoid wasting space.
Guidelines for Selecting an Intent Log Size
You can specify the size of the intent log when you create the file system by using the -o logsize option of the VxFS mkfs command, as sketched below. If you want to change the size of the intent log from the default size, follow these guidelines:
- Before selecting a new intent log size, test representative system loads against various sizes to determine which intent log size results in the fastest file system performance.
- Larger log sizes may improve performance for intensive synchronous write workloads or if the file system is used as an NFS server.
- Larger log sizes increase recovery time.
- Memory requirements for log maintenance increase as the log size increases. The log size should never be more than 50 percent of the physical memory size of the system.
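As a minimal sketch of setting the log size at creation time, assuming the datadg/datavol volume used elsewhere in this guide and an illustrative value of 8192 blocks (not a recommendation):

# mkfs -F vxfs -o logsize=8192 /dev/vx/rdsk/datadg/datavol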


Logging mount Options
mount -F vxfs [-o specific_options] ...
-o log          All structural changes logged (maximum integrity)
-o delaylog     Default; some logging delayed; improves performance
-o tmplog       Most logging delayed; great performance improvement, but changes could be lost (maximum performance)
-o blkclear     All storage initialized; provides increased security; slower than standard file system
-o nodatainlog  Only use with disks that do not support bad block revectoring.

Controlling Logging Behavior


Selecting mount Options for Logging
VERITAS File System provides VxFS-specific logging options that you can use when mounting a file system to alter default logging behavior. By default, when you mount a VERITAS file system, the -o delaylog option is used with the mount command. With this option, some system calls return before the intent log is written. This logging delay improves the performance of the system, and this mode approximates traditional UNIX guarantees for correctness in case of system failures. You can specify other mount options to change logging behavior to further improve performance at the expense of reliability.
Logging mount Options
You can add VxFS-specific mount options to the standard mount command using -o in the syntax:
mount [-F vxfs] [generic_options] [-o specific_options] special mount_point

The logging mount options include:
-o log
-o delaylog
-o tmplog
-o nodatainlog
-o blkclear


-o log
This option guarantees that all structural changes to the file system have been logged on disk when the system call returns. If a system failure occurs, fsck replays recent changes so that they are not lost.
-o delaylog
This is the default option that does not need to be specified. When you use this option, some system calls return before the intent log is written, and the logging delay improves the performance of the system. However, some changes are not guaranteed until a short time after the system call returns, when the intent log is written. If a system failure occurs, recent changes may be lost. This mode approximates traditional UNIX guarantees for correctness in case of system failures.
-o tmplog
With the tmplog option, intent logging is almost always delayed. This option greatly improves performance, but recent changes may disappear if the system crashes. This mode is only recommended for temporary file systems. On most UNIX systems, temporary file system directories (such as /tmp and /usr/tmp) often hold files that do not need to be retained when the system reboots. The underlying file system does not need to maintain a high degree of structural integrity for these temporary directories.
-o nodatainlog
The nodatainlog mode should be used on systems with disks that do not support bad block revectoring. Normally, a VxFS file system uses the intent log for synchronous writes. The inode update and the data are both logged in the transaction, so a synchronous write only requires one disk write instead of two. When the synchronous write returns to the application, the file system has told the application that the data is already written. If a disk error causes the data update to fail, then the file must be marked bad, and the entire file is lost. If a disk supports bad block revectoring, then a failure on the data update is unlikely, so logging synchronous writes should be allowed. If the disk does not support bad block revectoring, then a failure is more likely, so the nodatainlog mode should be used.
-o blkclear
The blkclear option is used in increased data security environments. This option guarantees that all storage is initialized before being allocated to files. The increased integrity is provided by clearing extents on disk when they are allocated within a file. Extending writes are not affected by this mode. A blkclear mode file system should be approximately ten percent slower than a standard mode VxFS file system, depending on the workload.
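As a sketch of selecting a mode explicitly, the following mounts a scratch file system with tmplog; the volume name tmpvol and the mount point /scratch are hypothetical:

# mount -F vxfs -o tmplog /dev/vx/dsk/datadg/tmpvol /scratch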


Logging and Performance
Logging is essential to data reliability, but does incur performance overhead. To select the best logging mode for your environment:
- Understand the different logging options.
- Test sample loads using different options and compare performance results.
- Consider the type of operations performed as well as the workload.
- Use VERITAS QuickLog to enhance VxFS performance with logging. QuickLog exports the log to a separate physical volume.

Improving Performance Through Logging Options


Logging and VxFS Performance
In environments where data reliability and integrity are of the highest importance, logging is essential. However, logging does incur performance overhead. If maximum data reliability is less important than maximum performance, then you can experiment with logging mount options.
Guidelines for Selecting mount Options
When selecting mount options for logging to try to improve performance, follow these guidelines:
- Test representative system loads. The best way to select a logging mode is to test representative system loads against the logging modes and compare the performance results.
- Consider the type of operations and the workload. The degree of performance improvement depends on the operations being performed and the workload. File system structure-intensive loads (such as mkdir, create, and rename) may show over 100 percent improvement. Read/write intensive loads should show less improvement.


- Experiment with different logging modes. The delaylog and tmplog modes are capable of significantly improving performance. With delaylog, the improvement over log mode is typically about 15 to 20 percent. With tmplog, the improvement is even higher. A nodatainlog mode file system should be approximately 50 percent slower than a standard mode VxFS file system for synchronous writes. Other operations are not affected.
- Use VERITAS QuickLog. Use VERITAS QuickLog to enhance VxFS performance by exporting the file system log to a separate physical volume. This eliminates the disk seek time between the VxFS data and log areas on disk and increases the performance of synchronous log writes.


Selecting I/O Size for Logging
Performance of devices using read-modify-write improves if writes are performed in a particular size, or in a multiple of that size. When you mount a file system, you can specify the I/O size to be used for logging by using the logiosize option:
# mount -F vxfs -o logiosize=size special mnt_point
The size (in bytes) can be: 512, 1024, 2048, 4096, or 8192.

Specifying an I/O Size for Logging
The performance of some storage devices, such as those using read-modify-write features, improves if the writes are performed in a particular size, or in a multiple of that size. When you mount a file system, you can specify the I/O size to be used for logging by using the logiosize option to the mount command:
# mount -F vxfs -o logiosize=size special mount_point

You can specify a size (in bytes) of 512, 1024, 2048, 4096, or 8192. If you specify an I/O size for logging, VxFS writes the intent log in at least that size, or in a multiple of that size, to obtain maximum performance from devices that employ a read-modify-write feature.
Note: A read-modify-write operation is a RAID-5 algorithm used for short write operations, that is, write operations in which the number of data columns that must be written to is less than half the total number of data columns.
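For instance, on an array whose cache favors 4K writes, the log I/O size can be matched to the device. This sketch reuses the datadg/datavol volume and the /mnt1 mount point from earlier lessons:

# mount -F vxfs -o logiosize=4096 /dev/vx/dsk/datadg/datavol /mnt1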


Summary
You should now be able to:
- Describe the role of the intent log in VxFS.
- Maintain file system consistency by using fsck.
- Identify guidelines for selecting intent log size.
- Control logging behavior by using mount options.
- Identify guidelines for selecting logging options.

Summary
This lesson described the role of the intent log in maintaining file system consistency in a VxFS file system. Guidelines for selecting an intent log size and for controlling the behavior of the intent log through mount options were also covered.
Next Steps
This lesson described how the VERITAS File System intent log helps to ensure file system integrity in the event of system failure. The next lesson describes the VxVM architecture in more detail to illustrate the internal configuration components that help to protect against system failure.
Additional Resource
VERITAS File System System Administrator's Guide
This guide describes VERITAS File System concepts, how to use various utilities, and how to perform backup procedures.


Lab 12
Lab 12: Intent Logging
In this lab, you investigate the impact of different logging mount options and the impact of intent log size on file system performance.
Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 12: Intent Logging


Goal
In this lab, you investigate the impact of different logging mount options and the impact of intent log size on file system performance.
To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


Lesson 13: Architecture


Introduction
Overview
This lesson describes the VxVM architecture, special files, and other configuration components.
Importance
By understanding how VxVM components are related in the overall architecture of VxVM, you can more effectively manage VxVM in your environment.
Outline of Topics
- VxVM Component Design
- Monitoring the VxVM Configuration Database
- Controlling the Configuration Daemon
- Managing the volboot File


Objectives
After completing this lesson, you will be able to:
- Describe the components in the VxVM architecture.
- Interpret VxVM configuration database information.
- Control the VxVM configuration daemon.
- Manage the volboot file.

Lesson 13: Architecture


Copyright 2002 VERITAS Software Corporation. All rights reserved.

13-3

VxVM Architecture
(Diagram: user applications and configuration utilities (vxdiskadm, VEA, CLI) communicate with the vxconfigd and vxrelocd daemons; within the kernel, the VxVM configuration driver sits between the block (dsk) and character (rdsk) device switches, the file system, and the device drivers, and maintains a kernel log and the VxVM configuration database on disk.)

VxVM Component Design


VxVM is a device driver that is placed between the UNIX operating system and the SCSI device drivers. When VxVM is installed, UNIX invokes the VxVM device drivers instead of the SCSI device drivers. VxVM determines which SCSI drives are involved in the requested I/O and delivers the I/O request to those drives.

vxconfigd
When a system is booted, the command vxdctl enable is automatically executed to start the VxVM configuration daemon, vxconfigd. VxVM reads the /etc/vx/volboot file to determine disk ownership, and automatically imports rootdg and all other disk groups owned by this host.
vxconfigd reads the kernel log to determine the state of VxVM objects. vxconfigd reads the configuration database on the disks, then uses the kernel log to update the state information of the VxVM objects.
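To confirm that the configuration daemon is running and enabled, you can query its state; this is a sketch, and on a healthy system vxdctl typically reports an enabled mode:

# vxdctl mode
mode: enabled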

vxrelocd
vxrelocd is the hot-relocation daemon that monitors events that affect data redundancy. If redundancy failures are detected, vxrelocd automatically relocates affected data from mirrored or RAID-5 subdisks to spare disks or other free space within the disk group. vxrelocd also notifies the system administrator by e-mail of redundancy failures and relocation activities.


Volume Manager Disks
For disks on a Solaris system, VxVM uses the volume table of contents (VTOC) to determine the disk size and creates two regions on the disk:
- The private region stores VxVM information, such as disk headers, configuration copies, and kernel logs. The disk header contains the disk label, disk group information, host ID, and pointers to the private and public regions. You can display disk header information by using vxdisk list diskname. The configuration database contains VxVM object definitions. The size of the configuration database is approximately 70 percent of the private region. Kernel logs contain configuration changes, including information about log plex attachment, object creation, object deletion, object states, and flags.
- The public region consists of the unused space on the disk.
Types of VxVM Disks
There are three types of VxVM disks:
- A simple disk is a disk that is created dynamically in the kernel and has public and private regions that are contiguous inside a single partition.
- A sliced disk is a disk that has separate slices for the public and private regions.
- A NOPRIV disk is a disk that does not contain a private region.
VxVM uses the VTOC to determine where the private and public regions are located on the disk. You can use the prtvtoc utility to display the partition tags for the private and public regions of the disk, as sketched below. The VTOC is located on the first sector of the disk. The private region always has a sector offset of 1 to protect the VTOC data.
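As a minimal sketch, assuming the sliced disk c1t2d0 shown in the vxdisk example later in this lesson, the partition layout and tags can be inspected with the standard Solaris prtvtoc utility (on sliced VxVM disks, the public and private regions typically carry VTOC partition tags 14 and 15, respectively):

# prtvtoc /dev/rdsk/c1t2d0s2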


VxVM Configuration Database
The VxVM configuration database:
- Contains all disk, volume, plex, and subdisk configuration records
- Is stored in the private region of a VxVM disk
- Is replicated to maintain a copy on multiple disks in a disk group
  - VxVM stores a minimum of four copies per disk group.
  - With VxVM 3.2 and later, copies are stored across enclosures to maximize redundancy.
- Is updated by the vxconfigd process

Monitoring the VxVM Configuration Database


VxVM Configuration Database
The VxVM configuration database stores all disk, volume, plex, and subdisk configuration records. The vxconfig device (/dev/vx/config) is the interface through which all changes to the volume driver state are performed. This device can only be opened by one process at a time, and the initial volume configuration is downloaded into the kernel through this device. The configuration database is stored in the private region of a VxVM disk. The VxVM configuration is replicated within the disk group so that sufficient copies exist to protect against loss of the configuration in case of physical disk failure. VxVM attempts to store at least five copies for each disk group. If there are multiple controllers represented by disks in a disk group, VxVM attempts to store two copies per controller by default. With VxVM 3.2 and later, VxVM configuration copies are placed across the enclosures spanned by a disk group to ensure maximum redundancy across enclosures. The configuration daemon, vxconfigd, is the process that updates the configuration through the vxconfig device. The vxconfigd daemon was designed to be the sole and exclusive owner of this device.


Displaying Disk Group Data
# vxdg list acctdg
Group:     acctdg
dgid:      1023996467.1130.train5
import-id: 0.1129
...
copies:    nconfig=default nlog=default
config:    seqno=0.1056 permlen=5083 free=5074 templen=4 loglen=770
config disk c1t0d0s2 copy 1 len=5083 state=clean online
config disk c1t1d0s2 copy 1 len=5083 state=clean online
config disk c1t2d0s2 copy 1 len=5083 state=clean online
config disk c1t3d0s2 copy 1 len=5083 disabled
config disk c1t9d0s2 copy 1 len=5083 disabled
config disk c1t10d0s2 copy 1 len=5083 state=clean online
config disk c1t11d0s2 copy 1 len=5083 state=clean online
log disk c1t0d0s2 copy 1 len=770
log disk c1t1d0s2 copy 1 len=770
...
(Callouts in the original slide: the config: line shows the configuration database size; disks marked online hold active configuration databases, and disks marked disabled are not active.)

Displaying Disk Group Configuration Data
To display the status of the configuration database for a disk group:
vxdg list diskgroup

If no diskgroup argument is specified, then information from all disk groups is displayed in an abbreviated format. By specifying a disk group, a longer format is used to display the status of the disk group and its configuration. For example, to display the configuration of the disk group acctdg, you type:
# vxdg list acctdg

In the output, there are five disks that have configuration databases that are active (online), and there are two disks that do not have an active copy of the data (disabled). The size of the configuration database for a disk group is the size of the smallest private region in the disk group. Log entries are on all disks that have databases. The log is used by the VxVM kernel to keep the state of the drives accurate, in case the database cannot be kept accurate (for example, if the configuration daemon is stopped).


Command Output
# vxdg list acctdg
Group:     acctdg
dgid:      1023996467.1130.train5
import-id: 0.1129
flags:
version:   90
detach-policy: global
copies:    nconfig=default nlog=default
config:    seqno=0.1056 permlen=5083 free=5074 templen=4 loglen=770
config disk c1t0d0s2 copy 1 len=5083 state=clean online
config disk c1t1d0s2 copy 1 len=5083 state=clean online
config disk c1t2d0s2 copy 1 len=5083 state=clean online
config disk c1t3d0s2 copy 1 len=5083 disabled
config disk c1t9d0s2 copy 1 len=5083 disabled
config disk c1t10d0s2 copy 1 len=5083 state=clean online
config disk c1t11d0s2 copy 1 len=5083 state=clean online
log disk c1t0d0s2 copy 1 len=770
log disk c1t1d0s2 copy 1 len=770
log disk c1t2d0s2 copy 1 len=770
log disk c1t3d0s2 copy 1 len=770 disabled
log disk c1t9d0s2 copy 1 len=770 disabled
log disk c1t10d0s2 copy 1 len=770
log disk c1t11d0s2 copy 1 len=770

Configuration Database Quotas
By default, for each disk group, VxVM maintains a minimum of five active database copies when the disks are on a single controller. In most cases, VxVM also attempts to alternate active copies with inactive copies. In the example, c1t3d0 and c1t9d0 are disabled. If different controllers are represented on the disks in the same disk group, VxVM maintains a minimum of two active copies per controller.



Displaying Disk Configuration Data

To list detailed information from the configuration database about specific disks:
vxdisk -g diskgroup list disk_name

The disk_name can be the disk media name (for example, newdg01) or the device tag (for example, c1t2d0). For example, to list detailed information about the disk newdg01 in the disk group newdg:
# vxdisk -g newdg list newdg01
Device:    c1t2d0s2
devicetag: c1t2d0
type:      sliced
hostid:    cassius
disk:      name=newdg01 id=1023905680.1116.cassius
group:     name=newdg id=1023905686.1123.cassius
info:      privoffset=1
flags:     online ready private autoconfig autoimport imported
pubpaths:  block=/dev/vx/dmp/c1t2d0s4 char=/dev/vx/rdmp/c1t2d0s4
privpaths: block=/dev/vx/dmp/c1t2d0s3 char=/dev/vx/rdmp/c1t2d0s3
version:   2.2
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=4 offset=0 len=17671311
private:   slice=3 offset=1 len=6925



Continuation of vxdisk list Output

update:  time=1023905687 seqno=0.5
headers: 0 248
configs: count=1 len=5083
logs:    count=1 len=770
Defined regions:
 config  priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config  priv 000249-005100[004852]: copy=01 offset=000231 enabled
 log     priv 005101-005870[000770]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:  2
c1t0d0s2   state=enabled  type=primary
c1t2d0s2   state=disabled type=secondary


The terms displayed in the output are defined in the following table:
Term            Description
Device          Full UNIX device name of the disk
devicetag       Device name used by VxVM to reference the physical disk
type            Method of placing the disk under VxVM control; the default is sliced
hostid          Name of the system that manages the disk group; if blank, no host is
                currently controlling this group
disk name       VM disk media name and internal ID
group name      Disk group name and internal ID
flags           Settings that describe status and options for the disk
pubpaths        Paths for the block and character device files of the public region
privpaths       Paths for the block and character device files of the private region
version         Version number of the header format
iosize          The size of I/O to the private region
public, private Partition (slice) number, offset from the beginning of the partition,
slices          and length of the partition (public offset=0, private offset=1)
update time     Date, time, and sequence number of the last update to the private region
headers         Offsets to the two copies of the private region header
configs count   Number of configuration database copies kept in the private region
logs count      Number of kernel logs kept in the private region
Defined regions Location of the configuration databases and kernel logs in the private
                region. Because the database or logs can be split, there can be multiple
                pieces; the offset is the starting location within the private region
                where a piece begins, and copy indicates which copy of the database the
                piece belongs to. Note: There are multiple pieces of the configuration
                database because one is read-only and the other is read-write. When you
                perform tasks in VxVM, only the data in the read-write piece changes;
                the other piece contains the control files for the configuration database.
Multipathing    If dynamic multipathing is enabled and there are multiple paths to the
information     disk, this item shows information about the paths and their status.


VxVM Configuration Daemon


vxconfigd:
  Maintains the configuration database.
  Synchronizes changes between multiple requests, based on a database transaction model:
    All utilities make changes through vxconfigd.
    Utilities identify resources needed at the start of a transaction.
    Transactions are serialized, as needed.
    Changes are reflected in all copies immediately.
  Does not interfere with access to data on disk.
  Must be running for changes to be made to the configuration database.

If vxconfigd is not running, VxVM operates, but configuration changes are not allowed.

Controlling the Configuration Daemon


VxVM Configuration Daemon: vxconfigd

The VxVM configuration daemon, vxconfigd, maintains VxVM disk and disk group configurations. vxconfigd communicates configuration changes to the kernel and modifies configuration information stored on disk.

How Does vxconfigd Work?

The VxVM configuration daemon must be running in order for configuration changes to be made to the VxVM configuration database. If vxconfigd is not running, VxVM operates properly, but configuration changes are not allowed.
vxconfigd reads the kernel log to determine current states of VxVM components and updates the configuration database. Kernel logs are updated even if vxconfigd is not running. For example, upon startup, vxconfigd reads the kernel log and determines that a volume needs to be resynchronized.

The vxconfigd daemon synchronizes multiple requests and incorporates configuration changes based on a database transaction model. All utilities make changes through vxconfigd. Utilities must identify all resources needed at the start of a transaction. Transactions are serialized, as needed. Changes are reflected immediately in all copies of the configuration database. The vxconfigd daemon does not interfere with user or operating system access to data on disk.
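To confirm that the daemon is running, you can check the process table. This is a quick check only; the vxdctl mode command, described later in this lesson, reports the daemon state in more detail. The listing below is illustrative sample output:

# ps -ef | grep vxconfigd
    root   153     1  0   Sep 20 ?        0:03 vxconfigd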


vxconfigd Modes
vxconfigd operates in one of three modes:

Enabled: Enabled is the normal operating mode, in which most configuration operations are allowed. Disk groups are imported, and VxVM begins to manage device nodes stored in /dev/vx/dsk and /dev/vx/rdsk.

Disabled: In the disabled mode, most operations are not allowed. vxconfigd does not retain configuration information for the imported disk groups and does not maintain the volume and plex device directories. Certain failures, most commonly the loss of all disks or configuration copies in the rootdg disk group, cause vxconfigd to enter the disabled state automatically.

Booted: The booted mode is part of normal system startup, prior to checking the root file system. The booted mode imports the rootdg disk group and waits for a request to enter the enabled mode. Volume device node directories are not maintained, because it may not be possible to write to the root file system.


The vxdctl Command


Use vxdctl to control vxconfigd.
To display vxconfigd status: # vxdctl mode
To enable vxconfigd:         # vxdctl enable
To disable vxconfigd:        # vxdctl disable
To stop vxconfigd:           # vxdctl stop
  (Note: # vxdctl -k stop sends a kill -9.)
To start vxconfigd:          # vxconfigd

The vxdctl Utility


vxconfigd is invoked by startup scripts during the boot procedure. To manage some aspects of vxconfigd, you can use the vxdctl utility.

Displaying vxconfigd Status

To determine whether the configuration daemon is enabled, you type:
# vxdctl mode
mode: enabled

This command displays the status of the configuration daemon. If the configuration daemon is not running, it must be started in order to make configuration changes. Disk failures are also configuration changes, but the kernel logs provide another way of tracking them while the daemon is down.

Enabling vxconfigd

If vxconfigd is running, but not enabled, the following message is displayed:
mode: disabled

To enable the configuration daemon, you type:


# vxdctl enable

This command forces the configuration daemon to read all the disk drives in the system and to set up its tables to reflect each known drive. When a drive fails and the administrator fixes the drive, this command enables VxVM to recognize the drive.

Starting vxconfigd

If vxconfigd is not running, the following message is displayed:


mode: not-running

To start vxconfigd, you type:


# vxconfigd

Once started, vxconfigd automatically becomes a background process. By default, vxconfigd issues errors to the console; however, vxconfigd can be configured to issue errors to a log file instead.

Stopping vxconfigd

You should rarely need to stop the daemon, but if it is necessary, you can use the command:
# vxdctl stop

To send a kill -9 to vxconfigd:


# vxdctl -k stop

Disabling vxconfigd

To prevent configuration changes from occurring, but to allow administrative commands to be used, you can disable the daemon:
# vxdctl disable



Checking Licensing Information

To display the list of VxVM features that are currently available based on known licensing information:
# vxdctl license [init]

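On a licensed system, the command lists the currently available features. Feature names vary with the installed licenses, so the following output is only illustrative:

# vxdctl license
All features are available:
 Mirroring
 Concatenation
 ...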
By adding the init argument, you can request that vxconfigd reread any persistently stored license information. If licenses have expired, some features may become unavailable. If new licenses have been added, vxconfigd rescans the licenses, and the features defined in those licenses become available.

Displaying Supported VxVM Object Versions

To display information about versions of VxVM objects and components that are supported by the currently running configuration daemon:
# vxdctl support
Support information:
  vxconfigd_vrsn:   15
  dg_minimum:       10
  dg_maximum:       90
  kernel:           12
  protocol_minimum: 30
  protocol_maximum: 40
  protocol_current: 0


In the output:
vxconfigd_vrsn is the version of vxconfigd that is currently running.
dg_minimum is the lowest disk group version supported by vxconfigd.
dg_maximum is the highest disk group version supported by vxconfigd.
kernel is the highest kernel version supported by vxconfigd.
protocol_minimum is the lowest cluster protocol version supported by the node.
protocol_maximum is the highest cluster protocol version supported by the node.
protocol_current is the cluster protocol version currently running on the node.

Note: The protocol version information is only meaningful in a clustering environment.

For more information on the vxconfigd daemon, see the vxconfigd(1m) and vxdctl(1m) manual pages.


The volboot File


/etc/vx/volboot contains:
A host ID that is used by VxVM to establish ownership of physical disks
A list of disks to scan in search of the rootdg disk group

To display the contents of volboot: # vxdctl list
To change the host ID in volboot:   # vxdctl hostid hostname, then # vxdctl enable
To re-create volboot:               # vxdctl init hostname

Caution: Do not edit volboot, or its checksum is invalidated.

Managing the volboot File


The volboot File

The /etc/vx/volboot file contains a host ID that is used by VxVM to establish ownership of physical disks. This host ID ensures that two or more hosts that can access disks on a shared SCSI bus do not interfere with each other in their use of those disks. The host ID is also important in the generation of unique ID strings that are used internally for stamping disks and disk groups. The volboot file also contains a list of disks to scan in search of the rootdg disk group. At least one disk in this list must be both readable and a part of the rootdg disk group, or VxVM is unable to start up correctly.

Caution: Never edit the volboot file manually. If you do so, its checksum is invalidated.

Viewing the Contents of volboot

To view the decoded contents of the volboot file:
# vxdctl list
volboot file
version: 3/1
seqno:   0.5
cluster protocol version: 40
hostid:  train1
...


Changing the Host ID

If you change your host name in UNIX, you need to change the host ID in the volboot file. To change the host ID in the volboot file and on all disks in disk groups currently imported on the machine:
# vxdctl hostid hostname
# vxdctl enable

Note: If some disks are inaccessible at the time of a hostid operation, it may be necessary to use the vxdisk clearimport operation to clear out the old host ID on those disks when they become accessible again. Otherwise, you may not be able to re-add those disks to their disk groups.

Caution: Be careful when using this command. If the system crashes before the hostid operation completes, some disk groups may not reimport automatically.

Re-Creating the volboot File

To re-create the volboot file because it was removed or invalidated:
# vxdctl init [hostname]

If a hostname operand is specified, then this string is used; otherwise, a default host ID is used. The default host ID is the network node name for the host. On systems with a hardware-defined system ID, the default host ID might be derived from this hardware ID.
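For example, to re-create the volboot file using the host name train1 (the host shown in the earlier vxdctl list output), you type:

# vxdctl init train1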


Summary
You should now be able to:
Describe the components in the VxVM architecture.
Interpret VxVM configuration database information.
Control the VxVM configuration daemon.
Manage the volboot file.

Summary
This lesson described the VxVM architecture, special files, and other configuration components.

Next Steps

The next lesson introduces basic recovery operations.

Additional Resources

VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
VERITAS Volume Manager User's Guide: VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.



Lab 13: Architecture


Goal

In this lab, you explore the components of the VxVM architecture.

To Begin This Lab

To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


14

Introduction to Recovery


Introduction
Overview

This lesson introduces basic recovery concepts and techniques. This lesson describes how data consistency is maintained after a system crash and how hot relocation restores redundancy to failed VxVM objects. This lesson also describes how to manage spare disks, replace a failed disk, and recover a volume.

Importance

VxVM protects systems from disk failures and helps you to recover from disk failures. You can use the techniques discussed in this lesson to recover from a variety of disk- and volume-related problems that may occur.

Outline of Topics

Maintaining Data Consistency
Hot Relocation
Managing Spare Disks
Replacing a Disk
Unrelocating a Disk
Recovering a Volume
Protecting the VxVM Configuration



Objectives

After completing this lesson, you will be able to:
Describe how VxVM maintains data consistency after a system crash.
Describe the hot-relocation process.
Manage spare disks.
Replace a failed disk.
Return relocated subdisks back to their original disk.
Recover a volume.
Describe tasks used to protect the VxVM configuration.


Resynchronization
Resynchronization is the process of ensuring that, after a system crash:
All mirrors in a volume contain exactly the same data.
Data and parity in RAID-5 volumes agree.

Types of mirror resynchronization:
Atomic-copy resynchronization
Read-writeback resynchronization

Maintaining Data Consistency


What Is Resynchronization?

Resynchronization is the process of ensuring that, after a system crash:
All mirrors in mirrored volumes contain exactly the same data.
Data and parity in RAID-5 volumes agree.

Data is written to the mirrors of a volume in parallel. If a system crash occurs before all the individual writes complete, some writes may complete while others do not. This can cause two reads from the same region of the volume to return different results if different mirrors are used to satisfy the read request. In the case of RAID-5 volumes, it can lead to parity corruption and incorrect data reconstruction. VxVM uses volume resynchronization processes to ensure that all copies of the data match exactly.

VxVM records when a volume is first written to and marks it as dirty. When a volume is closed by all processes or stopped cleanly by the administrator, all writes have been completed, and Volume Manager removes the dirty flag for the volume. Only volumes that are marked dirty when the system reboots require resynchronization. Not all volumes require resynchronization after a system failure: volumes that were never written or that had no active I/O when the system failure occurred do not require resynchronization.
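You can observe this behavior from the command line. While a mirrored volume is resynchronizing after a crash, vxprint reports the SYNC volume state. The disk group and volume names below are hypothetical, and the output is abbreviated:

# vxprint -g datadg -v datavol
TY NAME     ASSOC  KSTATE  LENGTH   PLOFFS STATE  TUTIL0 PUTIL0
v  datavol  fsgen  ENABLED 2097152  -      SYNC   -      -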


Atomic-Copy Resynchronization
Atomic-copy resynchronization involves the sequential writing of all blocks of a volume to a plex. This type of resynchronization is used in:
Adding a new plex (mirror)
Reattaching a detached plex (mirror) to a volume
Online reconfiguration operations: moving a plex, copying a plex, creating a snapshot, and moving a subdisk

Resynchronization Processes

VxVM uses two basic types of resynchronization processes to maintain consistency of plexes in a volume:
Atomic-copy resynchronization
Read-writeback resynchronization

Atomic-Copy Resynchronization

Atomic-copy resynchronization refers to the sequential writing of all blocks of the volume to a plex. This operation is used anytime a new mirror is added to a volume, or an existing mirror is in stale mode and has to be resynchronized. Atomic-copy resynchronization is also needed for online reconfiguration operations, such as:
Moving a plex
Copying a plex
Creating a snapshot
Moving subdisks

Atomic-Copy Resynchronization Process
1 The plex being copied to is set to a write-only state.
2 A read thread is started on the whole volume. (Every block is read internally.)
3 Blocks are written from the good plex to the stale or new plex.
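For example, attaching a new mirror to an existing volume starts an atomic-copy resynchronization, which you can watch as a VxVM task. The disk group and volume names here are hypothetical; the -b option runs the attach in the background so that the prompt returns while the copy proceeds, and the vxtask output resembles the following:

# vxassist -b -g datadg mirror datavol
# vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
   162       ATCOPY/R 23.57% 0/2097152/494303 PLXATT datavol datavol-02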


Read-Writeback Resynchronization
Read-writeback resynchronization is used for volumes that were fully mirrored prior to a system failure. In this type of resynchronization:
Mirrors marked ACTIVE remain ACTIVE, and the volume is placed in the SYNC state.
An internal read thread is started.
Blocks are read from the plex specified in the read policy, and the data is written to the other plexes.
Upon completion, the SYNC flag is turned off.

Read-Writeback Resynchronization

Read-writeback resynchronization is used when two or more plexes have the same data, but there may have been outstanding writes to the volume when the system crashed. The application must ensure that any incomplete writes are repaired: a database typically does this by writing the original data back to disk, while a file system checks that all of its structures are intact (applications using the file system must do their own checking). The responsibility of VxVM is to guarantee that the mirrors contain the same data.

Read-Writeback Resynchronization Process
1 All plexes that were ACTIVE at the time of the crash have the volume's data, and each plex is set to the ACTIVE state again, but the volume is placed in the SYNC (or NEEDSYNC) state.
2 An internal read thread is started to read the entire volume. Blocks are read from whatever plex is selected by the read policy and are written back to the other plexes.
3 When the resynchronization process is complete, the SYNC flag is turned off (set to ACTIVE).

User-initiated reads are also written to the other plexes in the volume but otherwise have no effect on the internal read thread.


Impact of Resynchronization
Resynchronization takes time and impacts performance. To minimize this performance impact, VxVM provides the following solutions:
Dirty region logging for mirrored volumes
RAID-5 logging for RAID-5 volumes
FastResync for mirrored and snapshot volumes
SmartSync Recovery Accelerator for volumes used by database applications

Minimizing the Impact of Resynchronization

The process of resynchronization can impact system performance and can take time. To minimize the performance impact of resynchronization, VxVM offers the following solutions:
Dirty region logging for mirrored volumes
RAID-5 logging for RAID-5 volumes
FastResync for mirrored and snapshot volumes
SmartSync Recovery Accelerator for volumes used by database applications


Dirty Region Logging


For mirrored volumes with logging enabled, DRL speeds plex resynchronization. Only regions that are dirty need to be resynchronized after a crash. You can mirror logs for log redundancy. Do not put log subdisks on a heavily used disk. VxVM selects an appropriate log size based on volume size. For example:

Volume Size      Default Log Size
Less than 1 GB   16K
1 GB to 4 GB     33K
4 GB to 6 GB     49K
6 GB to 9 GB     82K
9 GB to 12 GB    99K

Dirty Region Logging

You were introduced to dirty region logging (DRL) when you created a volume with a log. This section describes how dirty region logging works.

How Does DRL Work?

DRL logically divides a volume into a set of consecutive regions and keeps track of the regions to which writes occur. A log is maintained that contains a status bit representing each region of the volume. For any write operation to the volume, the regions being written are marked dirty in the log before the data is written. If a write causes a log region to become dirty when it was previously clean, the log is synchronously written to disk before the write operation can occur. On system restart, VxVM recovers only those regions of the volume that are marked as dirty in the dirty region log.

Log subdisks store the dirty region log of a volume that has DRL enabled. Only one log subdisk can exist per plex. Multiple log subdisks can be used to mirror the dirty region log. If a plex contains a log subdisk and no data subdisks, it is called a log plex.

Only a limited number of bits can be marked dirty in the log at any time. The dirty bit for a region is not cleared immediately after writing the data to the region. Instead, it remains marked as dirty until the corresponding volume region becomes the least recently used.
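As a syntax reminder from the volume creation lessons, you can create a mirrored volume with a dirty region log by using vxassist; for a mirrored volume, the log attribute creates a DRL log plex by default. The disk group, volume name, and size here are hypothetical:

# vxassist -g datadg make datavol 1g layout=mirror,log nmirror=2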


Dirty Region Logging

(Diagram: the DRL bitmap for a volume, shown before and after a crash. Regions with writes in flight are marked dirty in the bitmap, and only those regions are resynchronized after the crash.)

Dirty Region Log Size

VxVM selects an appropriate dirty region log size based on the volume size. In the dirty region log:
A small number of bytes of the DRL is reserved for internal use.
The remaining bytes are used for the DRL bitmap.
The bytes are divided into two bitmaps: an active bitmap and a recovery bitmap.
Each bit in the active bitmap maps to a single region of the volume.
A maximum of 2048 dirty regions per system is allowed by default.

How the Bitmaps Are Used in Dirty Region Logging

Both bitmaps are zeroed when the volume is started initially, after a clean shutdown. As regions transition to dirty, the log is flushed before the writes to the volume occur. If the system crashes, the active map is ORed with the recovery map. Mirror resynchronization is then limited to the dirty bits in the recovery map. The active map is simultaneously reset, and normal volume I/O is permitted. Using two bitmaps in this fashion allows VxVM to handle multiple system crashes.


RAID-5 Logging
For RAID-5 volumes, logging prevents data corruption during recovery. RAID-5 logging records changes to data and parity on a persistent device before committing the changes to the RAID-5 array. Logs are associated with a RAID-5 volume by being attached as log plexes. You can mirror RAID-5 logs for redundancy.


RAID-5 Logging

Dirty region logging is used for mirrored volumes only. RAID-5 volumes use RAID-5 logs to keep a copy of the data and parity currently being written. You were introduced to RAID-5 logging when you created a volume with a log.

Without logging, data not involved in any active writes can be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail. If this double failure occurs, there is no way of knowing whether the data being written to the data portions of the disks, or the parity being written to the parity portions, was actually written. RAID-5 logging prevents corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device (such as a disk-resident volume or nonvolatile RAM). The new data and parity are then written to disk. Logs are associated with a RAID-5 volume by being attached as log plexes. More than one log plex can exist for each RAID-5 volume, in which case the log areas are mirrored.
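A minimal sketch of creating a RAID-5 volume with a RAID-5 log using vxassist follows; the disk group, volume name, and size are hypothetical, and the log attribute attaches a log plex to the volume:

# vxassist -g datadg make r5vol 2g layout=raid5,log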


SmartSync
SmartSync Recovery Accelerator increases the efficiency of resynchronizing mirrored databases:
Only changed data is resynchronized.
Oracle automatically uses SmartSync to perform database resilvering.

To take advantage of SmartSync, the way in which you configure a volume depends on the type of volume:
Data volumes: Configure as mirrored volumes without dirty region logs.
Redo log volumes: Configure as mirrored volumes with sequential dirty region logging.

SmartSync Recovery Accelerator

What Is SmartSync?

The SmartSync Recovery Accelerator feature of VERITAS Volume Manager increases the availability of mirrored databases on volumes by resynchronizing only changed data. The process of resynchronizing mirrored databases is also called resilvering. SmartSync works with the Oracle Universal Database. SmartSync reduces the time required to restore consistency, which frees I/O bandwidth for business-critical applications. The SmartSync feature uses an extended interface between VxVM volumes and the database software to avoid unnecessary work during mirror resynchronization. Oracle database software automatically recognizes and uses SmartSync to perform database resynchronization.

Volumes Used by Databases

Two types of VxVM volumes are typically used by databases:
Data volumes, which contain control files and tablespace files
Redo log volumes, which contain database redo logs

Redo log volumes have dirty region logs, while data volumes do not. Therefore, SmartSync works with these two types of volumes differently. You must configure each type of volume correctly to take full advantage of the extended interfaces.


Configuring Data Volumes

The recovery of a data volume occurs when the database software is started, not at system startup, which reduces the overall impact of recovery when a system reboots. Because recovery is controlled by the database, the recovery time for the volume is the resilvering time for the database, that is, the time required to replay the redo logs. Because the database keeps its own logs, it is not necessary for VxVM to perform logging. Therefore, you should configure data volumes as mirrored volumes without dirty region logs. This improves recovery time and improves normal database write access by avoiding the run-time I/O overhead associated with DRL.

Configuring Redo Log Volumes

A redo log is a log of changes to the database data. Because the database does not maintain changes to the redo logs, it cannot provide information about which sections require resilvering. Redo logs are also written sequentially, and because traditional dirty region logs are most useful with randomly written data, they are of minimal use for reducing recovery time for redo logs. However, VxVM can reduce the number of dirty regions by modifying the behavior of its dirty region logging feature to take advantage of sequential access patterns. Sequential DRL decreases the amount of data requiring recovery and reduces the recovery time impact on the system. The enhanced interfaces for redo logs enable the database software to inform VxVM when a volume is to be used as a redo log, which enables VxVM to modify the DRL behavior of the volume to take advantage of the access patterns. Because the improved recovery time depends on dirty region logs, you should configure redo log volumes as mirrored volumes with sequential DRL, as in the sketch that follows.
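In vxassist terms, the drlseq log type requests sequential DRL for a mirrored redo log volume. The disk group, volume name, and size here are hypothetical:

# vxassist -g oradg make redovol 512m layout=mirror,log logtype=drlseq nmirror=2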


Disk Failure
Permanent: Disk becomes corrupted and is unusable.
  Example: The VTOC is damaged.
  The disk must be logically and physically removed, then replaced with a new disk.

Temporary: Communication to the disk is interrupted.
  Example: Power is disrupted.
  The disk can be logically removed, then reattached as the replacement disk.

Hot Relocation
Disk Failure

Permanent disk failure: When a disk is corrupted and no longer usable, the disk must be logically and physically removed and then replaced with a new disk. With permanent disk failure, data on the disk is lost. Example: The VTOC is damaged.

Temporary disk failure: When communication to a disk is interrupted, but the disk is not damaged, the disk can be logically removed and then reattached as the replacement disk. With temporary (or intermittent) disk failure, data still exists on the disk. Example: Power is disrupted.

Impact of Disk Failure

VxVM is designed to protect your system from the impact of disk failure through a feature called hot relocation. The hot-relocation feature of VxVM automatically detects disk failures and restores redundancy to failed VxVM objects by moving subdisks from failed disks to other disks. When hot relocation is enabled, the system administrator is notified by e-mail about disk failures. You can also view disk failures in the output of the vxprint command or by using VEA to display the status of the disks, and you can see driver error messages on the console or in the system messages file.


What Is Hot Relocation?


Hot relocation: The system automatically reacts to I/O failures on redundant VxVM objects and restores redundancy to those objects by relocating affected subdisks. Subdisks are relocated to disks designated as spare disks or to free space in the disk group.

What Is Hot Relocation?

Hot relocation is a feature of VxVM that enables a system to automatically react to I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.

Partial Disk Failure

When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated. Existing volumes on the unaffected portions of the disk remain accessible. With partial disk failure, the disk is not removed from VxVM control and is labeled as FAILING, rather than as FAILED. Before removing a FAILING disk for replacement, you must evacuate any remaining volumes on the disk.

Note: Hot relocation is only performed for redundant (mirrored or RAID-5) subdisks on a failed disk. Nonredundant subdisks on a failed disk are not relocated, but the system administrator is notified of the failure.


Hot-Relocation Process
1. vxrelocd detects disk failure.
2. Administrator is notified by e-mail.
3. Subdisks are relocated to a spare.
4. Volume recovery is attempted.

How Does Hot Relocation Work?

The hot-relocation feature is enabled by default. No system administrator action is needed to start hot relocation when a failure occurs. The vxrelocd daemon starts during system startup and monitors VxVM for failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs, vxrelocd triggers a hot-relocation attempt and notifies the system administrator, through e-mail, of failures and any relocation and recovery actions.

The vxrelocd daemon is started from the S95vxvm-recover file. The argument to vxrelocd is the list of people to e-mail notice of a relocation (the default is root); an example follows this list. To disable vxrelocd, you can place a # in front of the line in the S95vxvm-recover file.

A successful hot-relocation process involves:
Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk.
Notification: Notifying the system administrator and other designated users and identifying the affected Volume Manager objects.
Relocation: Determining which subdisks can be relocated, finding space for those subdisks in the disk group, and relocating the subdisks. The system administrator is notified of the success or failure of these actions. Hot relocation does not guarantee the same layout of data or the same performance after relocation.
Recovery: Initiating recovery procedures, if necessary, to restore the volumes and data. Again, the system administrator is notified of the recovery attempt.

For more information, see the vxrelocd(1m) manual page.
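For instance, to have relocation notices mailed to an additional user, you edit the vxrelocd line in the S95vxvm-recover startup file. The exact contents of the file vary by release, and admin1 is a hypothetical user name:

vxrelocd root admin1 &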

How Is Space Selected?


Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk. If there is not enough spare disk space, a combination of spare disk space and free space is used. If no disks have been designated as spares, VxVM uses any available free space in the disk group in which the failure occurs. Free space that you exclude from hot relocation is not used.


How Is Space Selected for Relocation?

When relocating subdisks, VxVM attempts to select a destination disk with the fewest differences from the failed disk:
1 Attempt to relocate to the same controller, same target, and same device as the failed drive.
2 Attempt to relocate to the same controller and same target, but to a different device.
3 Attempt to relocate to the same controller, but to any target and any device.
4 Attempt to relocate to a different controller.
5 Potentially scatter the subdisks to different disks.

A spare disk must be initialized and placed in a disk group as a spare before it can be used for replacement purposes. Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk, if possible. If no disks have been designated as spares, VxVM automatically uses any available free space in the disk group in which the failure occurs. If there is not enough spare disk space, a combination of spare disk space and free space is used. Free space that you exclude from hot relocation is not used.

In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk group that is physically closest to the failing or failed disk. When hot relocation occurs, the failed subdisk is removed from the configuration database. The disk space used by the failed subdisk is not recycled as free space.

Managing Spare Disks: VEA


Select a disk, and select Actions>Set Disk Usage.

Spare check box: Checked, the disk is designated as a spare; unchecked, it is not.
No hot use check box: Checked, the disk is excluded from hot-relocation use; unchecked, it is available for hot relocation.

Managing Spare Disks


Setting up a disk as a spare was introduced in the Managing Disk Groups lesson. You can designate one or more disks as hot-relocation spares when you add a disk to a disk group by using VEA, vxdiskadm, or the CLI command vxedit.

Managing Spare Disks: VEA

Setting Up a Disk As a Spare: VEA

To designate a disk as a hot-relocation spare:
1 Initialize a disk and add it to a disk group.
2 In the main window, highlight the disk to be designated as a hot-relocation spare, and select Actions>Set Disk Usage.
3 In the Set Disk Usage window, mark the Spare check box.
4 Click OK.


Removing the Spare Designation: VEA

If you decide that you want to remove the disk from the pool of hot-relocation spares, open the Set Disk Usage window and clear the Spare check box.

Excluding a Disk from Hot-Relocation Use: VEA

To exclude a disk from hot-relocation use, mark the No hot use check box in the Set Disk Usage window.

Making a Disk Available for Hot Relocation: VEA

If the disk was previously excluded from hot relocation, you can make the disk available for hot relocation by clearing the No hot use check box.

Reserving a Disk: VEA

A reserved disk is not the same as a spare disk. By marking the Reserved check box, you can designate a disk as a reserved disk. A reserved disk is not considered part of the free space pool. If you perform a task that requires disk space, VEA does not allocate space from disks designated as reserved.


Managing Spare Disks: vxdiskadm


To manage spare disks from the vxdiskadm main menu:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 ...
 12 Mark a disk as a spare for a disk group
 13 Turn off the spare flag on a disk
 14 Unrelocate subdisks back to a disk
 15 Exclude a disk from hot-relocation use
 16 Make a disk available for hot-relocation use
 ...

Managing Spare Disks: vxdiskadm

Setting Up a Disk As a Spare: vxdiskadm

By using the vxdiskadm interface, you can set up a disk as a spare disk when you add the disk to a disk group.
1 In the vxdiskadm main menu, select option 1, Add or initialize one or more disks.
2 When vxdiskadm asks whether this disk should become a hot-relocation spare, type y to set up the disk as a spare disk:
  Add disk as a spare disk for datadg? [y,n,q,?] (default: n) y

Alternatively, you can:
1 Select option 12, Mark a disk as a spare for a disk group, in the main menu.
2 When prompted, enter the name of the disk to be marked as a spare:
  Enter disk name [<disk>,list,q,?] datadg01
3 After the disk has been designated as a spare, you receive the following confirmation:
  Marking of datadg01 in datadg as a spare disk is complete.


Removing the Spare Designation: vxdiskadm

To remove the spare designation from a disk:
1 Select option 13, Turn off the spare flag on a disk, in the main menu.
2 When prompted, enter the name of the spare disk:
  Enter disk name [<disk>,list,q,?] datadg01
3 After the spare flag has been turned off, you receive the following confirmation:
  Disk datadg01 in datadg no longer marked as a spare disk.

Excluding a Disk from Hot-Relocation Use: vxdiskadm

To exclude a disk from hot-relocation use:
1 In the main menu, select menu item 15, Exclude a disk from hot-relocation use.
2 When prompted, enter the name of the disk to be excluded:
  Enter disk name [<disk>,list,q,?] datadg01
3 After the disk has been excluded, you receive the following confirmation:
  Excluding datadg01 in datadg from hot-relocation use is complete.

Making a Disk Available for Hot Relocation: vxdiskadm

If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool by using vxdiskadm. To make a previously excluded disk available for hot-relocation use:
1 Select option 16, Make a disk available for hot-relocation use, in the main menu.
2 When prompted, enter the name of the disk to be made available:
  Enter disk name [<disk>,list,q,?] datadg01
3 After the disk has been made available for hot relocation, you receive the following confirmation:
  Making datadg01 in datadg available for hot-relocation use is complete.


Managing Spare Disks: CLI


To designate a disk as a spare, or remove the designation, turn the spare flag on or off:
vxedit -g diskgroup set spare=on|off dm_name

To exclude a disk from hot relocation, or remove the exclusion, turn the nohotuse flag on or off:
vxedit -g diskgroup set nohotuse=on|off dm_name

To force hot relocation to only use spare disks:


Add spare=only to /etc/default/vxassist

To include spare disks in a space check, use -r:


vxassist -g mydg -r maxsize layout=stripe

Managing Spare Disks: CLI

Setting Up a Disk As a Spare: CLI

To set up a disk as a spare from the command line, you use the vxedit command to set the spare flag on for a disk. If the spare flag is set for a disk, the disk is designated for use by the hot-relocation facility. A disk media record with the spare flag set is used only for hot relocation.
vxedit -g diskgroup set spare=on disk_media_name

Note: A disk with the spare flag set is used only for hot relocation. Subsequent vxassist commands do not allocate a subdisk on that disk unless you explicitly specify the disk in the argument of a vxassist command. Available space on a spare disk is not included in the disk group's free space pool.

Removing the Spare Designation: CLI

To remove the spare designation for a disk, you set the spare flag off:
vxedit -g diskgroup set spare=off disk_media_name

Excluding a Disk from Hot Relocation: CLI

To exclude a disk from hot-relocation use, you can set the nohotuse flag:
vxedit -g diskgroup set nohotuse=on disk_media_name
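For example, to mark the disk datadg02 as a spare and exclude datadg03 from hot-relocation use (the disk group and disk media names are hypothetical):

# vxedit -g datadg set spare=on datadg02
# vxedit -g datadg set nohotuse=on datadg03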


Making a Disk Available for Hot Relocation: CLI

If a disk was previously excluded from hot-relocation use, you can make the disk available for hot relocation by setting the nohotuse flag to off:
vxedit -g diskgroup set nohotuse=off disk_media_name

Using Spare Disks Only: CLI

You can force the hot-relocation feature to use only the disks marked as spare by adding the flag spare=only to the /etc/default/vxassist file.

Note: You cannot set hot relocation to force all data from a failed drive to relocate to another single drive. An older feature called hot sparing (vxsparecheck) provided that functionality.

Including Spare Disks in Space Availability: CLI

To include spare disks when determining how much space is available using the maxsize or maxgrow options, you add the -r flag to the vxassist command:
# vxassist -g mydg -r maxsize layout=stripe ncolumns=3

Reserving Disks: CLI

A spare disk is not the same as a reserved disk. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk. To reserve a disk for special purposes, you use the command:
# vxedit set reserve=on diskname

After you type this command, vxassist does not allocate space from the selected disk unless that disk is specifically mentioned on the vxassist command line. For example, if disk disk03 is reserved, the command:
# vxassist make vol03 20m disk03

overrides the reservation and creates a 20-MB volume on disk03. However, the command:
# vxassist make vol04 20m

does not use disk03, even if there is no free space on any other disk. To turn off reservation of a disk, you type:
# vxedit set reserve=off diskname


Disk Replacement Tasks


1 Replace failed disk: Replace the corrupt disk with a new disk.
2 Recover volumes: Start disabled volumes, resynchronize mirrors, and resynchronize RAID-5 parity.

Replacing a Disk
Disk Replacement Tasks

Replacing a failed or corrupted disk involves two main operations:

Disk replacement: When a disk fails, you replace the corrupt disk with a new disk. The disk used to replace the failed disk must be either an uninitialized disk or a disk in the free disk pool. The replacement disk cannot already be in a disk group. If you want to use a disk that exists in another disk group, you must remove the disk from that disk group and place it back into the free disk pool before you can use it as the replacement disk.

Volume recovery: When a disk fails and is removed for replacement, the plex on the failed disk is disabled until the disk is replaced. Volume recovery involves:
Starting disabled volumes. (A volume remains started, and does not need to be restarted, if it has a RAID-5 or mirrored layout, that is, if the volume has one remaining active plex.)
Resynchronizing mirrors
Resynchronizing RAID-5 parity

After successful recovery, the volume is available for use again. Redundant (mirrored or RAID-5) volumes can be recovered by VxVM. Nonredundant (unmirrored) volumes must be restored from backup.


Adding a New Disk


1. Connect the new disk.
2. Get Solaris to recognize the disk: drvconfig; disks
   (Note: In Solaris 7 and later, you can use devfsadm.)
3. Verify that Solaris recognizes the disk: prtvtoc /dev/dsk/device_name
4. Get VxVM to recognize the disk: vxdctl enable
5. Verify that VxVM recognizes the disk: vxdisk list

Note: In VEA, use Actions>Rescan to run disk setup commands appropriate for the OS and ensure that VxVM recognizes newly attached hardware.

Adding a New Disk

Before VxVM can use a new disk, you must ensure that Solaris recognizes the disk. When adding a new disk, follow these steps to ensure that the new disk is recognized:
1 Connect the new disk.
2 Get Solaris to recognize the disk:
  # drvconfig
  # disks
  Note: In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.
3 Verify that Solaris recognizes the disk:
  # prtvtoc /dev/dsk/device_name
4 Get VxVM to recognize that a failed disk is now working again:
  # vxdctl enable
5 Verify that VxVM recognizes the disk:
  # vxdisk list

After Solaris and VxVM recognize the new disk, you can use the disk as a replacement disk.

Note: In VEA, use the Actions>Rescan option to run disk setup commands appropriate for the operating system. This option ensures that VxVM recognizes newly attached hardware.


Replacing a Disk: Methods


VEA:       Highlight a disk and select Actions>Replace Disk. Select the new (replacement) disk and click OK.
vxdiskadm: Option 5, Replace a failed or removed disk
CLI:       vxdg adddisk

Disk Replacement Methods

When you replace a disk using VEA or vxdiskadm, multiple operations are performed to complete the disk replacement. You can use any of the following methods to replace a disk. These methods are detailed in the sections that follow.

VEA        Highlight a disk and select Actions>Replace Disk. Complete the Replace Disk dialog box and click OK.
vxdiskadm  Option 5, Replace a failed or removed disk
CLI        vxdg adddisk (to add the new disk)


Replacing a Disk: VEA


Select Actions>Replace Disk, and select a replacement disk from the list of available disks.

Replacing a Disk: VEA

To replace a disk:
1 In the main window, select the disk to be replaced.
2 In the Actions menu, select Replace Disk.
3 Complete the Replace Disk dialog box by selecting the disk to be used as the new (replacement) disk.
4 Click OK. VxVM replaces the disk and attempts to recover volumes.


Replacing a Disk: vxdiskadm


To replace a failed disk, use option 5:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 ...
 3 Remove a disk
 4 Remove a disk for replacement
 5 Replace a failed or removed disk
 ...

Enter the name of the disk to be replaced, then enter the name of the replacement disk. VxVM automatically tries to recover volumes.

Replacing a Failed Disk: vxdiskadm

To replace a disk that has already failed or that has already been removed, you select option 5, Replace a failed or removed disk. This process creates a public and private region on the new disk and populates the private region with the disk media name of the failed disk.
1 In the main menu, select option 5, Replace a failed or removed disk.
2 When prompted, specify the name of the disk to be replaced:
  Select a removed or failed disk [<disk>,list,q,?] datadg02
3 Next, the disks available for use as replacement disks are displayed. The devices displayed are disks in the free disk pool, that is, disks that have been initialized for use by VxVM but that have not been added to a disk group. Type a device name or press Return to select the default device. You can type none to initialize a different disk to replace the removed disk:
  The following devices are available as replacements:
  c1t0d0s2 c1t1d0s2
  Choose a device, or select "none" [<device>,none,q,?] (default: c1t0d0s2)
4 After you confirm the operation, the following status message is displayed:
  Replacement of disk datadg02 in group datadg with disk device c1t0d0s2 completed successfully.



Replacing a Disk: CLI
Assuming that hot relocation has already removed the failed disk, to replace a failed disk from the command line, you add the new disk in its place:
vxdg -k -g diskgroup adddisk disk_name=device_name
The -k option forces VxVM to take the disk media name of the failed disk and assign it to the new disk. For example, if the failed disk datadg01 in the datadg disk group was removed, and you want to add the new device c1t1d0s2 as the replacement disk:
# vxdg -k -g datadg adddisk datadg01=c1t1d0s2
Note: Exercise caution when using the -k option to vxdg. Attaching the wrong disk with the -k option can cause unpredictable results in VxVM.
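Adding the disk back under the old disk media name does not by itself resynchronize the volumes. A hedged follow-up sketch, using the vxrecover command that is described later in this lesson (-s starts disabled volumes, -b runs recovery in the background):
# vxrecover -sb -g datadg datadg01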


Unrelocating a Disk
VEA: Highlight a disk and select Actions>Undo Hot Relocation. Select the name of the original disk.
vxdiskadm: Option 14, Unrelocate subdisks back to a disk.
CLI: vxunreloc
The vxunreloc Utility
The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk, and recovers the plex associated with the subdisk. VxVM also provides a utility that unrelocates a disk, that is, moves relocated subdisks back to their original disk. After hot relocation moves subdisks from a failed disk to other disks, you can return the relocated subdisks to their original disk locations after the original disk is repaired or replaced. Unrelocation is performed using the vxunreloc utility, which restores the system to the same configuration that existed before a disk failure caused subdisks to be relocated.



Unrelocating a Disk: VEA
To move relocated subdisks back to a disk:
1 In the main window, select the original disk that contained the subdisks before hot relocation.
2 In the Actions menu, select Undo Hot Relocation.
Note: This option is only available after hot relocation or hot sparing has occurred.
3 In the Undo Hot Relocation dialog box, select the disk that contained the subdisks before relocation occurred.
4 To begin the unrelocation operation, click OK.
Note: It is not possible to return relocated subdisks to their original disks if their disk group's relocation information has been cleared.


Unrelocating a Disk: vxdiskadm
Select option 14 from the vxdiskadm main menu:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 . . .
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 . . .

Unrelocating a Disk: vxdiskadm
To unrelocate a disk using the vxdiskadm interface:
1 In the vxdiskadm main menu, select option 14, Unrelocate subdisks back to a disk.
2 When prompted, specify the disk media name of the original disk, that is, where the hot-relocated subdisks originally resided:
Enter the original disk name [<disk>,list,q,?] datadg01
3 Next, if you do not want to unrelocate the subdisks to the original disk, you can select a new destination disk:
Unrelocate to a new disk [y,n,q,?] (default: n) n
4 If moving subdisks to the original offsets is not possible, you can also choose the force option to unrelocate the subdisks to the specified disk, but not necessarily to the exact original offsets:
Use -f option to unrelocate the subdisks if moving to the exact offset fails? [y,n,q,?] (default: n) y
5 Confirm the requested operation:
Requested operation is to move all the subdisks which were hot-relocated from datadg01 back to datadg01 of disk group datadg.
Continue with operation? [y,n,q,?] (default: y) y
6 Upon completion of the unrelocation, the status is displayed:
Unrelocate to disk datadg01 is complete.



Unrelocating a Disk: CLI
To unrelocate a disk from the command line, you use the vxunreloc command:
vxunreloc [-f] [-g diskgroup] [-t tasktag] [-n disk_name] orig_disk_name
In the syntax:
orig_disk_name is the disk where the relocated subdisks originally resided.
-g diskgroup unrelocates the subdisks from the specified disk group.
-t tasktag specifies a tag to be passed to the underlying utility.
-n disk_name unrelocates to a disk other than the original disk. Use this option to specify a new disk media name.
-f forces unrelocation to different offsets if unrelocating to the original disk using the same offsets is not possible.

Viewing Relocated Subdisks: CLI
When a subdisk is hot-relocated, its original disk media name is stored in the sd_orig_dmname field of the subdisk record. You can search this field to find all the subdisks that originated from a failed disk using the vxprint command:
vxprint -g diskgroup -se sd_orig_dmname=disk_name

For example, to display all the subdisks that were hot-relocated from datadg01 within the datadg disk group:
# vxprint -g datadg -se sd_orig_dmname=datadg01
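Putting these commands together, a hedged sketch of undoing relocation from the command line, reusing the datadg01 example above (assumes the disk group's relocation information is still intact):
# vxprint -g datadg -se sd_orig_dmname=datadg01     (list the subdisks that were hot-relocated)
# vxunreloc -g datadg datadg01                      (move those subdisks back to datadg01)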


Recovering a Volume

Recovering a Volume: VEA
To perform volume recovery:
1 In the main window, select the volume to be recovered.
2 Select Actions>Recover Volume.
3 When prompted, confirm that you want to recover the specified volume.



Recovering a Volume: CLI

The vxreattach Command
The vxreattach utility reattaches disks to a disk group and retains the same disk media name. This command attempts to find the name of the drive in the private region and to match it to a disk media record that is missing a disk access record. This operation may be necessary if a disk has a transient failure, for example, if a drive is turned off and then back on, or if Volume Manager starts with some disk drivers unloaded and unloadable.
vxreattach tries to find a disk in the same disk group with the same disk ID for the disks to be reattached. The reattach operation may fail even after finding the disk with the matching disk ID if the original cause (or some other cause) of the disk failure still exists.
/etc/vx/bin/vxreattach [-bcr] [disk_name]

In the syntax:
-b   Performs the reattach operation in the background.
-c   Checks to determine if a reattach is possible. No operation is performed, but the disk group name and the disk media name at which the disk can be reattached are displayed.
-r   Attempts to recover stale plexes of any volumes on the failed disk by invoking vxrecover.
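A hedged usage sketch; the option combinations shown are assumptions consistent with the syntax above, not commands quoted from this guide:
# /etc/vx/bin/vxreattach -c     (check whether a reattach is possible; makes no changes)
# /etc/vx/bin/vxreattach -r     (reattach the disk and recover stale plexes by invoking vxrecover)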


The vxrecover Command
To perform volume recovery operations from the command line, you use the vxrecover command. The vxrecover program performs plex attach, RAID-5 subdisk recovery, and resynchronization operations for specified volumes (volume_name) or for volumes residing on specified disks (disk_name). You can run vxrecover at any time to resynchronize mirrors.
Note: The vxrecover command only works on a started volume. A started volume displays an ENABLED state in vxprint -ht.
Recovery operations are started in an order that prevents two concurrent operations from involving the same disk. Operations that involve unrelated disks run in parallel.
vxrecover [-bnpsvV] [-g diskgroup] [volume_name|disk_name]

In the syntax:
-b   Performs recovery operations in the background. If used with -s, volumes are started before recovery begins in the background.
-g diskgroup   Limits operation of the command to the given disk group, as specified by disk group ID or disk group name.
-n   Does not perform any recovery operations. If used with -s, volumes are started, but no other actions are taken. If used with -p, the only action of vxrecover is to print a list of startable volumes.
-p   Prints the list of selected volumes that are startable.
-s   Starts disabled volumes that are selected by the operation. With -s and -n, volumes are started, but no other recovery takes place.
-v, -V   Displays information about each task started by vxrecover. For recovery operations (as opposed to start operations), prints a completion status when each task completes. The -V option displays more detailed information.

The vxrecover Command: Examples
After replacing the failed disk datadg01 in the datadg disk group, and adding the new disk c1t1d0s2 in its place, you can attempt to recover the volume datavol:
# vxrecover -bs -g datadg datavol
To recover, in the background, any detached subdisks or plexes that resulted from replacement of the disk datadg01 in the datadg disk group:
# vxrecover -b -g datadg datadg01
To monitor the operations during the recovery, you add the -v option:
# vxrecover -v -g datadg datadg01

Recovering Volumes: vxdiskadm
The vxdiskadm utility automatically attempts to recover volumes by invoking the vxreattach and vxrecover utilities.



Protecting the VxVM Configuration

Precautionary Tasks
To protect the VxVM configuration, you can perform two precautionary tasks:
- Save a copy of the VxVM configuration using the vxprint command.
- Save a copy of the /etc/system file.

The vxprint Command
The vxprint utility displays complete or partial information from records stored in the VxVM configuration database. You can also use the vxprint command to save the VxVM configuration to a file and later use that file to recover removed volumes. By saving the output of the vxprint command to a file, you can then use the vxmake command with the saved file to restore the configuration, if needed. When saving the VxVM database configuration, you use the -m option. This option displays all information about each record in a format that is useful as input to the vxmake utility. To view the saved records, you can use the -D - option. This option reads a configuration from the standard input.


Saving the Configuration Database
To save a VxVM disk group configuration database:
# vxprint -g diskgroup -hmQqr > backup.diskgroup

This command saves the definition of the volumes, plexes, subdisks, and the disk group itself. You can also use the command:
# vxprint -g diskgroup -hmvpsQqr > backup.diskgroup

This command saves the definition of the volumes, plexes, and subdisks only.

Displaying a Saved Configuration
To display the saved configuration information:
# vxprint -D - -rht < backup.diskgroup

This command displays the entire disk group definition, with all its objects.

Recovering a Lost Volume
To recover a lost volume using the saved configuration:
# vxprint -D - -rhtmqQ lostvolume < backup.diskgroup > restoredvolume

This command extracts the object definitions of the lost volume from the backup file and writes them to the file restoredvolume. To implement the object definitions in restoredvolume as a real volume:
# vxmake -g diskgroup -d restoredvolume

To start the restored volume, and recover its plexes, if appropriate:


# vxrecover -Es restoredvolume

Note: For more information on the vxmake command, see the vxmake(1m) manual page.
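An end-to-end hedged sketch using names from earlier examples in this course (disk group datadg, volume vol01); the backup file name backup.datadg and the intermediate file vol01.d are illustrative assumptions:
# vxprint -g datadg -hmQqr > backup.datadg             (taken while the configuration was healthy)
# vxprint -D - -rhtmqQ vol01 < backup.datadg > vol01.d
# vxmake -g datadg -d vol01.d
# vxrecover -Es vol01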



Saving the /etc/system File
By saving the /etc/system file, you can recover your root volume if the system becomes unbootable. The /etc/system file has entries placed in it for root encapsulation. If the root volume has problems with its plexes or subdisks, it may not be possible to boot the system. If you maintain a saved system file, VERITAS support can assist in recovering your volume and system.
To specify the saved system file to the boot program, boot the system with the boot -a command. When the system prompts for the name of the system file, type the path of the saved system file.
If you have already encapsulated the root, and want to save the system file anyway, you need to edit the file:
1 Copy the file to a new name in /etc. For example:
# cp /etc/system /etc/system.prevm
2 Edit /etc/system.prevm to place a * in front of any entries between the lines * vxvm_START (do not remove) and * vxvm_END (do not remove) that do not start with the word forceload (a sketch follows this procedure). Some of the lines may appear as rootdev:/pseudo/vxio@0:0 and set vxio:vol_rootdev_is_volume=1.
3 Call support if you encounter problems booting your root disk, and let them know you have a saved system file that does not include an encapsulated root.
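A hedged sketch of how the edited section of /etc/system.prevm might look after step 2; the forceload entries shown are illustrative, and the exact lines vary by system:
* vxvm_START (do not remove)
forceload: drv/vxio
forceload: drv/vxspec
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
* vxvm_END (do not remove)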


Summary
You should now be able to:
- Describe mirror resynchronization processes.
- Describe the hot-relocation process.
- Manage spare disks.
- Replace a failed disk.
- Return relocated subdisks back to their original disk.
- Recover a volume.
- Describe two disaster recovery preparation tasks.

This lesson introduced basic recovery concepts and techniques. This lesson described how data consistency is maintained after a system crash and how hot relocation restores redundancy to failed VxVM objects. This lesson also described how to manage spare disks, replace a failed disk, and recover a volume.

Next Steps
The next lesson examines specific types of disk failure in greater detail, how VxVM reacts to disk failures, and how to resolve disk failures.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
- VERITAS Volume Manager User's Guide (VERITAS Enterprise Administrator): This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
- VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.



Lab 14: Introduction to Recovery

Goal
In this lab, you perform a variety of basic recovery operations.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.


15

Disk Problems and Solutions


Introduction

Overview
This lesson describes various disk failures that you may experience, how VERITAS Volume Manager (VxVM) reacts to the failures, and how to apply basic recovery techniques to resolve different types of disk failure. This lesson provides step-by-step solutions for each disk failure scenario.

Importance
By understanding how VxVM reacts to disk failures, you can troubleshoot and recover from disk failure problems.

Outline of Topics
- Identifying I/O Failure
- Disk Failure Types
- Resolving Permanent Disk Failure
- Resolving Temporary Disk Failure
- Resolving Intermittent Disk Failure



Objectives
After completing this lesson, you will be able to:
- Identify and interpret I/O failure through console messages, disk records, and volume states.
- Describe three types of disk failure.
- Resolve permanent disk failures by using VxVM commands.
- Resolve temporary disk failures by using VxVM commands.
- Resolve intermittent disk failures by using VxVM commands.



Identifying I/O Failure

Disk Failure
Data availability and reliability are ensured through most failures if you are using VxVM redundancy features, such as mirroring or RAID-5. If the volume layout is not redundant, loss of a drive may result in loss of data and may require recovery from backup. For I/O failure on a nonredundant volume, VxVM reports the error, but does not take any further action.

Disk Failure Handling
When a drive becomes unavailable during an I/O operation or experiences uncorrectable I/O errors, the operating system detects SCSI failures and reports them to VxVM. The method that VxVM uses to process the SCSI failure depends on whether the failure occurs on a nonredundant or a redundant volume.

Failure on a Nonredundant Volume
If the I/O failure occurs on a nonredundant volume:
- VxVM prints path failure and uncorrectable I/O error messages on the console.
- VxVM does not detach the VxVM disk on the failed drive.
- VxVM does not change the states of the volumes on the disk.


Failure on a Redundant Volume
If the I/O failure occurs on a redundant volume:
- VxVM prints error messages on the console.
- VxVM checks to determine if it can access the private region on the disk.
If VxVM can still access the private region on the disk:
- VxVM marks the disk as FAILING.
- The plex with the affected subdisk is set with the IOFAIL condition flag.
- Hot relocation relocates the affected subdisk, if it is enabled and if there is available redundancy.
If VxVM cannot access the private region on the disk:
- The VxVM disk on the failed drive is detached and marked as FAILED.
- All plexes using that disk are changed to the NODEVICE state.
- Nonredundant volumes on the disk are disabled.
- If hot relocation is enabled, hot relocation is performed for redundant volumes.

FAILING vs. FAILED Disks
Volume Manager differentiates between FAILING and FAILED drives:
- FAILING: If there are uncorrectable I/O failures on the public region of the drive, but VxVM can still access the private region of the drive, the disk is marked as FAILING.
- FAILED: If VxVM cannot access the private region or the public region, the disk is marked as FAILED.
The condition flags and object states are described in detail in the next lesson.
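A hedged sketch of how a FAILING disk might appear in vxdisk list output; the exact status string is an assumption based on the states described above:
# vxdisk list
DEVICE       TYPE      DISK      GROUP     STATUS
c1t2d0s2     sliced    datadg02  datadg    online failing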



Identifying Failure: Console Messages
If a drive fails and is detached by VxVM, the configuration copy in the private region of that disk is taken offline, and another copy is activated by default on another disk in the same disk group. If the only disk in a disk group fails, VxVM configuration data as well as the data itself may be lost.

Console Messages
If the disk named c1t2d0 fails, you receive console messages similar to:
WARNING: /pci@1f,0/pci@1/pci@3/SUNW,isptwo@4/sd@2,0 (sd2):
   SCSI transport failed: reason incomplete: retrying command
...
WARNING: /pci@1f,0/pci@1/pci@3/SUNW,isptwo@4/sd@2,0 (sd2):
   disk not responding to selection
NOTICE: vxvm:vxdmp: disabled path 32/0x10 belonging to the dmpnode 154/0x10
NOTICE: vxvm:vxdmp: disabled dmpnode 154/0x10
NOTICE: vxdmp: Path failure on 32/20
WARNING: vxvm:vxio: error on Plex vol01-02 while writing volume vol01 offset 183680 length 0
WARNING: vxvm:vxio: Plex vol01-02 detached from volume vol01
WARNING: vxvm:vxio: datadg02-01 Subdisk failed in plex vol01-02 in vol vol01
vxvm:vxconfigd: NOTICE: Offlining config copy 1 on disk c1t2d0s2: Reason: Disk write failure
vxvm:vxconfigd: NOTICE: Detached disk datadg02
vxvm:vxassist: ERROR: Cannot allocate space to replace subdisks



Identifying Failure: Disk Records

VxVM Disk Records Before the Failure
# vxdisk list
DEVICE       TYPE      DISK      GROUP     STATUS
c0t0d0s2     sliced    rootdisk  rootdg    online
c1t1d0s2     sliced    datadg01  datadg    online
c1t2d0s2     sliced    datadg02  datadg    online
c1t3d0s2     sliced    -         -         online
c1t4d0s2     sliced    -         -         online
VxVM Disk Records After the Failure
# vxdisk list
DEVICE       TYPE      DISK      GROUP     STATUS
c0t0d0s2     sliced    rootdisk  rootdg    online
c1t1d0s2     sliced    datadg01  datadg    online
c1t2d0s2     sliced    -         -         online
c1t3d0s2     sliced    -         -         online
c1t4d0s2     sliced    -         -         online
-            -         datadg02  datadg    failed was:c1t2d0s2

When VxVM detaches the disk, it breaks the mapping between the VxVM disk, that is, the disk media record (datadg02), and the disk drive (c1t2d0s2).


However, information on the disk media record, such as the disk media name, the disk group, the volumes, plexes, and subdisks on the VxVM disk, and so on, is maintained in the configuration database in the active private regions of the disk group. The output of vxdisk list displays the failed drive as online until the VxVM configuration daemon is forced to reread all the drives in the system and to reset its tables. To force the VxVM configuration daemon to reread all the drives in the system:
# vxdctl enable

After you run this command, the drive status changes to error for the failed drive, and the disk media record changes to failed.
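A hedged sketch of the listing at that point, a plausible rendering of the status changes just described rather than output quoted from this guide:
# vxdctl enable
# vxdisk list
DEVICE       TYPE      DISK      GROUP     STATUS
c1t2d0s2     sliced    -         -         error
-            -         datadg02  datadg    failed was:c1t2d0s2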



Identifying Failure: Volume States

Volume States Before the Failure
VxVM object states are discussed in more detail in the next lesson. All of the volume and plex states before the failure are ENABLED and ACTIVE, which indicates that the volumes are already started and all the volumes and plexes are actively participating in user I/O activities.
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     c1t2d0s2     sliced    1519     4152640  -

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         c1t2d0   ENA

v  vol02        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         c1t2d0   ENA



Volume States After the Failure
After the failure, notice the NODEVICE status of the disk media record, datadg02, and the plexes using it. The VxVM disk, datadg02, was associated with the failed drive, c1t2d0.
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     -            -         -        -        NODEVICE

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        DISABLED  NODEVICE 205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         -        RLOC

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  NODEVICE 205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         -        NDEV



Example: Degraded Plex of a RAID-5 Volume
In the RAID-5 volume, note the NDEV state of the subdisk that was on the failed drive. Note also that the volume is still ENABLED and ACTIVE (started), and therefore its objects are open to I/O. NDEV and NODEVICE are the only indicators that something is wrong.
# vxprint -g datadg -ht
. . .
dg datadg       default      default   45000    955106287.2817.train06
dm datadg01     -            -         -        -        NODEVICE
dm datadg02     c1t2d0s2     sliced    1519     4152640  -
dm datadg03     c1t3d0s2     sliced    1519     4152640  -
dm datadg04     c1t4d0s2     sliced    1519     4152640  -

v  raid5vol     -            ENABLED   ACTIVE   409600   RAID      -        raid5
pl raid5vol-01  raid5vol     ENABLED   ACTIVE   410368   RAID      3/32     RW
sd datadg01-01  raid5vol-01  datadg01  0        205200   0/0       -        NDEV
sd datadg02-01  raid5vol-01  datadg02  0        205200   1/0       c1t2d0   ENA
sd datadg03-01  raid5vol-01  datadg03  0        205200   2/0       c1t3d0   ENA
pl raid5vol-02  raid5vol     ENABLED   LOG      1520     CONCAT    -        RW
sd datadg04-01  raid5vol-02  datadg04  0        1520     0         c1t4d0   ENA



The procedure for fixing a RAID-5 volume is the same as fixing a mirrored volume. Therefore, the steps for recovering from disk failures apply to RAID-5 volumes without any change. The differing factors between recovering the two types of volumes are how the volumes are recovered and how long each volume takes to recover.
# vxprint -l raid5vol
Disk group: datadg

Volume:   raid5vol
info:     len=409600
type:     usetype=raid5
state:    state=ACTIVE kernel=ENABLED cdsrecovery=0/0 (clean)
assoc:    plexes=raid5vol-01,raid5vol-02
policies: read=RAID exceptions=GEN_DET_SPARSE
flags:    closed degraded writecopy writeback
logging:  type=RAID5 loglen=960 serial=0/0 (enabled)
apprecov: seqno=0 recov_id=0
device:   minor=45000 bdev=155/45000 cdev=155/45000 path=/dev/vx/dsk/datadg/raid5vol
perms:    user=root group=root mode=0600

# vxinfo -p -g datadg
vol   raid5vol      raid5     Started Degraded
plex  raid5vol-01   ACTIVE DEGRADED
plex  raid5vol-02   LOG


Disk Failure Types
The three basic types of disk failure are:
- Permanent failures
- Temporary failures
- Intermittent failures
Note: When recovering from disk failures, all commands work on layered volumes the same way as they do on nonlayered volumes.

Three Disk Failure Types
The three basic types of disk failure are permanent, temporary, and intermittent.
- Permanent disk failures are failures in which the data on the drive can no longer be accessed for any reason (that is, the failure is uncorrectable). In this case, the data on the disk is lost.
- Temporary disk failures are disk devices that have failures that are repaired some time later. This type of failure includes a drive that is powered off and back on, or one that has a loose SCSI connection that is fixed later. In these cases, the data is still on the disk, but it may not be synchronized with the other disks being actively used in a volume.
- Intermittent disk failures are failures that occur off and on and that involve problems that cannot be consistently reproduced. Intermittent failures are usually hardware failures localized to a part of the disk, such as bad block reads. If the bad block reads cannot be revectored, a disk with these problems may completely fail in the near future. By replacing a disk that is experiencing this type of failure, you can avoid an unexpected failure later.



Resolving Permanent Disk Failure

Volume States After Permanent Disk Failure
In this example, assume that the failed disk is datadg02 (c1t2d0s2). Volume states after permanent disk failure are displayed with vxprint:
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     -            -         -        -        NODEVICE

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        DISABLED  NODEVICE 205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         -        RLOC

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  NODEVICE 205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         -        NDEV



Permanent Disk Failure
When permanent disk failure occurs, VxVM determines that it is not able to perform I/O to either the public or the private region. If a drive completely fails, your only option is to replace the failed drive with another drive. You can replace a drive by using any of the VxVM interfaces. With permanent disk failures, when you replace the failed drive with a new one, the data is no longer on the disk. If there is a nonredundant volume on the failed drive, the data is lost. To recover the data, you must restore from backup after the VxVM configuration is re-created on the new drive.

Resolving Permanent Disk Failure: Process
Assume that the failed disk is datadg02 (c1t2d0s2) and the new disk used to replace it is c1t3d0s2, which is originally uninitialized. To recover from the permanent failure:
1 Initialize the new drive:
# vxdisksetup -i c1t3d0
2 Attach the disk media name (datadg02) to the new drive:
# vxdg -g datadg -k adddisk datadg02=c1t3d0s2
Note: If there are free disks that are already initialized and that do not belong to any disk group, you can use one of these disks to replace the failed drive. You do not have to initialize a new disk with the same disk access name.
3 Recover the redundant volumes:
# vxrecover

4 Start any nonredundant volumes:
# vxvol -g datadg -f start vol02
Caution: Only use the -f flag in vxvol start to start a nonredundant volume that was ACTIVE and ENABLED prior to the disk failure. You must restore the data of any nonredundant volumes from backup after you start the volume.

Replacing Disks: Other Methods
Alternatively, you can use the VEA interface or vxdiskadm to replace the failed disk:
- In VEA, select the disk to be replaced, and select Actions>Replace Disk.
- In vxdiskadm, select option 5, Replace a failed or removed disk.
When you use VEA or option 5 of vxdiskadm, VxVM performs all necessary steps except for starting nonredundant volumes. VxVM:
- Creates a new public region and a new private region on the drive (if initialized)
- Configures the drive to have the identity of the failed drive (disk media name)
- Sets up all the subdisks on the drive
- Recovers redundant volumes
After replacing the disk, you must then force start any nonredundant volumes in order to restore data on them.



Volume States After Attaching the Disk Media
This example displays volume states after attaching the disk media to replace a permanently failed disk:
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     c1t2d0s2     sliced    1519     4152640  -

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        DISABLED  IOFAIL   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         c1t2d0   RLOC

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         c1t2d0   ENA



Volume States After Recovering Redundant Volumes
When you start the recovery on redundant volumes, the plex that is not synchronized with the mirrored volume has a state of ENABLED and STALE. During the period of synchronization, the plex is write-only (WO). After the synchronization is complete, the plex state changes to ENABLED and ACTIVE and becomes read-write (RW).
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     c1t2d0s2     sliced    1519     4152640  -

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         c1t2d0   ENA

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         c1t2d0   ENA



Resolving Temporary Disk Failure

Temporary Disk Failure
With temporary disk failures, the data is still on the disk, but the data is not synchronized with the rest of the data that is being actively used for redundant volumes. If a drive experiences a temporary failure, you can reattach the drive to Volume Manager and continue to use the drive. After the disk is reattached, the data for redundant volumes must be synchronized using the existing data on the healthy drives. The data for nonredundant volumes must be checked by the application using it.

Resolving Temporary Disk Failure: Process
Assume that the drive that experienced the temporary failure is c1t2d0s2. To recover from the temporary failure:
1 Fix the failure, that is, turn on the drive and tighten the SCSI cable.
2 If you moved the drive to a new location in the SCSI chain, or to a new SCSI chain, ensure that the operating system recognizes the device:
# drvconfig
# disks
Note: Because you have not changed the SCSI location of the drive, running the first two commands (drvconfig and disks) may not be necessary. However, running these commands ensures that the disk spins up before you continue. In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.

3 Verify that the operating system recognizes the device:
# prtvtoc /dev/rdsk/c1t2d0s2
4 Force the VxVM configuration daemon to reread all of the drives in the system:
# vxdctl enable
5 Reattach the device to the disk media record:
# vxreattach
Important: If you run the vxdiskadm option 5 command at this point, you must not reinitialize the disk. If you choose to reinitialize the drive, vxdiskadm destroys and rewrites the private region. If the drive was initialized differently prior to the failure (for example, if the private region existed at the end of the disk), nonredundant volumes using the drive lose their data.
6 Recover the redundant volumes:
# vxrecover
7 Start any nonredundant volumes:
# vxvol -g datadg -f start vol02
Caution: Only use the -f flag in vxvol start to start a nonredundant volume that was ACTIVE and ENABLED prior to the disk failure.
8 The data on the nonredundant volumes may not be consistent. The data must be checked by the relevant application using the data, for example, by using fsck if the application is a file system:
# fsck /dev/vx/rdsk/diskgroup/volume_name



Volume States After Reattaching the Disk
After reattaching the disk, volume and plex states are as follows:
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     c1t2d0s2     sliced    1519     4152640  -

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        DISABLED  IOFAIL   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         c1t2d0   RLOC

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         c1t2d0   ENA

Notice the different states of vol01 and vol02. The vol01 volume can still receive I/O and contains a plex in the IOFAIL state. This indicates that there was a hardware failure underneath the plex while the plex was online. Also notice that the only plex of vol02 has a state of RECOVER. This state means that VxVM believes that the data in this plex needs to be recovered. In a temporary disk failure, where the disk may have been turned off during an I/O stream, the data on that disk may still be valid. Therefore, you should not always interpret the RECOVER state in terms of bad data on the disk.


Volume States After Recovery
When you start vxrecover, the plex that is not synchronized with the mirrored volume changes its state to ENABLED and STALE. During the period of synchronization, the plex is write-only (WO). Once the synchronization is complete, the plex changes back to ENABLED and ACTIVE and becomes read-write (RW). Volume states after vxrecover completes:
# vxprint -g datadg -ht
. . .
dg datadg       default      default   64000    954250803.2005.train06
dm datadg01     c1t1d0s2     sliced    1519     4152640  -
dm datadg02     c1t2d0s2     sliced    1519     4152640  -

v  vol01        -            ENABLED   ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01     datadg01  0        205200   0         c1t1d0   ENA
pl vol01-02     vol01        ENABLED   ACTIVE   205200   CONCAT    -        RW
sd datadg02-01  vol01-02     datadg02  0        205200   0         c1t2d0   ENA

v  vol02        -            DISABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol02-01     vol02        DISABLED  RECOVER  205200   CONCAT    -        RW
sd datadg02-02  vol02-01     datadg02  205200   205200   0         c1t2d0   ENA


Intermittent Disk Failure: Resolving


If the volume is not redundant, attempt to mirror the volume:
- If you can mirror the volume, continue with the procedure for redundant volumes.
- If you cannot mirror the volume, back up the disk and re-create the volume on another drive.

If the volume is redundant:
- Prevent read I/O from accessing the failing disk by changing the volume read policy.
- Evacuate the failing disk to another drive.
- Remove the failing disk.
- Set the volume read policy back to the original policy.

Resolving Intermittent Disk Failure


Intermittent Disk Failure

Intermittent disk failures are failures that occur off and on and involve problems that cannot be consistently reproduced. These failures are the most difficult for Solaris to handle and can slow the system down considerably while Solaris attempts to determine the nature of the problem. If you encounter intermittent failures, move the data off of the disk and remove the disk from the system to avoid an unexpected failure later.

The method that you use to resolve an intermittent disk failure depends on whether the associated volumes are redundant or nonredundant.

If the volume is not redundant, attempt to mirror the volume:
- If you can mirror the volume, continue with the procedure for redundant volumes.
- If you cannot mirror the volume, back up the disk and re-create the volume on another drive.

If the volume is redundant:
1 Prevent read I/O from accessing the failing disk by changing the read policy.
2 Evacuate the failing disk to another drive.
3 Remove the failing disk.
4 Set the volume read policy back to the original policy.


Removing a Failing Drive


To move data to a specific drive and remove the failing drive:

1 Set the volume read policy to PREFER.
2 Evacuate the data to other drives by using vxdiskadm option 7, Move volumes from a disk.
3 Remove the failing disk by using vxdiskadm option 3, Remove a disk.
4 Set the volume read policy back to the original policy.


Removing a Failing Drive

Assume that datadg02 (c1t2d0s2, with plex vol01-02 from the mirrored volume vol01) is the drive experiencing intermittent problems. To recover:
1 Set the read policy to read from a preferred plex that is not on the failing drive before evacuating the disk. This technique prevents VxVM from accessing the failing drive during a read. Ensure that you set the read policy for all of the volumes using the device. If possible, you should also prevent writes from occurring to the volumes on the failing disk. For example, to set the read policy to use the plex vol01-01:
# vxvol -g datadg rdpol prefer vol01 vol01-01
2 Evacuate data from the failing drive to one or more other drives. From the vxdiskadm main menu, select option 7, Move volumes from a disk. Evacuate the volumes on datadg02 to another disk in the disk group, such as datadg03. (See the sketch after this procedure for a command-line alternative.)
Note: If you do not care which drives the data is moved to, you can use vxdiskadm option 3, Remove a disk. When prompted, evacuate the data. VxVM determines where to move the data and, after the evacuation, removes the disk.
3 Remove the failing disk. From the vxdiskadm main menu, select option 3, Remove a disk. Remove the disk datadg02.
4 Set the volume read policy back to the original read policy:
# vxvol -g datadg rdpol select vol01
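If you prefer a scriptable alternative to the vxdiskadm menus for step 2, the vxevac utility performs the same evacuation. This is a minimal sketch that assumes the spare disk datadg03 from the example above; if you omit the target disk, vxevac chooses the destination itself:

# vxevac -g datadg datadg02 datadg03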


Forced Removal
To forcibly remove a disk and not evacuate the data:

1 Use vxdiskadm option 4, Remove a disk for replacement. VxVM handles the drive as if it has already failed.
2 Use vxdiskadm option 5, Replace a failed or removed disk.


Forced Removal

If volumes are performing writes, and each write is taking a long time to succeed because of the intermittent failures, the system may slow down significantly and fall behind in its work. In this scenario, you may need to forcibly remove the disk without evacuating the data:
1 Use vxdiskadm option 4, Remove a disk for replacement. With this option, VxVM treats the drive as though it has already failed. The drawback of this approach is that all volumes that have only two mirrors (or that have a RAID-5 layout for redundancy) and that use this drive are no longer redundant until you replace the drive. During this period, if a bad block occurs on the remaining disk, you cannot easily recover and may have to restore from backup. You must also restore from backup all nonredundant volumes that use the drive.
2 After you remove the drive, replace it in the same way as when a drive completely fails. To replace a drive, you can use vxdiskadm option 5, Replace a failed or removed disk.
Note: The state of the disk is set to REMOVED when you use vxdiskadm option 4. In terms of fixing the drive, the REMOVED state is the same as NODEVICE. You must use vxdiskadm option 5 to replace the drive.
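For reference, the removal and replacement can also be scripted. The following is a sketch only, not the course procedure: it assumes the disk media name datadg02 and the replacement device c1t2d0s2, and relies on the -k option, which keeps the disk media record so that plexes retain their references:

# vxdg -g datadg -k rmdisk datadg02
(physically replace or repair the drive; for a brand-new disk, initialize it first, for example with vxdisksetup -i c1t2d0)
# vxdg -g datadg -k adddisk datadg02=c1t2d0s2
# vxrecover -g datadg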


The failing Flag


When an unrecoverable error occurs on a drive, VxVM sets the failing flag to on. This flag stops top-down utilities from allocating further space on the drive. When the drive is fixed, you can turn the flag off:
# vxedit set failing=off datadg02


The failing Flag

If the failing flag is set for a disk, the disk space is neither used as free space nor used by the hot-relocation facility. All remaining free space on the logical disk is unavailable until the flag is cleared. The purpose of the failing flag is to prevent hot relocation from moving data from a failed drive onto a drive that may itself be failing soon. You can continue using the data on the drive, but you cannot use the drive for new data until you fix it. VxVM sets the failing flag to on when unrecoverable errors occur on a drive. The flag stops top-down utilities from allocating further space on the drive. When the drive is fixed, you can turn the failing flag off:
# vxedit set failing=off datadg02
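To verify the flag before clearing it, you can inspect the disk's flags field. The excerpt below is an illustrative sketch; the exact set of flags varies by configuration:

# vxdisk list c1t2d0s2 | grep flags
flags:     online ready private autoconfig failing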


Summary
You should now be able to:
- Identify and interpret I/O failure through console messages, disk records, and volume states.
- Describe three types of disk failure.
- Resolve permanent disk failures by using VxVM commands.
- Resolve temporary disk failures by using VxVM commands.
- Resolve intermittent disk failures by using VxVM commands.

Summary
This lesson described various disk failures that you may experience and how VERITAS Volume Manager (VxVM) reacts to them. This lesson also provided step-by-step solutions for each disk failure scenario.

Next Steps
The next lesson introduces VxVM object states and the tools you can use to solve data consistency problems by modifying these states.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VxVM.
- VERITAS Volume Manager Troubleshooting Guide: This guide provides information about how to recover from hardware failure and how to understand and deal with VxVM error messages.
- VERITAS Volume Manager User's Guide: VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.


Lab 15
Lab 15: Disk Problems and Solutions
This lab simulates temporary, permanent, and intermittent disk failures. In each scenario, you must recover all of the redundant and nonredundant volumes that were on the failed drive. Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 15: Disk Problems and Solutions


Goal
This lab simulates temporary, permanent, and intermittent disk failures. In each scenario, you must recover all of the redundant and nonredundant volumes that were on the failed drive.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Review Answers and Lab Solutions.


16

Plex Problems and Solutions

Overview
(Course roadmap slide: this lesson corresponds to Plex Problems in the Recovery and Troubleshooting portion of the course, between Disk Problems and Boot Disk Mirroring.)

Introduction
Overview
This lesson introduces the various states in which Volume Manager (VxVM) objects, such as volumes and plexes, can exist. This lesson also describes the tools that you can use to solve problems related to data consistency by analyzing and changing these states.

Importance
By understanding how VxVM represents plex, volume, and kernel states, you can troubleshoot and recover from problems with plexes.

Outline of Topics
- Displaying State Information for VxVM Objects
- Interpreting Plex States
- Interpreting Volume States
- Interpreting Kernel States
- Resolving Plex Problems
- Analyzing Plex Problems


Objectives
After completing this lesson, you will be able to:
- Display state information for VxVM objects.
- Interpret plex states and condition flags.
- Interpret volume states.
- Interpret kernel states.
- Fix plex and volume failures by using VxVM tools.
- Resolve data consistency problems by analyzing and changing plex and volume states.


Objectives
After completing this lesson, you will be able to:
- Display state information for VxVM objects.
- Interpret plex states and condition flags.
- Interpret volume states.
- Interpret kernel states.
- Fix plex and volume failures by using VxVM tools.
- Resolve data consistency problems by analyzing and changing plex and volume states.


How Volumes Are Created


vxassist is a top-down utility (you specify only the properties of the volume you want to create) that creates volumes bottom-up:
1 Create subdisks.
2 Associate subdisks to plexes.
3 Associate plexes to a volume.
4 Initialize the volume's plexes.
5 Start the volume.


Displaying State Information for VxVM Objects


How Volumes Are Created
In order to troubleshoot and solve problems associated with mirrors, you must understand how volumes are created. The vxassist utility is a top-down utility, which means that you specify only the properties of the volume that you want to create. However, vxassist actually creates the volume using a bottom-up approach, which means that subdisks are created first and used to build the volume. To create a volume, vxassist follows this process:
1 Decide which disks to place the data onto and create subdisks on those drives.
2 Create mirrors and associate each of the subdisks to the mirrors that will be used in the volume.
3 Create the volume and associate the mirrors to the volume. The result is a volume with one or more plexes.
4 Initialize the volume's plexes by selecting the plex that represents the data for the volume. You perform this action by using the vxvol init command. Initializing a volume is like a low-level format command on a disk drive: it states how to get to the data. (By default, vxassist creates both plexes as having the data and copies them together using read-writeback synchronization.)
5 Start the volume. Starting a volume involves enabling the area that the volume represents on disk, and enabling its object in the disk group configuration database, to accept user and system I/O.
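For contrast, the same bottom-up sequence can be driven by hand with vxmake. The following is a minimal sketch only, assuming a disk group datadg, disk media names datadg01 and datadg02, and a 205200-sector mirrored volume; it is not required when you use vxassist:

# vxmake -g datadg sd datadg01-01 disk=datadg01 offset=0 len=205200
# vxmake -g datadg sd datadg02-01 disk=datadg02 offset=0 len=205200
# vxmake -g datadg plex vol01-01 sd=datadg01-01
# vxmake -g datadg plex vol01-02 sd=datadg02-01
# vxmake -g datadg -U fsgen vol vol01 plex=vol01-01,vol01-02
# vxvol -g datadg init clean vol01 vol01-01
# vxvol -g datadg start vol01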


Initializing a Volume's Plexes


vxvol init init_type volume [plexes]
init_type: zero | active | clean | enable
Example:
# vxvol init clean vol01 vol01-01


Initializing a Volume's Plexes
The vxvol init command performs an initialization action on a volume:
vxvol init init_type volume [plexes]

The action to perform is specified by the init_type operand, which can have one of the following values:
- zero: Sets all plexes to a value of 0, which means that all bytes are null. This command automatically starts the volume, because the only way to zero out the data is to start the volume. Only started volumes can perform I/O.
- active: Sets all plexes to active and enables the volume and its plexes. Use this option to initialize a single- or multiple-plex volume where all plexes are known to have identical contents. Because both the volume and the plexes are already enabled, you do not need to issue the vxvol start command.
- clean: If you know that one of the plexes has the correct data, you can select that particular plex to represent the data of the volume. In this case, all other plexes copy their content from the clean plex when the volume is started.
- enable: Temporarily enables the volume so that data can be loaded onto it to make the plexes consistent. After all of the content of the volume has been loaded, use init active to fully enable the volume. (See the sketch below.)
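A minimal sketch of the enable workflow, assuming a stopped, newly created volume vol01 whose contents are to be loaded from a backup (the restore step itself is illustrative):

# vxvol -g datadg init enable vol01
(load the data, for example by restoring a backup image onto the raw volume)
# vxvol -g datadg init active vol01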


Identifying Plex Problems


To identify and solve plex problems, use the following information:
- Plex states
- Volume states
- Plex kernel states
- Volume kernel states
- Object condition flags

Commands to display plex, volume, and kernel states:
# vxprint -g diskgroup -ht [volume]
# vxinfo -p -g diskgroup [volume]

Identifying Plex Problems
You can use the STATE fields in the output of the vxprint and vxinfo commands to determine that a problem has occurred and to assist in determining how to fix it. VxVM displays state information for:
- Plex states
- Volume states
- Plex kernel states
- Volume kernel states
The plex and volume state fields are not always accurate, because administrators can change them. However, kernel state flags are absolute; only VxVM can change them, so they are always accurate. A particular plex state does not necessarily mean that the data is good or bad. The plex state represents VxVM's perception of the data in a plex. VxVM is usually conservative: if VxVM has any reason to believe that data is not synchronized, then the plex states are set accordingly.

Displaying State Information
To display plex, volume, and kernel states, you can use the vxprint and vxinfo commands:
vxprint -g diskgroup -ht [volume]
vxinfo -p -g diskgroup [volume]


Displaying Object States



If you do not specify the volume name on the command line for the vxprint or vxinfo commands, information on all the volumes within the specified disk group is displayed:
# vxinfo -p -g datadg vol01
vol  vol01      fsgen    Started
plex vol01-01   ACTIVE
plex vol01-02   ACTIVE

# vxprint -g datadg -ht vol01


V  NAME         RVG       KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX      VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

v  vol01        -         ENABLED  ACTIVE   204800   SELECT    -        fsgen
pl vol01-01     vol01     ENABLED  ACTIVE   205200   CONCAT    -        RW
sd datadg01-01  vol01-01  datadg01 0        205200   0         c1t1d0   ENA
pl vol01-02     vol01     ENABLED  ACTIVE   205200   CONCAT    -        RW
sd datadg02-01  vol01-02  datadg02 0        205200   0         c1t2d0   ENA


Plex States and Condition Flags

(State-transition diagram: when a volume is created, the volume and both plexes are DISABLED/EMPTY. vxvol init zero or vxvol init enable moves all objects to ENABLED/EMPTY. vxvol init clean P1 leaves P1 DISABLED/CLEAN and P2 DISABLED/STALE, after which vxvol start synchronizes the volume. vxvol init active moves everything directly to ENABLED/ACTIVE. vxvol stop returns the plexes to DISABLED/CLEAN, and vxvol start brings them back to ENABLED/ACTIVE. SNAPDONE is shown as a variant of the CLEAN and ACTIVE states.)

Interpreting Plex States


Plex States

EMPTY
When you create a volume, all of the plexes and the volume are set to the EMPTY state. This state indicates that you have not yet defined which plex has the good data (CLEAN) and which plex does not (STALE). You can only achieve the EMPTY state by creating a new volume by using vxmake, or by using related administrative commands.

CLEAN
The CLEAN state is normal and indicates that the plex has a copy of the data that represents the volume (in the earlier example, a VxFS file system). CLEAN also means that the volume is not started and is not currently able to handle I/O (by the administrator's control).

ACTIVE
The ACTIVE state is the same as CLEAN, but the volume is or was currently started, and the volume is or was able to perform I/O.


SNAPDONE
The SNAPDONE state is the same as ACTIVE or CLEAN, but a SNAPDONE plex has been synchronized with the volume as a result of a vxassist snapstart operation. After a reboot or a manual start of the volume, a plex in the SNAPDONE state is removed along with its subdisks.


Plex States and Condition Flags

(State-transition diagram: from ENABLED/ACTIVE, vxmend off P2 takes the plex to DISABLED/OFFLINE, and vxmend on P2 returns it as DISABLED/STALE; vxrecover then resynchronizes it back to ENABLED/ACTIVE. A system crash leaves the volume and both plexes DISABLED/ACTIVE; vxrecover -n -s restarts the volume as ENABLED/NEEDSYNC, and the volume state is SYNC during synchronization. SNAPATT is shown as a variant of STALE, and the TEMP states also appear on this diagram.)

STALE
The STALE state indicates that VxVM has reason to believe that the data in the plex is not synchronized with the data in the CLEAN plexes. This state is usually caused by taking the plex offline (I/O can still be going to the other plexes, making them unsynchronized) or by a disk failure, which means that the plex could not be updated when new writes came into the volume.

SNAPATT
The SNAPATT state indicates that the object is a snapshot that is currently being synchronized but does not yet have a complete copy of the data.

OFFLINE
The OFFLINE state indicates that the administrator has issued the vxmend off command on the plex. The plex does not participate in any I/O while it is offline, so its contents become outdated if the volume is actively written to. When the administrator brings the plex back online using the vxmend on command, the plex changes to the STALE state.

TEMP
The TEMP state flags (TEMP, TEMPRM, TEMPRMSD) usually indicate that the data was never a copy of the volume's data, and you should not use these plexes. These temporary states indicate that the plex is currently involved in a synchronization operation with the volume.


Plex States and Condition Flags

(State-transition diagram: from ENABLED/ACTIVE, an I/O failure on P2 due to bad blocks detaches the plex as DETACHED/IOFAIL, while a complete disk failure leaves it DISABLED/NODEVICE. vxdiskadm option 4, Remove a disk for replacement, produces DISABLED/REMOVED. vxdiskadm option 5, vxreattach, or vxdg -k adddisk brings the device back, leaving the plex DISABLED/IOFAIL until vxrecover resynchronizes it to ENABLED/ACTIVE. Note: If the volume is nonredundant at the time that you reattach the drive, the plex state changes from NODEVICE to RECOVER instead of IOFAIL.)

Condition Flags
If a plex is not synchronized with the volume, and VxVM has information about why it is not synchronized, then a condition flag is displayed. Multiple condition flags can be set on the same plex at the same time. Only the most informative flags are displayed in the state field of the vxprint output. For example, if a disk fails during an I/O operation, the NODEVICE, IOFAIL, and RECOVER flags are all set for the plex, but only the NODEVICE flag is displayed in the state field.

NODEVICE
NODEVICE indicates that the disk drive below the plex has failed.

REMOVED
REMOVED has the same meaning as NODEVICE, but the system administrator has requested that the device appear as if it has failed (for example, by using vxdiskadm option 4, Remove a disk for replacement).

IOFAIL
IOFAIL is similar to NODEVICE, but indicates that an unrecoverable failure occurred on the device, and VxVM has not yet verified whether the disk is actually bad. (I/O to both the public and the private regions must fail to change the state from IOFAIL to NODEVICE.)


RECOVER
The RECOVER flag is set on a plex when the following two conditions are met:
- A failed disk has been fixed (by using vxreattach or vxdiskadm option 5, Replace a failed or removed disk).
- The plex was in the ACTIVE state prior to the failure.
This flag indicates that even after fixing the volume, additional action may be required. The data may be lost and must be recovered from backup, or the administrator must verify that the data on the disk is current by using utilities provided by the application that uses that volume.


Volume States

EMPTY, CLEAN, ACTIVE: These volume states have the same meanings as they do for plexes.
SYNC: Plexes are involved in read-writeback or RAID-5 parity synchronization.
NEEDSYNC: This state is the same as SYNC, but the internal read thread has not been started.
NODEVICE: None of the plexes have currently accessible disk devices underneath the volume.

Interpreting Volume States


Volume States

EMPTY, CLEAN, and ACTIVE
The EMPTY, CLEAN, and ACTIVE volume states have the same meanings as they do for plexes.

SYNC
The SYNC volume state indicates that the plexes are involved in read-writeback or RAID-5 parity synchronization:
- Each time that a read occurs from a plex, it is written back to all the other plexes that are in the ACTIVE state.
- An internal read thread is started to read the entire volume (or, after a system crash, only the dirty regions if dirty region logging (DRL) is being used), forcing the data to be synchronized completely.
- On a RAID-5 volume, the presence of a RAID-5 log speeds up a SYNC operation.
- Starting an empty mirrored volume by using the vxvol start command places the volume in SYNC mode.

NEEDSYNC
The NEEDSYNC volume state is the same as SYNC, but the internal read thread has not been started. This state exists so that volumes that use the same disk are not synchronized at the same time, and head thrashing is avoided.


NODEVICE
The NODEVICE volume state indicates that none of the plexes have currently accessible disk devices underneath the volume.


Kernel States

Kernel states represent VxVM's ability to transfer I/O to the volume or plex.
ENABLED: The object can transfer both system I/O and user I/O.
DETACHED: The object can transfer system I/O, but not user I/O (maintenance mode).
DISABLED: No I/O can be transferred.

Interpreting Kernel States


Kernel States
Kernel states represent VxVM's ability to transfer I/O to the object:
- Volume kernel state: VxVM's ability to transfer I/O to the volume
- Plex kernel state: VxVM's ability to transfer I/O to the plex

ENABLED
The ENABLED kernel state indicates that the object is currently able to transfer system I/O to the private region and user I/O to the public region.

DETACHED
The DETACHED kernel state indicates that the object can currently transfer system I/O, but not user I/O. This state is also considered the maintenance mode, where internal plex operations and ioctl functions are accepted.

DISABLED
The DISABLED state is the offline state for the volume or the plex. When an object is in this state, no I/O is transferred.


Solving Plex Problems


Commands used to fix plex problems:
- vxrecover
- vxvol -f start
- vxmend fix
- vxmend off|on


Resolving Plex Problems


When resolving disk and plex problems, after you fix the underlying disk drives by using the disk commands, you must fix plex problems by using the following commands:
- vxrecover
- vxvol -f start
- vxmend fix
- vxmend off|on


The vxrecover Command


# vxrecover -s [volume]
- Recovers and resynchronizes all plexes in a started volume
- Runs vxvol start and vxplex att commands (and sometimes vxvol resync)
- Works in normal situations
- Resynchronizes all volumes that need recovery if a volume name is not included
Examples:
# vxrecover -s
# vxrecover -s vol01

The vxrecover Command
The vxrecover command recovers and resynchronizes all plexes in a started volume according to the volume's layout (striped, mirrored, RAID-5, layered, and so on). Volumes that contain a single plex (except for RAID-5) are not affected by vxrecover.

When vxrecover is executed, VxVM notes the state of the plexes in the volume. If both ACTIVE and STALE plexes exist, the ACTIVE plexes issue unconditional block writes over the STALE plexes. If there are only ACTIVE plexes, the read-writeback copy procedure is performed. Recovery is performed only on volumes that require recovery (such as volumes marked as dirty before a sudden system failure). During the recovery process, the volume remains online and started. When the synchronization process is complete, the volume and all of its plexes are ACTIVE and ENABLED.

The underlying commands issued by vxrecover are:
- vxvol start
- vxplex att
The vxrecover command also sometimes invokes vxvol resync.

Running vxrecover without specifying a volume name can cause a synchronization operation to be started in parallel on all volumes that need recovery. One synchronization operation runs on each drive (if necessary), and volumes on different drives are synchronized in parallel.


Synchronization can affect system performance if you have many volumes that need to be recovered. You may prefer to:
1 Start the volumes without recovery by using vxrecover -sn.
2 Recover individual volumes, or recover all of the volumes when I/O traffic is low, by using vxrecover. (See the sketch below.)
Note: As long as one CLEAN or ACTIVE, non-volatile plex (a plex with no flags set) is available inside a volume, you can start the volume using that plex. The administrator can recover any other plexes in the volume immediately, or defer recovery to a later time.
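A minimal sketch of this deferred-recovery pattern, using only the flags described above:

# vxrecover -sn
(all volumes that can start are now started, but remain unsynchronized)
# vxrecover
(run later, when I/O traffic is low, to resynchronize everything that needs recovery)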


The vxvol start Command


# vxvol -f start volume_name
- This command ignores problems with the volume and starts the volume.
- Only use this command on nonredundant volumes.
- If used on redundant volumes, data can be corrupted, unless all mirrors have the same data.
Example:
# vxvol -f start vol01


The vxvol start Command
You can use the vxvol start command to enable a volume that has been disabled or detached.
vxvol start volume

If a volume does not start with this command, it usually indicates that there is a problem with the underlying plexes.

Forcing a Volume to Start
If you add the -f flag, VxVM ignores the underlying problem and forces the volume to start:
vxvol -f start volume

When you force a volume to start:
- If all plexes have the same state, then read-writeback synchronization is performed.
- If the plexes do not have the same state, then atomic-copy resynchronization is performed.
Important: Force-starting a volume can have catastrophic results. Use extreme caution when force-starting a mirrored volume after a disk failure and replacement. Forcing a mirrored volume to start can unconditionally synchronize the volume using a read-writeback method of alternating between plex blocks. NULL plex blocks may overwrite good data in the volume, corrupting it. You should only perform a forced start on a nonredundant volume.


The vxmend Command


# vxmend fix option object
option: stale | clean | active | empty (empty is only used on a volume)
Examples:
# vxmend fix stale vol01-01
# vxmend fix clean vol01-01



The vxmend Command
To manually reset or change the state of a plex or volume, you can use the vxmend fix command. This command enables you to specify what the condition of the data in the plex is:
vxmend fix option object

You should use this command if you know more about a plex's data than VxVM does. You can only set plex states with vxmend fix when the plex's host volume is stopped.
Important: Use caution and discretion when issuing the vxmend fix command and its options. The vxmend fix command changes states that are normally set and cleared automatically by the vxconfigd daemon. If used incorrectly, this command can make the plex, its volume, and its data inaccessible, and you may have to restore the data from backup.


vxmend fix stale


vxmend fix stale plex
- This command changes a CLEAN or ACTIVE (RECOVER) state to STALE.
- The volume that the plex is associated with must be in DISABLED mode.
- Use this command as an intermediate step toward the final destination for the plex state.


vxmend fix stale


With a mirrored volume, you can set a plex to the STALE state to indicate that the data inside that plex is either bad or outdated:
vxmend fix stale plex

The STALE state implies that:
- Another plex (mirror) in the volume has good or updated data.
- Some type of failure occurred in which VxVM was not able to mark the plex with the proper state.
Caution: Ensure that the plex you mark as STALE has the bad or outdated data before you start its volume. The volume must be in the DISABLED kernel state (stopped) to use this command.
You typically set a plex to the STALE state prior to selecting one of the plexes to be put into a CLEAN state. Then, you start the volume by using the vxrecover -s or vxvol start commands. When the volume is started, VxVM automatically recovers the volume by overwriting the data in the STALE plex with the good or updated data in the ACTIVE or CLEAN plex. When this process is complete, the target plex is assigned an ACTIVE state.


vxmend fix clean


vxmend fix clean plex
This command changes a STALE plex to CLEAN. Only run this command if:
- The associated volume is in the DISABLED state.
- No other plex has a state of CLEAN.
- All of the plexes are in the STALE or OFFLINE states.
After you change the state of a plex to CLEAN, recover the volume by using:
# vxrecover -s

vxmend fix clean


After you determine which plex has the most recent copy of the data, you can change the plex that contains the good data from STALE to CLEAN by using the command:
vxmend fix clean plex

Only run this command if:
- The associated volume is in the DISABLED state.
- No other plex has a state of CLEAN.
- All of the plexes are in the STALE or OFFLINE states.
After you change the state of a plex to CLEAN, recover the volume by using vxrecover -s.


vxmend fix active


vxmend fix active plex
- This command changes a STALE plex to ACTIVE.
- The volume that the plex is associated with must be in DISABLED mode.
- When you run vxvol start:
  - ACTIVE plexes are synchronized (SYNC) together.
  - RECOVER plexes are set to STALE and are synchronized from the ACTIVE plexes.


vxmend fix active


You cannot set a plex to the CLEAN state if another CLEAN plex already exists in the same volume. So with a volume that has more than two mirrors, you cannot use CLEAN to specify that two plexes have recent data and should both be synchronized over the remaining STALE plex. However, you can set more than one plex in the same volume to the ACTIVE state:
vxmend fix active plex

The volume must be in the DISABLED kernel state (stopped) to use this command. By setting two plexes to ACTIVE, VxVM makes the data in both plexes available during and after volume recovery and overwrites any additional STALE plexes in the same volume. When you use this command, the volume changes to a state of NEEDSYNC to indicate that recovery is required. Before you start the volume, you should ensure that the plexes you mark as ACTIVE have the most recent or good data. When you start the volume, two separate synchronization operations are executed by vxconfigd:
- The two ACTIVE plexes are synchronized through read-writeback synchronization.
- One of these plexes is used to overwrite any STALE plexes remaining in the volume.
In the recovery of a mirrored volume, ACTIVE plexes are always synchronized first, and then any STALE plexes are synchronized. (A command sketch follows.)
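A minimal sketch of this technique on a stopped three-mirror volume, assuming vol01-01 and vol01-02 are known to hold recent data and the third plex is STALE:

# vxmend fix active vol01-01
# vxmend fix active vol01-02
# vxrecover -s vol01
(vol01-01 and vol01-02 are synchronized together; the remaining STALE plex is then overwritten)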


Note: If you are not an expert in this product and command, it is recommended that you use the safer alternative method:
1 Change the state of all plexes to STALE.
2 Change the state of the plex that contains the good data to CLEAN.
3 Start the volume.


vxmend fix empty


vxmend fix empty volume
- Sets all plexes and the volume to the EMPTY state
- Requires the volume to be in DISABLED mode
- Runs on the volume, not on a plex
- Returns to the same state as bottom-up creation


vxmend fix empty


You may need to run many vxmend fix commands to change plex states to produce a volume that is startable. At times, you may prefer to set the volume to appear as if it is a newly created volume. To set a volume to appear as if it is a newly created volume, you can set all of the plexes in a volume to the EMPTY state:
vxmend fix empty volume

This command runs on the volume, not on a plex. The volume must be in the DISABLED state to use this command. After running this command, you:
1 Set the plex that should have the clean data by using the vxvol init command.
2 Start the volume by using the vxvol start command. VxVM synchronizes all of the other plexes with the plex that you marked CLEAN. (See the sketch below.)
Note: Only use vxmend fix empty when you do not know which plex in a mirrored volume has data in better condition than the other, and when you prefer to set all plexes back to EMPTY rather than create the volume again.
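A minimal sketch of this sequence, assuming the stopped volume vol01 and that vol01-01 is later determined to hold the good data:

# vxmend fix empty vol01
# vxvol init clean vol01 vol01-01
# vxvol start vol01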


vxmend off|on
When analyzing plexes, you can temporarily offline plexes while validating the data in another plex.
To take a plex offline, use the command:
vxmend off plex
To take the plex out of the offline state, use:
vxmend on plex


vxmend off|on
When analyzing plex problems to determine which plex has the correct data, you may need to take some plexes offline temporarily while you are testing a particular plex. To place a plex into an offline state, you use the command:
vxmend off plex

To take a plex or volume out of the offline state, you use the command:
vxmend on plex


Fixing Layered Volumes


For layered volumes, vxmend functions the same as with nonlayered volumes. However, when starting the volume, you should use vxrecover -s:
- vxrecover -s starts both the top-level volume and the subvolumes.
- vxvol start starts only the top-level volume, and not the subvolumes.


Fixing Layered Volumes
vxmend commands function the same with layered volumes as with nonlayered volumes. However, for layered volumes, you should start the volume by using the vxrecover -s command rather than vxvol start:
- vxvol start starts only the top-level volume, and not the subvolumes.
- vxrecover -s starts both the top-level volume and the subvolumes.


If the Good Plex Is Known


If you know which plex has the good data:
1 Set all plexes to STALE.
2 Set the good plex to CLEAN.
3 Run vxrecover -s.


Analyzing Plex Problems


If the Good Plex Is Known
If you have a CLEAN plex and a STALE plex inside a mirrored volume, the plex that has the most recent data is usually the CLEAN plex. If you know which plex has the good data:
1 Set all the plexes to STALE (or the volume and the plexes to EMPTY).
2 Set the good plex to CLEAN.
3 Run vxrecover -s.


If the Good Plex Is Known: Example

Example:
- For plex vol01-01, the disk was turned off and back on and still has data.
- Plex vol01-02 has been offline for several hours.
State summary:
vol01:    DISABLED/ACTIVE
vol01-01: DISABLED/RECOVER
vol01-02: DISABLED/STALE

If the Good Plex Is Known: Example
In the example, you can make the following observations:
- The drive failure was temporary, so the data is still on the drive.
- Because the state of the plex vol01-01 is RECOVER, this plex was in the ACTIVE state prior to the failure.
- Because the state of the plex vol01-02 is STALE, vol01-01 was the plex with the good data prior to the failure. Since the failure, no I/O could have occurred.
You can conclude that plex vol01-01 has the good data. To resolve this problem, you run the following sequence of commands:
# vxmend fix stale vol01-01
# vxmend fix clean vol01-01
# vxrecover -s vol01


If the Good Plex Is Not Known


If you do not know which plex has the good data:
1 Set all plexes to STALE.
2 Offline all but one plex.
3 Set one plex to CLEAN.
4 Run vxrecover -s.
5 Verify data on the volume.
6 Run vxvol stop.
7 Repeat for each plex until you identify the plex with the good data.


If the Good Plex Is Not Known
What if both plexes are in the STALE state? Regardless of what happened to the plexes or the disks underneath, it is not safe to guess which plex has the more recent (or good) data and start the volume. If you are not sure which plex has good data, the safest solution is to test each plex one by one:
1 Set all plexes to STALE.
2 Offline all but one plex.
3 Set one plex to CLEAN.
4 Run vxrecover -s.
5 Verify data on the volume.
6 Run vxvol stop.
7 Repeat for each plex until you identify the plex with the good data.
This process requires step-by-step attention to all volume and plex object details. Use vxprint -ht to monitor any volume and plex state changes that occur as a result of your vxmend commands.
If you have no method to test the validity of the data, you must restore the data from backup. For example, if your application starts, can you guarantee that the data it contains is the most recent? With a file system, is fsck enough to guarantee that the data in a file is there? Even if you can mount the file system, you can lose the data in some files in the process.


If the Good Plex Is Not Known: Example

Example:
- The volume is disabled and not startable, and you do not know what happened.
- There are no plexes in the CLEAN state.
State summary:
vol01:    DISABLED/CLEAN
vol01-01: DISABLED/STALE
vol01-02: DISABLED/STALE

If the Good Plex Is Not Known: Example
In the example, you can resolve the problem by using the following commands:
# vxmend off vol01-02
# vxmend fix clean vol01-01
Verify that data is on the plex by using the volume:
# vxrecover -s vol01
# vxvol stop vol01
# vxmend -o force off vol01-01    (last clean plex in the volume)
# vxmend on vol01-02
# vxmend fix clean vol01-02
Verify that data is on the plex by using the volume:
# vxrecover -s vol01
If the current plex (vol01-02) has the correct data:
# vxmend on vol01-01
# vxrecover vol01
If vol01-01 had the correct data:
# vxvol stop vol01
# vxmend fix stale vol01-02
# vxmend on vol01-01
# vxmend fix clean vol01-01
# vxrecover -s vol01


Summary
You should now be able to:
- Display state information for VxVM objects.
- Interpret plex states and condition flags.
- Interpret volume states.
- Interpret kernel states.
- Fix plex and volume failures by using VxVM tools.
- Resolve data consistency problems by analyzing and changing the plex and volume states.


Summary
This lesson introduced the various states in which Volume Manager (VxVM) objects, such as volumes and plexes, can exist. This lesson also described the tools that you can use to solve problems related to data consistency by analyzing and changing these states.

Next Steps
You have learned basic recovery techniques and how to apply those techniques to resolve data disk failures and plex problems. The next lesson describes how to protect your root disk through disk encapsulation and root disk mirroring.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VxVM.
- VERITAS Volume Manager Troubleshooting Guide: This guide provides information about how to recover from hardware failure and how to understand and deal with VxVM error messages.
- VERITAS Volume Manager User's Guide: VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.


Lab 16
Lab 16: Plex Problems and Solutions
This lab simulates disk failure scenarios. By using the vxmend command, you select the plex that has the correct data and recover the volumes by using the clean plex. If you select the wrong plex as the clean plex, the script states that you corrupted your data. Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 16: Plex Problems and Solutions


Goal
This lab simulates disk failure scenarios. By using the vxmend command, you select the plex that has the correct data and recover the volumes by using the clean plex. If you select the wrong plex as the clean plex, the script states that you corrupted your data.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Review Answers and Lab Solutions.


17

Encapsulation and Boot Disk Mirroring

Overview
(Course roadmap slide: this lesson corresponds to Boot Disk Mirroring in the Recovery and Troubleshooting portion of the course, between Plex Problems and Boot Disk Recovery.)

Introduction
Overview
This lesson describes the disk encapsulation process and how to encapsulate the root disk on your system. Methods for creating an alternate boot disk and unencapsulating a root disk are covered. Boot disk encapsulation also affects the method that you use to upgrade VxVM software. The last topics in this lesson describe methods for upgrading to new versions of VxVM and VxFS.

Importance
Disk encapsulation enables you to preserve data on a disk when you place the disk under VxVM control. Encapsulation can be used to create alternate boot disks.

Outline of Topics
- What Is Disk Encapsulation?
- Encapsulating the Root Disk
- Viewing Encapsulated Disks
- Creating an Alternate Boot Disk
- Unencapsulating a Root Disk
- Upgrading to a New VxVM Version
- Upgrading to a New VxFS Version


Objectives
After completing this lesson, you will be able to:
- Identify the benefits of disk encapsulation.
- Encapsulate the root disk.
- View encapsulated disks.
- Create an alternate boot disk.
- Unencapsulate a root disk.
- Upgrade to a new VxVM or Solaris version.
- Upgrade to a new VxFS version.


Objectives
After completing this lesson, you will be able to:
- Identify the benefits of disk encapsulation.
- Encapsulate the root disk.
- View encapsulated disks.
- Create an alternate boot disk.
- Unencapsulate a root disk.
- Upgrade to a new VxVM or Solaris version.
- Upgrade to a new VxFS version.


What Is Encapsulation?
Encapsulation is the process of converting partitions into volumes. If a system has three partitions on the disk drive, there will be three volumes in the disk group.
(Slide graphic: a Solaris disk, c0t1d4, whose VTOC contains a backup slice 2 plus partitions for /home (slice 5), /eng (slice 6), and /dist (slice 7); after encapsulation, the used partitions become volumes, and the remaining slots are shown as unused space and available partitions.)

What Is Disk Encapsulation?


Disk Encapsulation
Encapsulation is a method of placing a disk under VxVM control in which the data that exists on the disk is preserved. Encapsulation converts existing partitions into volumes, which provides continued access to the data on the disk after a reboot. After a disk has been encapsulated, the disk is handled in the same way as an initialized disk.

For example, suppose that a system has three partitions on the disk drive. When you encapsulate the disk to bring it under VxVM control, there will be three volumes in the disk group.

On a Solaris system, VxVM uses the volume table of contents (VTOC) to determine the disk size (partition 2), then creates two partitions on the physical disk:
- One partition contains the private region. The private region stores VxVM information, such as disk headers, configuration copies, and kernel logs. Tag 15 is always associated with the private region. When a disk is encapsulated, tag 15 is always associated with a slice other than slice 3.
- The other partition contains the public region. The public region is used for storage space allocation and is always associated with tag 14. (An illustrative VTOC listing follows.)
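You can confirm these tag assignments with the standard Solaris prtvtoc command. The excerpt below is an illustrative sketch only; the slice numbers, sector counts, and device name are invented for the example:

# prtvtoc /dev/rdsk/c0t1d4s2
* Partition  Tag  Flags  First Sector  Sector Count  Last Sector
       2      5    01               0      17682084     17682083   (backup: whole disk)
       3     14    01            2048      17680036     17682083   (VxVM public region)
       4     15    01               0          2048         2047   (VxVM private region)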


What Is Root Encapsulation?

Root encapsulation places the root disk under VxVM control and creates volumes equivalent to the root disk partitions. On the encapsulated root disk, the partitions (/, /usr, /var, swap) are converted to subdisks that are used to create the volumes (rootvol, usr, var, swapvol) that replace the Solaris partitions, alongside the private region.
- /etc/system is updated to force booting on the root volume.
- /etc/vfstab is updated to mount volumes.

What Is Root Encapsulation?
Root encapsulation is the process by which VxVM converts existing partitions of the root disk into VxVM volumes. After you encapsulate the root disk, the system mounts the standard root disk file systems (that is, /, /usr, and so on) from volumes instead of disk partitions. When the root disk is encapsulated, VxVM:
- Places directives into the /etc/system file to force booting on the root volume
- Updates the /etc/vfstab file to mount volumes instead of partitions
(A sketch of these changes follows.)
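As a sketch of what these updates typically look like (the device name is illustrative; the volume paths are the standard boot paths for an encapsulated root):

Lines added to /etc/system:
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

The root entry in /etc/vfstab changes from the partition:
/dev/dsk/c0t0d0s0    /dev/rdsk/c0t0d0s0    /   ufs   1   no   -
to the volume:
/dev/vx/dsk/rootvol  /dev/vx/rdsk/rootvol  /   ufs   1   no   -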


Why Encapsulate Root?


By encapsulating root, you can mirror root. Benefits of mirroring root:
- Enables high availability
- Fixes bad blocks automatically (for reads)
- Improves performance


Why Encapsulate Root?
It is highly recommended that you encapsulate and mirror the root disk. Some of the benefits of encapsulating and mirroring root include:
- High availability: Encapsulating and mirroring root sets up a high availability environment for the root disk. If the boot disk is lost, the system continues to operate on the mirror disk.
- Bad block revectoring: If the boot disk has bad blocks, then VxVM reads the block from the other disk and copies it back to the bad block to fix it. SCSI drives automatically fix bad blocks on writes, which is called bad block revectoring.
- Improved performance: By adding additional mirrors with different volume layouts, you can achieve better performance. Mirroring alone can also improve performance if the root volumes perform more reads than writes, which is the case on many systems.


When Not to Encapsulate Root


Encapsulating root increases the complexity of Solaris upgrades. Do not encapsulate root:
- If you do not plan to mirror root
- If you do not need a high availability environment
Limitations of encapsulating root:
- A system cannot boot from root that spans multiple devices.
- VxVM RAID-5 cannot be used for system volumes.
- You should never grow or change the layout of root volumes (rootvol, usr, var, opt, swapvol, and so on). These volumes map to a physical underlying partition on disk and must be contiguous.

When Not to Encapsulate Root
If you do not plan to mirror root, then you should not encapsulate it. Encapsulation adds a level of complexity to system administration, which increases the complexity of upgrading the Solaris operating system.

Limitations of Root Disk Encapsulation

Limitations in volume layout:
- A system cannot boot from root that spans multiple devices.
- VxVM RAID-5 cannot be used for system volumes.

Limitations in volume resizing:
You should never expand or change the layout of root volumes. No volume associated with an encapsulated boot disk (rootvol, usr, var, opt, swapvol, and so on) should be expanded or shrunk, because these volumes map to a physical underlying partition on the disk and must be contiguous. If you attempt to expand these volumes, the system can become unbootable if it becomes necessary to revert back to slices in order to boot the system. Expanding these volumes can also prevent a successful Solaris upgrade, and a fresh install can be required. Additionally, the upgrade_start script (used in upgrading VxVM to a new version) might fail.
Note: You can add a mirror of a different layout, but the mirror is not bootable.


File System Requirements


For root, usr, var, and opt volumes:
- Use UFS file systems. (VxFS is not available until later in the boot process.)
- Use contiguous disk space. (Volumes cannot use striped, concatenated pro, or striped pro volume layouts.)
- Do not use dirty region logging for root or usr. (You can use DRL for the opt and var volumes.)
For swap volumes:
- The first swap volume must be contiguous and, therefore, cannot use striped or layered layouts.
- Other swap volumes can be noncontiguous and can use any layout. However, there is an implied 2-GB limit of usable swap space per device for 32-bit operating systems.

File System Requirements for Root Volumes
To boot from volumes, you should follow these requirements and recommendations for the file systems on root volumes.

Root, usr, var, and opt Volumes
For the root, usr, var, and opt volumes:
- Use UFS file systems: You must use UFS file systems for these volumes, because the VERITAS File System (VxFS) package is not available until later in the boot process, when the scripts in /etc/rc2.d (multiuser mode) are executed.
- Use contiguous disk space: These volumes must be located in a contiguous area on disk, as required by the operating system. For this reason, these volumes cannot use striped, concatenated pro, or striped pro volume layouts.
- Do not use dirty region logging for root or usr: You cannot use dirty region logging (DRL) on the root and usr volumes. If you attempt to add a dirty region log to the root and usr volumes, you receive an error.
Note: The opt and var volumes can use dirty region logging.


Solaris Swap Space Considerations
Solaris requires an area of contiguous disk space to be provided on the root disk for swap usage. The first swap volume (as listed in the /etc/vfstab file) must be contiguous and, therefore, cannot use striped or layered layouts. Additional swap volumes can be noncontiguous and can use any layout.
Note: You can add noncontiguous swap space through Volume Manager. However, Solaris automatically uses swap devices in a round-robin method, which may reduce the expected performance benefits of adding striped swap volumes.
For 32-bit operating systems, usable space per swap device is limited to 2 GB. For 64-bit operating systems, this limit is much higher (up to 2^63 - 1 bytes).
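For example, a second, noncontiguous swap volume can be added through Volume Manager and then registered with the Solaris swap command. This is a minimal sketch; the volume name, size, and device path are hypothetical:
# vxassist -g rootdg make swapvol2 1g
# swap -a /dev/vx/dsk/rootdg/swapvol2
# swap -l
The swap -l command lists the configured swap devices so that you can confirm the addition.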


Encapsulation Requirements
Two available partition table entries:
One for the public region
One for the private region
An s2 slice that represents the full disk
2048 contiguous sectors of unpartitioned free space at the beginning or end of the disk for the private region:
The private region is created as a slice from unused space at the beginning or end of the disk. VxVM uses 2048 sectors by default.
Root disk only: When a root disk is encapsulated, if no free space is available, then the private region is created from swap space.

Encapsulation Requirements
Data Disk Encapsulation Requirements
Encapsulating a disk has these requirements:
At least two partition table entries must be available on the disk. One partition is used for the public region, and one partition is used for the private region.
The disk must contain an s2 slice that represents the full disk. (The s2 slice cannot contain a file system.)
2048 sectors of unpartitioned free space, rounded up to the nearest cylinder boundary, must be available, either at the beginning or at the end of the disk.
Encapsulation cannot occur if these requirements are not met.

Boot Disk Encapsulation Requirements
Boot disk encapsulation has the same requirements as data disk encapsulation, with one important distinction: when encapsulating the root disk, the private region can be created from the swap area, which reduces the swap area by the size of the private region. The private region is created at the beginning of the swap area, and the swap partition begins one cylinder from its original location. When creating new boot disks, you should start the partitions on the new boot disks on the next cylinder beyond the 2048-sector default used for the private region.
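To verify these requirements before encapsulating, you can inspect the partition table of the candidate disk. This is an illustrative check; the device name is hypothetical:
# prtvtoc /dev/rdsk/c1t1d0s2
Confirm that the s2 slice covers the full disk, that at least two slice numbers are unassigned, and that at least 2048 sectors at the beginning or end of the disk do not belong to any slice.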


Encapsulating the Root Disk

Encapsulating Root: VEA

Highlight a disk and select Actions>Add Disk to Dynamic Disk Group.
Specify the rootdg disk group.
Specify the disk to add.
When prompted, select Encapsulate and reboot now.
Encapsulating Root: VEA
When you place an uninitialized disk under VxVM control by adding the disk to a disk group, you are prompted to either initialize or encapsulate the disk. To encapsulate a root disk:
1 Follow the procedure for adding an uninitialized disk to a disk group: highlight an uninitialized disk and select Actions>Add Disk to Dynamic Disk Group.
2 Specify rootdg in the Dynamic disk group name field, select the disk to encapsulate, and click Next.
3 When prompted, specify that you want to encapsulate the disk and reboot the system.


Encapsulating Root: vxdiskadm


At the vxdiskadm main menu, select option 2:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 1  Add or initialize one or more disks
 2  Encapsulate one or more disks
 3  Remove a disk
 . . .

Follow the prompts by specifying:
Name of the device to add (for example, c0t0d0)
Name of the disk group to which the disk will be added (rootdg)

Encapsulating Root: vxdiskadm
To encapsulate the root disk using the vxdiskadm interface:
1 Invoke the vxdiskadm main menu and select option 2, Encapsulate one or more disks.
2 When prompted, type the disk device name for the disks to be encapsulated:
Select disk devices to encapsulate: [<pattern-list>,all,list,q,?] c0t0d0
If you do not know the device name of the disk to be encapsulated, type list at the prompt for a complete listing of available disks.
3 To add the disk to the rootdg disk group, press Return at the following prompt:
Which disk group [<group>,list,q,?] (default: rootdg)
4 When prompted, confirm that you want to encapsulate the disk.
5 A message confirms that the disk is being encapsulated and states that you should reboot your system at the earliest possible opportunity.
6 At the following prompt, indicate whether you want to encapsulate more disks (y) or return to the vxdiskadm main menu (n):
Encapsulate other disks? [y,n,q,?] (default: n) n
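After the reboot, you can confirm the result from the command line. The output below is illustrative, and the device name is hypothetical:
# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk     rootdg       online
An encapsulated disk is listed with the sliced type, and vxprint -ht displays the rootvol, swapvol, and other root volumes.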


VTOC: Before/After Root Encapsulation


Before:
# prtvtoc /dev/rdsk/c0t0d0s2
. . .
*                           First     Sector      Last
* Partition  Tag  Flags    Sector      Count    Sector  Mount...
      0       2    00           0    4916016   4916015  /
      1       3    01     4916016    2048256   6964271
      2       5    00           0   17801280  17801279
      6       4    00     6964272    4301136  11265407  /usr
      7       7    00    11265408    4505760  15771167  /var

After:
# prtvtoc /dev/rdsk/c0t0d0s2
. . .
*                           First     Sector      Last
* Partition  Tag  Flags    Sector      Count    Sector  Mount...
      0       2    00           0    4916016   4916015
      1       3    01     4916016    2048256   6964271
      2       5    00           0   17801280  17801279
      3      14    01           0   17801280  17801279
      4      15    01        3024   17798256  17801279
      6       4    00     6964272    4301136  11265407
      7       7    00    11265408    4505760  15771167

Viewing Encapsulated Disks


Review: Viewing Disk Information
To view information about encapsulated disks, you use the same methods as for viewing initialized disks:
VEA: Actions>Properties
vxdiskadm: The list option
CLI: vxdisk list and prtvtoc

VTOC: Before and After Encapsulating Root Disk
The example displays the output of the prtvtoc command before and after encapsulating the root disk. After encapsulating the root disk:
Tag 14 is used for the public region.
Tag 15 is used for the private region.
Note: The partitions for the root, swap, usr, and var partitions are still on the disk, unlike on data disks where all partitions are removed. The root disk is a special case, and the partitions are kept to make upgrading easier.


VTOC: Before/After Data Disk Encapsulation


Before:
# prtvtoc /dev/rdsk/c1t0d0s2
. . .
*                           First    Sector     Last
* Partition  Tag  Flags    Sector     Count   Sector  Mount...
      0       0    00          0     205200   205199  /home
      2       5    00          0    8380800  8380799
      5       0    00     468720     205200   673919  /home2

After:
# prtvtoc /dev/rdsk/c1t0d0s2
. . .
*                           First    Sector     Last
* Partition  Tag  Flags    Sector     Count   Sector  Mount...
      2       5    00          0    8380800  8380799
      3      14    01          0    8380800  8380799
      4      15    01    8378640       3591  8380799

VTOC: Before and After Data Disk Encapsulation
The example displays the output of the prtvtoc command before and after encapsulating a data disk. After encapsulating a data disk:
Tag 14 is used for the public region.
Tag 15 is used for the private region.
Notice that in this example the private region is placed at the end of the disk. This placement occurred because a partition existed on the first sectors of the drive. Even though the physical partitions are removed in the process of data disk encapsulation, the original partition table is preserved at:
/etc/vx/reconfig.d/disk.d/device

The files in this directory are used to reset the VTOC in case the disk ever needs to be accessed outside of VxVM.
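For example, to examine the preserved partition information for a disk (the device directory name is hypothetical):
# ls /etc/vx/reconfig.d/disk.d/c1t0d0
The saved files in this directory record the original VTOC so that it can be restored if the disk must be used outside of VxVM.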


The /etc/system File


The following two lines are added to /etc/system as part of the root encapsulation process:
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1


/etc/system
As part of the root encapsulation process, the /etc/system file is updated to include information that tells VxVM to boot up on the encapsulated volumes. The following two lines are added to the /etc/system file:
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
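You can verify that the entries are present with a simple search:
# egrep 'rootdev|vol_rootdev' /etc/system
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
As a common recovery technique, if a damaged /etc/system ever prevents the system from booting, you can boot with boot -a and supply /dev/null when prompted for the name of the system file, so that the kernel starts without these entries.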


vfstab: Before Root Encapsulation


#device              device               mount    FS     fsck   mount
#to mount            to fsck              point    type   pass   at boot
#
fd                   -                    /dev/fd  fd     -      no
/proc                -                    /proc    proc   -      no
/dev/dsk/c0t3d0s1    -                    -        swap   -      no
/dev/dsk/c0t3d0s0    /dev/rdsk/c0t3d0s0   /        ufs    1      no
/dev/dsk/c0t3d0s6    /dev/rdsk/c0t3d0s6   /usr     ufs    1      no
/dev/dsk/c0t3d0s7    /dev/rdsk/c0t3d0s7   /var     ufs    1      no
/dev/dsk/c0t3d0s5    /dev/rdsk/c0t3d0s5   /opt     ufs    2      yes
swap                 -                    /tmp     tmpfs  -      yes

/etc/vfstab: Before Root Encapsulation


When you encapsulate the root disk, VxVM updates the /etc/vfstab file to mount volumes instead of partitions. The example displays the /etc/vfstab file before root encapsulation.


vfstab: After Root Encapsulation


#device                device                 mount    FS     fsck   mount
#to mount              to fsck                point    type   pass   at boot
#
fd                     -                      /dev/fd  fd     -      no
/proc                  -                      /proc    proc   -      no
/dev/vx/dsk/swapvol    -                      -        swap   -      no
/dev/vx/dsk/rootvol    /dev/vx/rdsk/rootvol   /        ufs    1      no
/dev/vx/dsk/usr        /dev/vx/rdsk/usr       /usr     ufs    1      no
/dev/vx/dsk/var        /dev/vx/rdsk/var       /var     ufs    1      no
/dev/vx/dsk/opt        /dev/vx/rdsk/opt       /opt     ufs    2      yes
swap                   -                      /tmp     tmpfs  -      yes
#NOTE: volume rootvol (/) encapsulated partition c0t3d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t3d0s1
#NOTE: volume opt (/opt) encapsulated partition c0t3d0s5
#NOTE: volume usr (/usr) encapsulated partition c0t3d0s6
#NOTE: volume var (/var) encapsulated partition c0t3d0s7

/etc/vfstab: After Root Encapsulation


The example displays the /etc/vfstab file after root encapsulation.
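You can also confirm that the root file system is now mounted on a volume rather than a partition. This output is illustrative; the sizes shown are hypothetical:
# df -k /
Filesystem            kbytes     used    avail  capacity  Mounted on
/dev/vx/dsk/rootvol  2458009  1003107  1405742     42%    /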


Creating an Alternate Boot Disk


An alternate boot disk is a mirror of the entire root disk. An alternate boot disk preserves the boot block in case the initial boot disk fails. Creating an alternate boot disk requires:
The boot disk must be encapsulated by VxVM.
Another disk must be available with enough space to contain all of the root partitions.
All disks must be in the rootdg disk group.
The root mirror places the private region at the beginning of the disk. The remaining partitions are placed after the private region. You can add additional mirrors, as needed.

Creating an Alternate Boot Disk


Mirroring the Root Disk
To protect against boot disk failure, you can create an alternate boot disk. An alternate boot disk is a mirror of the entire root disk. You can use the alternate boot disk to boot the system if the primary boot disk fails.

Requirements for Mirroring the Root Disk
The boot disk must be encapsulated by VxVM in order to be mirrored.
To mirror the root disk, you must provide another disk with enough space to contain all of the root partitions (/, /usr, /var, /opt, and swap).
You can only use disks in the rootdg disk group for the boot disk and alternate boot disks.
The root mirror places the private region at the beginning of the disk, and the remaining partitions are placed after the private region. Each disk contains all of the data, but the data is not necessarily placed at the exact same location on each disk.
You can add additional mirrors to increase redundancy, as needed. All bootable mirrors must follow the same rules as the initial mirror.
Note: Whenever you create an alternate boot disk, you should always verify that the root mirror is bootable.
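Before mirroring, you can confirm that a suitable target disk exists in rootdg. The output below is illustrative, and the disk names are hypothetical:
# vxdg -g rootdg free
DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
disk01       c0t1d0s2     c0t1d0       0         17678493  -
The free space listed for the target disk must be at least as large as the combined root partitions.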


Alternate Boot Disk Usage


[Diagram: The original boot disk (rootdisk, with subdisks rootdisk-01 through rootdisk-05) is mirrored to an alternate boot disk (disk01, with subdisks disk01-01 through disk01-05). The mirror is used when the root volume is stale, the VxVM header contains errors, or the boot disk suffers a hardware failure.]

Why Create an Alternate Boot Disk?
Creating a mirror of a system boot disk makes the system less vulnerable to failure. If one disk fails, the system can function with the mirror. An alternate boot disk is used if the root disk becomes unbootable due to:
A stale root volume
Errors in VxVM header information
Hardware failure on the boot disk


Boot Disk Error Messages


Stale root volume:
vxvm: vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable

Failed startup:
vxvm: vxconfigd: Error: System startup failed

Root plex not valid:
vxvm: vxconfigd: Error: System boot disk does not have a valid root plex
Please boot from one of the following disks:
Disk: disk01  Device: c0t1d0s0
(Alternate boot disks are listed.)

Possible Boot Disk Errors
If you encounter a problem with the rootvol plex of the primary boot disk, you may see one of the following error messages:

Root plex is stale or unusable:
vxvm:vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable

System startup failed:
vxvm:vxconfigd: ERROR: System startup failed

System boot disk does not have a valid root plex:
vxvm:vxconfigd: ERROR: System boot disk does not have a valid root plex
Please boot from one of the following disks:
Disk: diskname  Device: device ...

In the third message, alternate boot disks containing valid root mirrors are listed as part of the error message. Try to boot from one of the disks named in the error message. You may be able to boot using a device alias for one of the named disks. For example, use this command:
ok> boot vx-diskname


Booting from an Alternate Mirror


To boot the system using an alternate boot disk after failure of the primary boot disk:
1. Set the eeprom variable use-nvramrc? to true:
   ok> setenv use-nvramrc? true
   ok> reset
   This variable must be set to true to enable the use of alternate boot disks.
2. Check for available boot disk aliases:
   ok> devalias
   Output displays the name of the root disk (vx-rootdisk) and available root mirrors (vx-diskname).
3. Boot from an available boot disk alias:
   ok> boot vx-diskname

Booting from an Alternate Mirror
If the root disk is encapsulated and mirrored, you can use one of its mirrors to boot the system if the primary boot disk fails. To boot the system after failure of the primary boot disk on a SPARC system:
1 Check to ensure that the eeprom variable use-nvramrc? is set to true:
ok> printenv use-nvramrc?
This variable must be set to true to enable the use of alternate boot disks. To set the value of use-nvramrc? to true:
ok> setenv use-nvramrc? true
ok> reset
2 Check for available boot disk aliases:
ok> devalias
The devalias command displays the names of the root disk and root mirrors. For example:
vx-rootdisk
vx-diskname
Mirrors of the root disk are listed in the form vx-diskname.
3 Boot from an available boot disk alias:
ok> boot vx-diskname


Notes on Booting from an Alternate Mirror
You should always test the alternate boot mirror immediately after creating it to ensure that it is bootable before you experience a boot disk failure.

If use-nvramrc? is set to false, the system fails to boot from the devalias and displays an error message such as the following:
Rebooting with command: boot vx-mirdisk
Boot device: /pci@1f,4000/scsi@3/disk@0,0 File and args:vx-mirdisk
boot: cannot open vx-mirdisk
Enter filename [vx-mirdisk]:

If a selected disk contains a root mirror that is stale, vxconfigd displays an error stating that the mirror is unusable and lists any nonstale alternate bootable disks.

To check the output of devalias at a terminal window, use the command:
# eeprom | grep devalias
To set use-nvramrc? to true at a terminal window, use the command:
# eeprom use-nvramrc?=true
If the system is already up and running and use-nvramrc? is set to true, you can define an alternate boot disk using the command:
# eeprom nvramrc=devalias vx-diskname

If vx-diskname does not appear in the devalias output, then you may need to run vxbootsetup from the UNIX prompt:
# /etc/vx/bin/vxbootsetup [medianame...]
The vxbootsetup utility configures physical disks so that they can be used to boot the system. Before vxbootsetup is called to configure a disk, mirrors of the root, swap, /usr, and /var volumes (if they exist) should be created on the disk. These mirrors should be restricted mirrors of the volume. The vxbootsetup utility configures a disk by writing a boot track at the beginning of the disk and by creating physical disk partitions in the UNIX VTOC that match the mirrors of the root, swap, /usr, and /var volumes. With no medianame arguments, all disks that contain usable mirrors of the root, swap, /usr, and /var volumes are configured to be bootable. If medianame arguments are specified, only the named disks are configured.

vxbootsetup requires that the root volume is named rootvol and has a usage type of root. The swap volume is required to be named swapvol and to have a usage type of swap. The volumes containing /usr and /var (if any) are expected to be named usr and var, respectively. This utility also invokes vxeeprom, which populates the nvramrc table at PROM level, so that you can boot by the vx-diskname name. You must run this command if the devalias is not set up. Running the command more than once does not harm the system.


Alternate Boot Disk: VEA


[Screen shot: The VEA main window showing the boot disk and the alternate boot disk in the rootdg disk group.]

Creating an Alternate Boot Disk: VEA
To create an alternate boot disk in VEA, mirror the entire root disk as follows:
1 Add a disk to rootdg by using Actions>Add Disk to Dynamic Disk Group.
2 In the main window, highlight the boot disk (rootdisk) in the rootdg disk group, and then select Actions>Mirror Disk.
3 In the Mirror Disk dialog box, verify the name of the root disk, and specify the target disk to use as the alternate boot disk.
4 Click Yes in the Mirror Disk dialog box to complete the mirroring process.
5 After the root mirror is created, verify that the root mirror is bootable.


Alternate Boot Disk: vxdiskadm


At the vxdiskadm main menu, select option 6:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 . . .
 5  Replace a failed or removed disk
 6  Mirror volumes on a disk
 7  Move volumes from a disk
 . . .

Follow the prompts and specify:
Name of the disk containing the volumes to be mirrored (for example, rootdisk)
Name of the destination disk (for example, disk01)

Creating an Alternate Boot Disk: vxdiskadm
1 In the vxdiskadm main menu, select option 6, Mirror volumes on a disk.
2 When prompted, supply the disk media name for the disk containing the volumes to be mirrored:
Enter disk name [<disk>,list,q,?] rootdisk
3 If necessary, select a specific disk for the mirror:
Enter destination disk [<disk>,list,q,?] (default: any) disk01
4 A summary of the action is displayed, and you are prompted to confirm the operation:
The requested operation is to mirror all volumes on disk rootdisk in disk group rootdg onto available disk space on disk disk01.
NOTE: This operation can take a long time to complete.
Continue with operation? [y,n,q,?] (default: y) y
Mirror volume opt ...
Mirror volume rootvol ...
5 After the root mirror is created, verify that the root mirror is bootable prior to using it.


Alternate Boot Disk: CLI


To mirror the root volume only:


# /etc/vx/bin/vxrootmir alternate_disk

vxrootmir mirrors the root volume and installs the boot block needed to boot the system. To mirror all other unmirrored, concatenated volumes on the boot disk to the alternate disk:
# /etc/vx/bin/vxmirror boot_disk alternate_disk

Other volumes can also be mirrored to the alternate boot disk, or to other disks, by using:
# vxassist mirror volume_name alternate_disk

Creating an Alternate Boot Disk: CLI
To mirror your boot disk from the command line:
1 Select a disk that is at least as large as your boot disk.
2 Use the vxdiskadd command to add the selected disk as a new disk (if it is not already added).
3 To create a mirror for the root volume only, use the vxrootmir command:
# /etc/vx/bin/vxrootmir alternate_disk
where alternate_disk is the disk name assigned to the other disk. vxrootmir invokes vxbootsetup (which invokes installboot), so that the disk is partitioned and made bootable. (The process is similar to using vxmirror and vxdiskadm.)
Other volumes on the boot disk can be mirrored separately using vxassist. For example, if you have a /home file system on a volume homevol, you can mirror it to alternate_disk using the command:
# vxassist mirror homevol alternate_disk

If you do not have space for a copy of some of these file systems on your alternate boot disk, you can mirror them to other disks. You can also span or stripe these other volumes across other disks attached to your system. To mirror all of the concatenated volumes on the primary boot disk to your alternate boot disk, use the command:
# /etc/vx/bin/vxmirror boot_disk alternate_disk
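After mirroring, you can verify the result with vxprint. The plex names shown are assigned by VxVM and may differ on your system:
# vxprint -htg rootdg rootvol
The output should show two plexes for rootvol (for example, rootvol-01 and rootvol-02), both in the ENABLED/ACTIVE state.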


Which Root Disk Is Booting?
How do you determine the physical root disk from which the system is booting? On a Sun Ultra system that is set up with an encapsulated root disk and mirrored to an alternate device, you sometimes need to determine exactly which half of the mirror the machine is booting from. This can be done in multiuser mode by executing the following command.
Note: This example is from a Sun E3500 with two internal Sun fiber disks as the encapsulated root drives.
# prtconf -vp | grep bootpath
bootpath: '/sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037590098,0:a'
#

During root disk encapsulation, the system's nvramrc is modified with device aliases that expand to the fully qualified device path of each encapsulated bootable root disk. These aliases are only usable once the user has set the eeprom variable use-nvramrc? to true. This can be done as root by executing either of the following commands from a Solaris prompt:
# /usr/sbin/eeprom use-nvramrc?=true

or
# /etc/vx/bin/vxeeprom enable

However, the aliases are not valid until the next system reset. Therefore, you should set eeprom variables while at the eeprom ok> prompt, prior to system initialization. The following is an example of an eeprom session:

ok> printenv use-nvramrc?
use-nvramrc? = false
ok> setenv use-nvramrc? true
ok> setenv auto-boot? false    # keep system from booting to Solaris when I reset
ok> reset                      # reset eeprom only
ok> devalias                   # show Solaris aliases plus the Volume Manager
                               # generated devaliases from nvramrc
<solaris info deleted>
vx-disk01    /sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w21000020374fe71f,0:a
vx-rootdisk  /sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037590098,0:a
ok> setenv auto-boot? true
ok> reset                      # system will now continue booting to the
                               # default boot-device

17-26

VERITAS Foundation Suite 3.5 for Solaris


Copyright 2002 VERITAS Software Corporation. All rights reserved.

Once booted, the new devaliases can be seen in the prtconf output.
# prtconf -vp | grep vx
vx-disk01: '/sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w21000020374fe71f,0:a'
vx-rootdisk: '/sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037590098,0:a'
# prtconf -vp | grep boot-device
boot-device: '/sbus@3,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037590098,0'

The output shows the boot device equal to the devalias vx-rootdisk. boot-device is another eeprom setting, which is set to the default boot path (or devalias). To boot off the alternate root disk from the ok> prompt, run:
ok> boot vx-disk01

This command loads Solaris from the alternate root, subsequently changing the value of the boot path. To display the device access name (cXtYdZ) for this path, either:
Run format at the root prompt and search for the path in the label output, or
List the /devices tree and search for the path using the grep utility.
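For example, to search the device links for the boot path shown above (a sketch reusing the WWN from the example):
# ls -l /dev/dsk | grep 'ssd@w2100002037590098'
Each /dev/dsk entry is a symbolic link into the /devices tree, so the matching entry reveals the cXtYdZsN name for the physical path.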


Unencapsulating a Root Disk


To unencapsulate a root disk, use vxunroot. Requirements:
Remove all but one plex of rootvol, swapvol, usr, var, opt, and home.
You must have one disk in addition to the root disk in rootdg.

Use vxunroot when you need to:
Boot from physical system partitions.
Change the size or location of the private region on the root disk.
Upgrade both Solaris and VxVM.

Do not use vxunroot if you are only upgrading VxVM packages, including the VEA package.

Unencapsulating a Root Disk


The vxunroot Command
To convert the root, swap, usr, var, opt, and home file systems back to being accessible directly through disk partitions instead of through volume devices, you use the vxunroot utility. Other changes that were made to ensure the booting of the system from the root volume are also removed, so that the system boots with no dependency on VxVM.
For vxunroot to work properly, the following conditions must be met:
All but one plex of rootvol, swapvol, usr, var, opt, and home must be removed (using vxedit or vxplex).
One disk in addition to the root disk must exist in rootdg.
If these conditions are not met, the vxunroot operation fails, and volumes are not converted back to disk partitions.

When to Use vxunroot
Use vxunroot when you need to:
Boot from physical system partitions.
Change the size or location of the private region on the root disk.
Upgrade both Solaris and VxVM.
You do not need to use vxunroot if you are only upgrading VxVM packages, including the VEA package.


The vxunroot Command


1. Ensure that the root volumes only have one plex each:
   # vxprint -ht rootvol swapvol usr var
2. If root volumes have more than one plex each, remove the unnecessary plexes:
   # vxplex -o rm dis plex_name
3. Run the vxunroot utility:
   # /etc/vx/bin/vxunroot

To convert a root volume back to partitions:
1 Ensure that the rootvol, swapvol, usr, and var volumes have only one associated plex each. The plex must be contiguous, nonstriped, nonspanned, and nonsparse. For information about the plexes, use the following command:
# vxprint -ht rootvol swapvol usr var
2 If any of these volumes have more than one associated plex, remove the unnecessary plexes using the command:
# vxplex -o rm dis plex_name
3 Run the vxunroot program using the following command:
# /etc/vx/bin/vxunroot
This command changes the volume entries in /etc/vfstab to the underlying disk partitions for the rootvol, swapvol, usr, and var volumes. The command also modifies /etc/system and prompts for a reboot so that disk partitions are mounted instead of volumes for the root, swap, usr, and var volumes.
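A hypothetical session might look like the following; the plex name rootvol-02 is illustrative:
# vxprint -ht rootvol
(Output shows plexes rootvol-01 and rootvol-02.)
# vxplex -o rm dis rootvol-02
# /etc/vx/bin/vxunroot
Repeat the vxplex step for any extra plexes of swapvol, usr, var, opt, and home before running vxunroot.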


Notes on Upgrading VxVM


General notes:
Follow all release notes and documentation.
A new license key is not required to upgrade VxVM only. However, you must install the new licensing package for VxVM 3.5.x.
Your existing VxVM configuration is retained.

Special situations:
Remove the SUNW packages before adding the VRTS packages (to upgrade from SUNW VM 2.x to VxVM 3.x).
Importing a pre-3.x VxVM disk group does not automatically upgrade the disk group version.

DMP notes:
The DMP driver must always be present on the system (for VxVM 3.1.1 and later).
DMP can coexist with the Alternate Pathing (AP) driver from Sun (for VxVM 3.1.1 and later).

Upgrading to a New VxVM Version


General Notes on Upgrades
When performing an upgrade of the VxVM software, follow these guidelines:
Determine what you are upgrading: Before you upgrade, determine whether you need to upgrade VxVM only, both VxVM and Solaris, or Solaris only.
Follow documentation: When upgrading, always follow the Solaris and VxVM release notes and other documentation to determine proper installation procedures and required patches.
Install appropriate patches: You should install appropriate patches before adding new VxVM packages. For the latest patch information, visit the VERITAS Technical Support Web site.
A license is not required to upgrade VxVM only: If you are already running an earlier release of VxVM, you do not need a new license key to upgrade to VxVM release 3.5.x. However, you must install the new licensing package, VRTSvlic, which uses your existing licensing information. VRTSvlic recognizes keys created in the previous format, and the new utilities in the VRTSvlic package report on, test, and install keys of both formats.
Your existing VxVM configuration is retained: The upgrade procedures allow you to retain your existing VxVM configuration. After upgrading, you can resume using VxVM without running the vxinstall program.
Upgrading VxVM does not upgrade existing disk group versions: Importing a pre-3.x VxVM disk group does not automatically upgrade the disk group version to the VxVM 3.x level. You may need to manually upgrade each of your disk groups after a VxVM upgrade.
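For example, to check and then raise a disk group's version after the software upgrade (the disk group name datadg is hypothetical):
# vxdg list datadg | grep version
# vxdg upgrade datadg
The vxdg upgrade command raises the disk group to the highest version supported by the installed VxVM release.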

Other Notes on Upgrades
The vxdmp driver: Starting with VxVM release 3.1.1, the vxdmp driver must always be present on the system for VxVM to function. Upgrading to this release of VxVM enables vxdmp, even if it was disabled prior to the upgrade.
Sun Alternate Pathing: Starting with VxVM release 3.1.1, dynamic multipathing (DMP) can coexist with the Alternate Pathing (AP) driver from Sun. This feature requires the latest AP driver from Sun. Upgrade your version of AP and install the appropriate patches before upgrading VxVM.
SUNW packages: When upgrading from SUNW Volume Manager 2.x to VxVM 3.x, remove the SUNW packages before adding the new VRTS packages.


Scripts Used in Upgrades


The upgrade_start and upgrade_finish scripts preserve your VxVM configuration.

upgrade_start:
Checks the system
Converts volumes to partitions
Preserves files
Updates system files
Saves upgrade information in VXVM3.5-UPGRADE

upgrade_finish:
Corrects mistakes due to abnormal termination of upgrade_start
Checks licenses
Converts partitions to volumes
Reloads drivers
Restores system and configuration files
Verifies the VxVM installation
To check for potential problems before any upgrade, run:
# upgrade_start -check

Scripts Used in VxVM Upgrades
To upgrade to a new version of VxVM, you use the upgrade_start and upgrade_finish scripts, which are available in the scripts directory on the VERITAS CD-ROM. These scripts preserve your Volume Manager configuration information while you upgrade the system. Ensure that you use the upgrade_start and upgrade_finish scripts included with VxVM 3.5 (not versions of the scripts provided with earlier versions of VxVM) when upgrading from an earlier release.
Before any upgrade, you should run the upgrade_start -check command to find any problems that could prevent a successful upgrade:
# upgrade_start -check
This script enables you to determine if any changes are needed to your configuration before you perform an upgrade. The script reports errors, if any are found. Otherwise, it reports success, and you can proceed with running the upgrade_start script.

What Does the upgrade_start Script Do?
The upgrade_start script prepares the previous version of VxVM for its removal:
Checks your system for problems that may prevent a successful upgrade
Checks to determine if you have previously run the upgrade scripts
Verifies that either VRTSvxvm or SUNWvxvm is installed
Preserves files that need to be restored after a Solaris or a VxVM upgrade

Updates the /etc/system and /etc/vfstab files
Saves information in VXVM3.5-UPGRADE
Converts key file systems from volumes to physical disk partitions and checks your running Solaris version
Touches /VXVM3.5-UPGRADE/.start_runed to prevent Volume Manager from starting after reboot

What Does the upgrade_finish Script Do?
The upgrade_finish script:
Corrects any mistakes made due to an abnormal termination of upgrade_start
Checks for appropriate licenses
Converts key file systems from physical disk partitions back to volumes
Reloads the vxdmp, vxio, and vxspec drivers
Restores saved configuration files and VxVM state files
Restores the /etc/system and /etc/vfstab files
Rebuilds the volboot file
Starts VxVM daemons
Verifies a successful installation of VxVM


Upgrading VxVM Only with pkgadd


1. Bring the system to single-user mode:
   # init S
2. Stop the vxconfigd and vxiod daemons:
   # vxdctl stop
   # vxiod -f set 0
3. Remove the VMSA software (optional):
   # pkgrm VRTSvmsa
4. Add the new VxVM packages:
   # pkgadd -a /CD_path/scripts VRTSobadmin -d /CD_path/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSob VRTSobgui VRTSvmpro VRTSfspro
5. Perform a reconfiguration reboot:
   # reboot -- -r

Upgrading Volume Manager Only
If you are already running a version of Solaris that is supported with the new version of VxVM, then you can upgrade Volume Manager only. You do not need to run vxunroot with this method. You can use two methods to upgrade VxVM only on an encapsulated root disk:
The pkgadd method: Use the pkgadd command to install the new version of VxVM on top of your existing software. The advantage of this method is that only one reboot is required.
The upgrade scripts method: Use the upgrade_start and upgrade_finish scripts to install the VxVM software. The advantage of this method is that VxVM configuration data is backed up and the root disk is unencapsulated during the upgrade procedure. However, multiple reboots are required.

Upgrading VxVM Only Using pkgadd
To upgrade Volume Manager only by using the pkgadd command:
1 Bring the system to single-user mode:
# init S
2 Stop the vxconfigd and vxiod daemons:
# vxdctl stop
# vxiod -f set 0
3 Remove the VMSA software. This step is optional. You should not remove the VMSA package if you still have clients running an old version of VxVM. However, remember that VMSA does not run with VxVM 3.5 and later versions of vxconfigd.
# pkgrm VRTSvmsa
4 Add the new VxVM packages. You must add the new licensing package first on the command line.
# pkgadd -a /CD_path/scripts VRTSobadmin -d /CD_path/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSob VRTSobgui VRTSvmpro VRTSfspro
5 Perform a reconfiguration reboot:
# reboot -- -r
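After the reboot, you can confirm the installed package version. The output is illustrative:
# pkginfo -l VRTSvxvm | grep VERSION
   VERSION:  3.5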


Upgrading VxVM Only with the Upgrade Scripts


1. # cd /CD_path/scripts
2. # ./upgrade_start
3. # reboot -- -s
4. # mount -F ufs /dev/dsk/c0t0d0s5 /opt
5. # pkgrm VRTSvmsa VRTSvmdoc VRTSvmman VRTSvmdev VRTSvxvm
6. # reboot (to multiuser)
7. # pkgadd -a /CD_path/scripts VRTSobadmin -d /CD_path/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSob VRTSobgui VRTSvmpro VRTSfspro
8. # /CD_path/scripts/upgrade_finish

Upgrading VxVM Only Using the Upgrade Scripts
To upgrade Volume Manager only by using the upgrade scripts:
1 Mount the VERITAS CD-ROM and change to the scripts directory:
# cd /CD_path/scripts
2 Run the upgrade_start script:
# ./upgrade_start
3 Reboot the system to single-user mode:
# reboot -- -s
4 When the system comes up, mount the /opt partition (if it is not part of the root file system):
# mount -F ufs /dev/dsk/c0t0d0s5 /opt
5 Remove the VxVM package and other related VxVM packages:
# pkgrm VRTSvmsa VRTSvmdoc VRTSvmman VRTSvmdev VRTSvxvm
Note: Do not remove the VRTSvmsa package if you still have clients running old versions of VxVM.
6 Reboot the system to multiuser mode:
# reboot
7 Verify that /opt is mounted, and then install the new VxVM packages:
# pkgadd -a /CD_path/scripts VRTSobadmin -d /CD_path/pkgs VRTSvlic VRTSvxvm VRTSob VRTSobgui VRTSvmpro VRTSfspro VRTSvmman VRTSvmdoc


8 Change to the scripts directory, and run the upgrade_finish script:
# /CD_path/scripts/upgrade_finish

Upgrading VxVM from SUNWvxvm
When upgrading from SUNWvxvm to VERITAS Volume Manager, you must remove all SUNWvxvm and SUNWvxva packages and patches before adding the new VxVM packages. For more information on this procedure, contact VERITAS Technical Support.


Upgrading Solaris Only


To prepare:
1 Detach any boot disk mirrors.
2 Check alignment of rootdisk volumes.
3 Ensure that /opt is not a symbolic link.

To upgrade:
1 Bring the system to single-user mode.
2 Load the VERITAS CD-ROM.
3 Check for upgrade issues.
4 Run upgrade_start.
5 Reboot to single-user mode.
6 Upgrade your operating system.
7 Reboot to single-user mode.
8 Load the VERITAS CD-ROM.
9 Run upgrade_finish.
10 Reboot to multiuser mode.

Command sequence:
# init S
# /etc/init.d/volmgt start
# upgrade_start -check
# upgrade_start
# reboot -- -s
(Upgrade your OS.)
# reboot -- -s
# /etc/init.d/volmgt start
# /CD_path/upgrade_finish
# /etc/shutdown

Upgrading Solaris Only
To upgrade Solaris only:

Prepare for the Upgrade
1 If the boot disk is mirrored, detach the mirror.
2 Check the alignment of volumes on the rootdisk. If any of the file systems /, /usr, /var, or /opt are defined on volumes, ensure that at least one plex for each of those volumes is formed from a single subdisk that begins on a cylinder boundary. This is necessary because part of the upgrade process involves temporarily converting file systems on volumes back to using direct disk partitions, and Solaris requires that disk partitions start on cylinder boundaries. The upgrade scripts automatically convert file systems on volumes back to using regular disk partitions, as necessary. If the upgrade scripts detect any problems (such as lack of cylinder alignment), an explanation of the problem is displayed, and the upgrade does not proceed.
3 If you plan to install any documentation or manual pages, ensure that the /opt directory exists, is writable, and is not a symbolic link. The volumes that are not converted by the upgrade_start script are not available during the upgrade process. If you have a symbolic link from /opt to one of the unconverted volumes, the symbolic link does not function during the upgrade, and items in /opt are not installed.


Perform the Upgrade
1 Bring the system down to single-user mode:
# init S
2 Load and mount the VERITAS CD-ROM by starting the volmgt daemon:
# /etc/init.d/volmgt start
3 Locate the scripts directory and run the command to check for any problems that can prevent a successful upgrade:
# /CD_path/scripts/upgrade_start -check
If no problems are discovered, you can proceed with the upgrade.
4 Run the upgrade_start script:
# /CD_path/scripts/upgrade_start
5 Reboot to single-user mode:
# reboot -- -s
6 Upgrade your operating system. Refer to your Solaris installation documentation to install the operating system and any required patches.
7 Reboot to single-user mode:
# reboot -- -s
8 Load and mount the VERITAS CD-ROM by starting the volmgt daemon, and locate the scripts directory:
# /etc/init.d/volmgt start
9 Complete the upgrade by running the upgrade_finish script:
# /CD_path/scripts/upgrade_finish
10 Reboot to multiuser mode by using a command such as /etc/shutdown.


Upgrading VxVM and Solaris


To prepare:
1 Install license keys if needed.
2 Detach any boot disk mirrors.
3 Check alignment of rootdisk volumes.
4 Ensure that /opt is not a symbolic link.

To remove the old version:
1 Bring the system to single-user mode.
2 Load the VERITAS CD-ROM.
3 Check for upgrade issues.
4 Run upgrade_start.
5 Reboot to single-user mode.
6 Remove VxVM packages.

To install the new version:
1 Reboot to single-user mode.
2 Upgrade your operating system.
3 Reboot to single-user mode.
4 Load the VERITAS CD-ROM.
5 Add new licensing and VxVM packages.
6 Run upgrade_finish.
7 Perform a reconfiguration reboot.
8 Add additional packages.

Upgrading VxVM and Solaris
To upgrade Volume Manager and Solaris, follow these steps:

Prepare for the Upgrade
1 If you are upgrading VxVM from a version earlier than 3.0.2, or if you have a SUNW version of Volume Manager, you must obtain and install a VxVM license key.
2 If the boot disk is mirrored, detach the mirror.
3 Check the alignment of volumes on the rootdisk.
4 If you plan to install any documentation or manual pages, ensure that the /opt directory exists, is writable, and is not a symbolic link. The volumes that are not converted by the upgrade_start script are not available during the upgrade process. If you have a symbolic link from /opt to one of the unconverted volumes, the symbolic link does not function during the upgrade, and items in /opt are not installed.

Remove the Old Packages
1 Bring the system down to single-user mode:
# init S
2 Load and mount the VERITAS CD-ROM by starting the volmgt daemon:
# /etc/init.d/volmgt start
3 Locate the scripts directory and run the command to check for any problems that can prevent a successful upgrade:
# /CD_path/scripts/upgrade_start -check

If no problems are discovered, you can proceed with the upgrade.
4 Run the upgrade_start script:
# /CD_path/scripts/upgrade_start
5 Reboot to single-user mode:
# reboot -- -s
6 Remove the old VxVM package and other related VxVM packages:
# pkgrm VRTSvmsa VRTSvmdoc VRTSvmman VRTSvmdev VRTSvxvm
Note: Do not remove the VRTSvmsa package if you still have clients running old versions of VxVM.

Upgrade the Operating System and VxVM
1 Reboot to single-user mode:
# reboot -- -s
2 Upgrade your operating system. Refer to your Solaris installation documentation to install the operating system and any required patches.
3 Reboot to single-user mode:
# reboot -- -s
4 Load and mount the VERITAS CD-ROM by starting the volmgt daemon:
# /etc/init.d/volmgt start
5 Locate the directory that contains the VxVM packages and add the new VxVM licensing and software packages:
# pkgadd -d /CD_path/pkgs VRTSvlic VRTSvxvm
6 Complete the upgrade by running the upgrade_finish script:
# /CD_path/scripts/upgrade_finish
7 Perform a reconfiguration reboot:
# reboot -- -r
8 Install any additional packages by using the pkgadd command:
# pkgadd -a /CD_path/scripts VRTSobadmin -d /CD_path/pkgs VRTSvmman VRTSob VRTSobgui VRTSfspro VRTSvmpro


After Upgrading
After completing the upgrade and rebooting:
1. Confirm that key VxVM processes (vxconfigd, vxnotify, and vxrelocd) are running by using the command:
   # ps -ef | grep vx
2. Verify the existence of the boot disk's volumes:
   # vxprint -ht

Note: To perform an upgrade without using the upgrade scripts, you can use vxunroot to convert volumes back to partitions. For more information, see the VERITAS Volume Manager Installation Guide and visit http://support.veritas.com.

After Upgrading
After completing the upgrade and rebooting, confirm the following:
1 Confirm that key VxVM processes (vxconfigd, vxnotify, and vxrelocd) are running by using the command:
# ps -ef | grep vx
2 Verify the existence of the boot disk's volumes by using vxprint:
# vxprint -ht
At this point, your preupgrade configuration is in effect, and any file systems previously defined on volumes are defined and mounted.
Note: If you prefer to perform an upgrade without using the upgrade_start and upgrade_finish scripts, you can use the vxunroot command to convert volumes back to partitions. See the VERITAS Volume Manager Installation Guide and visit http://support.veritas.com for more information.


Upgrading VxFS Only


1. Unmount any mounted VERITAS file systems.
2. Remove old VxFS packages. Specify the optional package first:
   # pkgrm VRTSfsdoc VRTSvxfs
3. Comment out VxFS file systems in /etc/vfstab, and then reboot to flush VxFS kernel hooks.
4. Install new VxFS packages:
   # pkgadd -d /cdrom/CD_name/product_name/pkgs VRTSvlic VRTSvxfs VRTSfsdoc
5. Undo any changes made to /etc/vfstab.
6. Reboot.

Upgrading to a New VxFS Version


Upgrading the VxFS Version
If you are already running a previous version of VxFS, you can upgrade to the current version. Depending on the version of Solaris and VxFS that you are running, you may need to upgrade:
VxFS only
VxFS and Solaris
Solaris only

Before You Upgrade
You must uninstall any previous version of the VRTSvxfs package before installing a new version. You do not need to remove existing VERITAS file systems, but all of them must remain unmounted throughout the upgrade process.
Before upgrading, ensure that the new version of VxFS is compatible with the Solaris version you are running. If the Solaris version needs upgrading, refer to your Solaris installation guide for instructions on upgrading the operating system. If the VxFS version to which you want to upgrade is compatible with your Solaris version, then you can update VxFS without updating Solaris.
Note: The procedure for upgrading to a new VxFS version assumes that the file system layout is the same. To upgrade to a new file system layout, for example, from a Version 2 layout to a Version 4 layout, you use the vxupgrade command.
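For example, to upgrade a mounted file system's layout with vxupgrade (the mount point /mnt is hypothetical), run it once with no options to report the current layout version, and then step through intermediate versions one at a time:
# vxupgrade /mnt
# vxupgrade -n 3 /mnt
# vxupgrade -n 4 /mnt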


Upgrading VxFS Only
1 Unmount any mounted VERITAS file system. You cannot remove the VRTSvxfs package if any VERITAS file system remains mounted.
2 Remove all VxFS packages using the pkgrm command. Specify the optional packages before the VRTSvxfs package on the command line:
# pkgrm VRTSfsdoc VRTSvxfs
Note: If you are upgrading from VxFS 3.3.3 or earlier, then you may also need to remove the VRTSqio and VRTSqlog packages.
3 If you have VxFS file systems specified in the /etc/vfstab file, comment them out, and then reboot to flush VxFS kernel hooks still in RAM to avoid possible system panics.
4 Install the new version of VxFS by following standard installation procedures.
a Load and mount the VERITAS CD-ROM.
b Add the VxFS packages of the new version using the pkgadd command:
# pkgadd -d /cdrom/CD_name/product_name/pkgs VRTSvlic VRTSvxfs VRTSfsdoc
c Enter new license keys if necessary. You do not need to enter new license keys if you are upgrading from VxFS 3.2.x to 3.5.
5 Undo the changes that you made to the /etc/vfstab file.
6 Reboot the system to mount any VxFS file systems.


Upgrading VxFS and Solaris


To upgrade VxFS and Solaris, follow this sequence:
1. Unmount any mounted VERITAS file systems.
2. Remove old VxFS packages.
3. Comment out VxFS file systems in /etc/vfstab, and then reboot to flush VxFS kernel hooks.
4. Upgrade Solaris.
5. Add the new VxFS packages.
6. Undo any changes made to /etc/vfstab.
7. Reboot.

If you need to upgrade Solaris only, you must deinstall and reinstall the VxFS packages.

Upgrading VxFS and Solaris
Note: Always read product release notes and follow the installation documentation when performing any upgrade.
To upgrade VxFS and Solaris, follow this sequence of steps:
1 Unmount all mounted VxFS file systems.
2 Remove the VxFS packages, starting with the optional package:
# pkgrm VRTSfsdoc VRTSvxfs
3 If you have VxFS file systems specified in the /etc/vfstab file, comment them out, and then reboot to flush VxFS kernel hooks still in RAM to avoid possible system panics.
4 Upgrade the operating system to Solaris 2.6, 7, 8, or 9. Refer to your Solaris installation documentation for instructions on how to upgrade Solaris.
5 Mount the VERITAS software CD-ROM. Then, add the VxFS packages:
# pkgadd -d /cdrom/CD_name/product_name/pkgs VRTSvlic VRTSvxfs VRTSfsdoc
6 Undo the changes that you made to the /etc/vfstab file.
7 Reboot the system to mount any VxFS file systems.

Upgrading Solaris Only
If you are upgrading Solaris only and VxFS 3.5 is already installed, you must uninstall and reinstall the VxFS 3.5 packages. Refer to your Solaris installation documentation for instructions on how to upgrade Solaris.


Summary
You should now be able to:
Identify the benefits of disk encapsulation.
Encapsulate the root disk.
View encapsulated disks.
Create an alternate boot disk.
Unencapsulate a root disk.
Upgrade to a new VxVM or Solaris version.
Upgrade to a new VxFS version.

Summary
This lesson described the disk encapsulation process and how to encapsulate the root disk on your system. Methods for creating an alternate boot disk and unencapsulating a root disk were covered.

Next Steps
The next lesson describes how to troubleshoot and recover from boot disk failure.

Additional Resources
VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
VERITAS Volume Manager Installation Guide: This guide provides information on installing and initializing VxVM and the VERITAS Enterprise Administrator graphical user interface.


Lab 17
Lab 17: Encapsulation and Root Disk Mirroring
In this lab, you create a root mirror, disable the root disk, and boot up from the mirror. Then, you boot up again from the root disk, break the mirror, and remove the boot disk from rootdg. Finally, you reencapsulate the root disk and re-create the mirror.
Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 17: Encapsulation and Root Disk Mirroring


Goal
In this lab, you create a root mirror, disable the root disk, and boot up from the mirror. Then, you boot up again from the root disk, break the mirror, and remove the boot disk from rootdg. Finally, you reencapsulate the root disk and re-create the mirror.

To Begin This Lab
To begin the lab, go to Appendix A, Lab Exercises. Lab solutions are contained in Appendix B, Lab Solutions.



18

VxVM, Boot Disk, and rootdg Recovery

Overview
[Course map: This lesson is part of the Recovery and Troubleshooting section of the course, which follows the Introduction (Virtual Objects, Installation, Interfaces), Disk and Volume Administration, and File System Administration sections and covers plex problems, disk problems, boot disk mirroring, and boot disk recovery.]

Introduction
Overview
This lesson describes how VERITAS Volume Manager (VxVM) integrates into the Solaris boot process, the key scripts and files used in the boot process, and tips on troubleshooting the boot process. This lesson also provides procedures for creating an emergency boot disk and recovering from various boot disk failures.

Importance
Being able to recover from boot disk and other failures is essential to protect your system and your data.

Outline of Topics
Solaris Boot Process
Troubleshooting the Boot Process
Root Disk Encapsulation
Creating an Emergency Boot Disk
Recovering rootdg


Objectives
After completing this lesson, you will be able to:
- Describe the phases of the Solaris boot process.
- Troubleshoot the boot process.
- Describe root disk encapsulation scenarios.
- Create an emergency boot disk.
- Recover rootdg for different boot disk failure scenarios.



Solaris Boot Process


Solaris Boot Process Overview
In order to troubleshoot and resolve boot disk problems, you must have a conceptual understanding of the Solaris boot process and the associated scripts and files that are involved in booting the system and starting VxVM. The Solaris boot process can be divided into four main phases:
- Phase 1: Boot PROM Phase
- Phase 2: Boot Program Phase
- Phase 3: Kernel Initialization Phase
- Phase 4: The /sbin/init Phase
In the next sections, each of these four phases is described in detail.



Phase 1: Boot PROM Phase
When you boot a Solaris system, the first phase of the boot process is the boot PROM phase. In this phase:
1 The programmable read-only memory (PROM) chip runs self-test diagnostics to identify system information, such as hardware and memory.
2 When you type boot at the ok prompt, the system reads the boot disk label at sector 0.
3 The system then reads the boot block at sectors 1 through 15.
4 The PROM loads the bootblk program from the boot block. The bootblk program is a UFS file system reader that is placed on the disk by the installboot program.



Phase 2: Boot Program Phase
The second phase in the Solaris boot process, the boot program phase, begins after the PROM successfully loads the bootblk program from the boot block.
1 The bootblk program loads the secondary boot program, ufsboot, by invoking the command:
   /platform/`uname -m`/ufsboot
2 The ufsboot program loads the kernel.



Phase 3: Kernel Initialization Phase
The next phase in the Solaris boot process is the kernel initialization phase. After the ufsboot program loads the kernel:
1 The kernel begins to load the kernel modules.
2 The kernel reads the /etc/system file, including the following entries:
   - A rootdev entry, which specifies an alternate root device. The default rootdev value is the physical path name of the device on which the boot program (bootblk) is located.
   - forceload entries, which force modules to be loaded at boot time.
3 The kernel initializes itself and begins the /sbin/init process. After the kernel loads the modules needed to read the root partition, the ufsboot program is unmapped from memory. The kernel continues initializing the system using its own resources.



Phase 4: The /sbin/init Phase
In the final phase of the Solaris boot process, the /sbin/init process invokes the run control scripts that are used to start VxVM.

Single-user startup scripts, located in /etc/rcS.d, include:
- S25vxvm-sysboot
- S30rootusr (standard Solaris script)
- S35vxvm-startup1
- S40standardmounts (standard Solaris script)
- S50devfsadm (standard Solaris script)
- S70buildmnttab (standard Solaris script)
- S85vxvm-startup2
- S86vxvm-reconfig

Multiuser startup scripts, located in /etc/rc2.d, include:
- S94vxnm-host_infod
- S94vxnm-vxnetd
- S95vxvm-recover

Note: The scripts not marked as standard Solaris scripts are added by VxVM. The function of each script is described in the next section.



VxVM Startup: Single-User Scripts

/etc/rcS.d/S25vxvm-sysboot
The S25vxvm-sysboot script:
- Checks to determine whether rootdev and /usr are volumes. If rootdev and /usr are volumes, then vxconfigd must successfully start for / and /usr to be accessible.
- Starts the VxVM restore daemon by invoking the command:
   vxdmpadm start restore [options]
  Note: By default, the restore daemon checks the health of disabled device node paths (policy=check_disabled) at a polling interval of 300 seconds (interval=300). By using options to the vxdmpadm start restore command, you can change the polling interval (interval=seconds) or change the policy to check all paths (policy=check_all).
- Starts vxconfigd in boot mode by invoking the command:
   vxconfigd -m boot
  The script includes example option strings to enable different aspects of vxconfigd logging.
- Creates disk access records for all devices.
- Scans /etc/vx/volboot to determine disk ownership tag information and to determine whether any simple disks exist in rootdg.
- Locates and imports the rootdg disk group.
- Starts the rootvol and usr volumes.
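For example, to change the restore daemon so that it checks all paths once a minute, you could restart it with options similar to the following sketch (the interval value is illustrative):

   # vxdmpadm stop restore
   # vxdmpadm start restore interval=60 policy=check_all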

/etc/rcS.d/S30rootusr
The S30rootusr script:
- Mounts /usr as read-only
- Checks for any problems with /usr
The /etc/vfstab file is used to mount the /usr file system. If /etc/vfstab includes /dev/vx/dsk/usr and /dev/vx/rdsk/usr as the devices for the /usr file system, then VxVM must be running for the mount to succeed. If /usr fails to mount, then utilities, such as fsck and ls, are not available for use in other scripts.



/etc/rcS.d/S35vxvm-startup1
The S35vxvm-startup1 script:
- Starts special volumes, such as swap, /var, /var/adm, and /usr/kvm, by invoking the command:
   vxrecover -n -s -g rootdg $startvols
  Note: These volumes must be in rootdg in order to be started. Volumes are started without recovery, mirror resynchronization, or parity resynchronization.
- Sets up dump devices
  Note: If the first swap device is a volume, then it is used as the dump device. Dump devices are used to store core files that are created when the system panics. Core file creation and recovery are performed completely outside of VxVM. The swap device must be the first swap device listed in /etc/vfstab and must be in rootdg. The dump device requires a physical partition underneath the swap volume. VxVM does not have hooks for dumping; therefore, the swap device must be created to enable the dump device to be created. The first swap device is registered as the dump device. The dump device is registered by adding and removing the swap device.


/etc/rcS.d/S40standardmounts
The S40standardmounts script:
- Mounts /proc
- Adds the physical swap device. Volume Manager handles all volumes in the /etc/vfstab file that have a file system type of swap as swap volumes.
- Checks and remounts the root file system as read-write
- Checks and remounts /usr as read-write

/etc/rcS.d/S50devfsadm
The S50devfsadm script configures the /dev and /devices trees.

/etc/rcS.d/S70buildmnttab
The S70buildmnttab script mounts file systems that are required to be available in single-user mode: /var, /var/adm, and /var/run. Note: A swap device is mounted as /var/run.



/etc/rcS.d/S85vxvm-startup2
The S85vxvm-startup2 script:
- Starts I/O daemons by invoking the command:
   vxiod set 10
- Changes vxconfigd from boot mode to enabled mode:
   vxdctl enable
  The dev_info_tree is scanned for new entries.
- Imports all disk groups marked for autoimport
- Initializes DMP by invoking the command:
   vxdctl initdmp
- Reattaches drives that were inaccessible when vxconfigd first started:
   vxreattach
- Starts (but does not recover) all volumes:
   vxrecover -n -s

/etc/rcS.d/S86vxvm-reconfig
The S86vxvm-reconfig script is used to perform operations defined by vxinstall and vxunroot and is used as part of upgrade procedures. This script:
- Uses flag files to determine actions
  The /etc/vx/reconfig.d/state.d directory contains entries of prior actions.

  The encapsulation process requires a reboot and creates flag files for further actions. If encapsulation is incomplete, you must remove the flag files manually. The root_done flag file indicates that the root disk is already encapsulated, in which case the script can exit without any action.
- Adds new disks
  Disks selected for initialization by vxinstall are initialized.
- Performs encapsulation
  A reboot is required if the root file system or a mounted file system is encapsulated.



VxVM Startup: Multiuser Scripts

/etc/rc2.d/S94vxnm-host_infod
The S94vxnm-host_infod script is used with VERITAS Volume Replicator (VVR) to spawn off the remote procedure call (RPC) server. Note: This process requires a valid VVR license in /etc/vx/licenses/lic.

/etc/rc2.d/S94vxnm-vxnetd
The S94vxnm-vxnetd script is used with VERITAS Volume Replicator to start the vxnetd process, which is required for replication on a secondary replicated volume group. Note: This process requires a valid VVR license in /etc/vx/licenses/lic.

/etc/rc2.d/S95vxvm-recover
The S95vxvm-recover script:
- Starts recovery and resynchronization on all volumes
- Starts the hot-relocation daemons
To enable hot-relocation notification for an account other than the local root account, you must modify this script, as shown in the sketch after this paragraph. Disabling the hot-relocation daemon vxrelocd results in no hot relocation and no notification.
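As a sketch of such a modification (the account name is illustrative, and the exact line in your copy of the script may differ), a daemon startup line similar to:

   vxrelocd root &

could be changed to send notification mail to an additional account:

   vxrelocd root admin &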



Troubleshooting the Boot Process


Files Used in the Boot Process
During the boot process, the VxVM startup scripts use information contained in specific files. If any of these files are missing, misplaced, or misconfigured, then problems can occur. Troubleshooting the boot process depends on your knowledge of the function and use of each of these files:
- /etc/system: Contains VxVM entries indicating whether the root disk has been encapsulated and whether the root file system is configured as a volume
- /etc/vfstab: Maps file system mount points to actual device names
- /etc/vx/volboot: Contains disk ownership tag information that matches rootdg with a system
- /etc/vx/licenses/lic: Contains the files that represent installed VERITAS license keys
- /var/vxvm/tempdb: Stores temporary information about currently imported disk groups
- /etc/vx/reconfig.d/state.d/install-db: Indicates that the VxVM software packages have been added, but that VxVM has not been initialized with vxinstall
- /VXVM#.#.#-UPGRADE/.start_runed: Indicates that a VxVM upgrade has been started but not completed



Troubleshooting: The Boot Device Cannot Be Opened
In the boot PROM phase, if the boot device cannot be opened, you receive a message similar to the following:

   SCSI device 0,0 is not responding
   Can't open boot device

This message indicates that the system PROM is unable to read the boot program from the boot disk. Common causes for this problem include:
- The boot disk is not powered on.
- The boot disk has failed.
- The SCSI bus is not terminated.
- There is a controller failure.
- A disk is failing and locking the bus, preventing any disks from identifying themselves to the controller, and making the controller assume that there are no disks attached.
To troubleshoot the problem:
- Check carefully that everything on the SCSI bus is properly connected, and use the probe-scsi-all command at the ok prompt.
- If disks are powered off or the SCSI bus is unterminated, correct the problem and reboot the system.
- If one of the disks has failed, remove the disk from the SCSI bus and replace it.


If no hardware problems are found, the error is probably due to data errors on the boot disk. Attempt to boot from an alternate boot disk that contains a mirror of the root volume. If you are unable to boot from an alternate boot disk, then you may still have some type of hardware problem. Similarly, if switching the failed boot disk with an alternate boot disk does not allow the system to boot, this condition also indicates hardware problems.



Troubleshooting: Invalid UNIX Partition
In the boot program phase of the boot process, the bootblk program is loaded and attempts to access the boot disk through UNIX partition information. If the partition information is damaged, the boot program fails, and you receive an error message similar to the following:

   File just loaded does not appear to be executable

If this message is displayed during a boot attempt, you should boot the system from an alternate boot disk.

If the system detects an invalid disk partition while booting, most disk drivers display errors on the console about invalid UNIX partition information on a failing disk. For example:

   WARNING: unable to read label
   WARNING: corrupt label_sdo

These messages indicate that the failure is due to an invalid disk partition. To resolve the problem, you can attempt to reattach the disk. However, if the reattach process fails, then you should replace the disk.

If the system cannot read the kernel file, error messages are similar to:

   boot: cannot find misc/sparcv9/krtld
   boot: error loading interpreter (misc/sparcv9/krtld)
   Elf64 read error
   boot failed
   Enter filename [/platform/sun4u/kernel/sparcv9/unix]

These messages may vary, depending on which directory or file is unreadable.



Troubleshooting: VxVM Startup Scripts Exit Without Initialization
In the /sbin/init phase of the boot process, the VxVM startup scripts exit without initializing VxVM if either of the following flag files is present:
- /etc/vx/reconfig.d/state.d/install-db
  The presence of this file indicates that the VxVM software packages have been added, but VxVM has not been initialized with vxinstall. This file is installed when you add the VxVM software packages and is removed by the S86vxvm-reconfig script after the configuration specified by vxinstall has been performed. The existence of this file communicates to the VxVM device drivers that VxVM has not yet been initialized, so vxconfigd is not started. If this file is present on the system, then the VxVM startup scripts exit without performing any initialization.
- /VXVM#.#.#-UPGRADE/.start_runed
  The presence of this file indicates that a VxVM upgrade has been started but not completed. This file is created by the upgrade_start script specific to a particular VxVM version (for example, /VXVM3.5-UPGRADE/.start_runed) and is removed by the upgrade_finish script when an upgrade is completed. If a file with this path is present, then the VxVM startup scripts exit without performing any initialization, and vxconfigd is not started.



Troubleshooting: Invalid or Missing /etc/system File
The /etc/system file is used in the kernel initialization phase as well as in the /sbin/init phase of the boot process. If this file is missing, or if its entries are missing, then you encounter problems at boot time. The /etc/system file is a standard Solaris system file. VxVM adds entries to this file that are placed between the tags:

   *vxvm START (do not remove)
   . . .
   *vxvm END (do not remove)

Saving the /etc/system File
It is strongly recommended that you maintain a backup copy of the /etc/system file so that you can recover your root volume if the system becomes unbootable. If the system cannot read the /etc/system file, then you can use the boot -a command to specify a different copy of the /etc/system file to use on booting.

VxVM Entries in the /etc/system File
Two types of VxVM entries can be contained in the /etc/system file:
- Entries that specify drivers to be loaded
- Entries that specify root encapsulation


Entries That Specify Drivers to Load
VxVM entries in /etc/system that begin with forceload: specify drivers to be loaded by VxVM. For example:

   forceload: drv/pci
   forceload: drv/dad
   forceload: drv/vxdmp
   forceload: drv/vxio
   forceload: drv/vxspec

Directives for drivers not used by your system may also exist, so that if you add particular hardware or software in the future, the driver is already in place to use with VxVM. The unused driver entries do not cause problems for your system; however, a warning message is displayed at boot time, indicating that the driver does not exist. For example:

   WARNING: forceload of drv/atf failed
   WARNING: forceload of drv/pln failed
   VxVM starting in boot mode...

Entries That Specify Root Encapsulation
VxVM entries in the /etc/system file also include directives for VxVM to change the root device from /dev/dsk/c0t0d0 (or whatever the boot device may be) to /dev/vx/dsk/rootvol:

   rootdev:/pseudo/vxio@0:0
   set vxio:vol_rootdev_is_volume=1

During the boot process, the S25vxvm-sysboot script checks the /etc/system file for these entries. If these entries exist, then the root disk has been encapsulated, and the root file system is configured as a volume. If these entries do not exist, then the system boots up on the partition. If you save a copy of the /etc/system file after root encapsulation, you should comment out these two lines in the backup copy of the file, as shown in the sketch below.
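For example, the backup copy of /etc/system would carry the two encapsulation lines commented out with the asterisk comment character that /etc/system uses:

   * rootdev:/pseudo/vxio@0:0
   * set vxio:vol_rootdev_is_volume=1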



Using an Alternate system File
When using an alternate system file, you will probably not be able to boot into multiuser mode and will end up in maintenance mode.
Note: Do not go past maintenance mode while booted on this system file. Boot up on the alternate system file, fix the VxVM problem, and then reboot with the original system file.
The system boots on the partition, not on the volume. When you enter maintenance mode, you will notice that the rootvol volume is not started. If you need to write to files on rootvol, you must start the volume and mount it on a temporary location, for example, /mnt, as in the sketch below. When you are finished, unmount /mnt and reboot the system under the normal system file.
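A minimal sketch of that sequence, assuming the files you need to fix live in the root file system (the mount point /mnt follows the example in the text):

   # vxvol -g rootdg start rootvol
   # mount -F ufs /dev/vx/dsk/rootvol /mnt
   (fix the VxVM problem in the files under /mnt)
   # umount /mnt
   # reboot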


An example boot -a session follows:

   ok boot -a
   Resetting ...
   Rebooting with command: boot -a
   Boot device: /pci@1f,0/pci@1,1/ide@3/disk@0,0  File and args: -a
   Enter filename [kernel/unix]: (Press Return.)
   Enter default directory for modules [/platform/SUNW,Ultra-5_10/kernel
   /platform/sun4u/kernel /kernel /usr/kernel]: (Press Return.)
   SunOS Release 5.6 Version Generic_105181-03 [UNIX(R) System V Release 4.0]
   Copyright 1983-1997, Sun Microsystems, Inc.
   Name of system file [etc/system]: etc/system.preencap
   root filesystem type [ufs]: (Press Return.)
   Enter physical name of root device
   [/pci@1f,0/pci@1,1/ide@3/disk@0,0:a]: (Press Return.)
   VxVM starting in boot mode...
   ...
   Type Ctrl-d to proceed with normal startup,
   (or give root password for system maintenance):
   Entering System Maintenance Mode



Troubleshooting: Unable to Boot from Unusable or Stale Plexes
During the boot process, the system first boots off the physical disk partition containing the root volume, and then uses information contained in /etc/system and /etc/vfstab to check the status of the rootvol and usr volumes:
- If the /etc/system file includes the rootdev and vol_rootdev_is_volume entries, then vxconfigd checks the status of the plex of the rootvol volume that is located on the disk used to boot the system.
- If the /etc/vfstab file includes an entry for /usr that has devices that include the string /dev/vx/dsk, then vxconfigd checks the status of the plex of the usr volume that is located on the disk used to boot the system.
If one of the plexes for the rootvol or usr volumes is unavailable (for example, the plexes are in a STALE or OFFLINE state), then vxconfigd produces an error message and exits with an exit code of 9. The S25vxvm-sysboot script then halts the system.
The plexes on the system disk may also be unavailable if the private region of a disk is corrupted or if a private region partition table entry is deleted. These circumstances prevent Volume Manager from identifying the disk, because the currently running version of the root or /usr file system is not consistent with the version of the system that was running before the reboot.


If the root plex used to boot the system is stale or unusable, you receive a sequence of messages similar to the following:

   vxvm:vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable.
   vxvm:vxconfigd: Error: System boot disk does not have a valid root plex
   Please boot from one of the following disks:
   DISK MEDIA    DEVICE      BOOT COMMAND
   disk01        c1t5d0s2    boot vx-disk01
   vxvm:vxconfigd: Error: System startup failed
   syncing file system ... done
   Program terminated
   ok

These messages provide information about the stale plex and also specify alternate boot disks that contain a usable copy of the root plex, which can be used for booting. To resolve the problem, you must reboot the system from an alternate boot disk or boot the system using an alternate /etc/system file that does not indicate that rootdev is a volume.
After booting the system, investigate the problem further to determine why the root plex was stale or unusable; checking plex states as in the sketch below is one starting point. If the plexes were simply STALE, the plexes are automatically resynchronized with any other mirrors. If there was a problem with the private region of the disk or the partition table entry, you must reattach or replace the disk.



Troubleshooting: Conflicting Host ID in the volboot File
The /etc/vx/volboot file contains disk ownership tag information that matches the rootdg disk group with a system. The volboot file is the control file that VxVM uses to match the rootdg disk group with a system. This file is scanned during the boot process to determine disk ownership tag information. You receive an error if the volboot file host ID does not match the host ID from the rootdg disk group. An example of a volboot file is as follows:

   volboot 3.1 0.2 20
   hostid plstr06
   end
   ######################################################
   (lines of # characters pad the file to 512 bytes)


Caution: Never attempt to manually edit the volboot file. If you edit the file manually, VxVM will not function. The volboot file must be 512 bytes in size. If the file is edited, for example, by using the vi editor, and is no longer 512 bytes, the system will not boot.

As displayed in the example, the volboot file includes the host ID that was on the system when you first ran vxinstall. The host ID in the volboot file is matched against the host ID contained in the disk group header stored on every disk to identify the disks belonging to this host. To modify the volboot file, you use the vxdctl command.

To change the host ID in the volboot file:
   vxdctl hostid newhostid
This command places the new host ID in volboot. The new host ID is then flushed to the private region of the disks.

To re-create the volboot file:
   vxdctl init hostid
If you must re-create this file, use the same host name that the previous volboot file contained.

The volboot File and Simple Disks
The volboot file does not contain an explicit list of the disks that compose the rootdg disk group. However, simple disks, if used, are explicitly named in the volboot file. Ensure that you do not have any simple disks on your system before using the vxdctl hostid or vxdctl init commands; all simple disk information is lost when you run these commands. An example of a volboot file that includes simple disk information is as follows:

   volboot 3.1 0.4
   hostid training
   disk c1t5d0s3 simple privoffset=1
   disk c1t5d0s4 simple privoffset=1
   end
   ######################################################
   (lines of # characters pad the file to 512 bytes)



Troubleshooting: File System Corruption
The /etc/vfstab file contains information that maps file system mount points to actual device names. The S25vxvm-sysboot script scans the /etc/vfstab file to check whether /usr is configured with a device path name that begins with /dev/vx/dsk. If so, then /usr has been reconfigured as a volume, and vxconfigd must start successfully so that /usr can be accessed correctly by the S30rootusr script.
If the root or /usr file systems are corrupt, then the system does not boot. File system corruption can result from mirror inconsistency. To resolve file system corruption, restore the file system from backup. You may also be able to repair the corruption by booting from CD-ROM and then performing an fsck on the partition, as in the sketch below.
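A minimal sketch of that repair path, assuming the root file system lives on partition c0t0d0s0 (the device name is illustrative):

   ok boot cdrom -s
   # fsck /dev/rdsk/c0t0d0s0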



Troubleshooting: Root File System Mounted As Read-Only
In the /sbin/init phase of the boot process, if you end up with a shell prompt, then the root file system is mounted as read-only, and the /usr file system (if it is a separate file system) is not mounted. To resolve this problem, remount the root file system as read-write and mount the /usr file system.

For example, to remount the root file system as a volume:
   # mount -o remount /dev/vx/dsk/rootvol /

To remount the root file system as a physical partition:
   # mount -o remount /dev/dsk/c0t0d0s0 /

To mount the /usr file system, you must specifically mount it as the physical partition. For example:
   # mount /dev/dsk/c0t0d0s6 /usr

Recommendation: Always maintain a record of the physical disk partitions that map to your system-related file systems, so that you can mount them manually for recovery purposes. Also maintain a record of the offset and size of the partitions, so that you can re-create them if necessary.
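One way to capture such a record, assuming the boot disk is c0t0d0 (the device name is illustrative), is to save the VTOC output to a file that you also keep off the system:

   # prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/c0t0d0.vtoc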



Troubleshooting: Corrupted, Missing, or Expired License Keys
The /etc/vx/licenses/lic directory contains the files representing the installed VERITAS license keys. You can encounter license problems if:
- The /etc/vx/licenses/lic files become corrupted.
- An evaluation license was installed and not updated to a full license.
During the boot process, if the system encounters a missing or invalid license key, you receive error messages similar to the following:

   VxVM starting special volumes (swapvol var)...
   vxvm:vol ERROR changing plex var-01
   License has expired or is not available for operation
   vxvm:vol ERROR changing plex swapvol-01
   License has expired or is not available for operation
   The /var file system (/dev/vx/dsk/var) is being checked
   Can't open /dev/vx/rdsk/var

Replacing an Expired License
To replace an expired license, you can enter a new license by using the command:
   # vxlicinst
If the license expiration message occurs during a reboot, replace the licenses and reboot. You can also replace the licenses and then continue the boot process uninterrupted by running the commands:
   # vxiod set 10
   # vxconfigd

If the license expiration occurred while the system was up and running, then you must inform the configuration daemon:
   # vxdctl enable

Protecting VxVM License Files
Save a copy of your license key files, so that if any license files are removed or corrupted, you can restore the files from backup. Also, keep a hard copy of the original license keys, so that the keys can be regenerated if necessary.

Replacing License Files
If license files become corrupted, you can replace the license files from backup. If the files are replaced before the configuration daemon runs (for example, after a reboot), then you must run the commands:
   # vxiod set 10
   # vxconfigd
If the files became corrupted while the system was up and running, then you must inform the configuration daemon:
   # vxdctl enable

Starting I/O Daemons
The vxiod utility starts, stops, or reports on VxVM I/O daemons. An I/O daemon provides a process context for performing VxVM I/O. VxVM I/O daemons are not required for correct operation, but not having I/O daemons can adversely affect system performance. The syntax is:

   vxiod [set count]

When invoked with no arguments, vxiod prints the current number of volume I/O daemons to the standard output. The number of daemons to create for general I/O handling depends on system load and usage. If volume recovery seems to proceed slowly at times, it may be worthwhile to create more daemons. Each I/O daemon starts in the background and creates an asynchronously running process, which detaches itself from the controlling terminal and becomes a volume I/O daemon. The vxiod utility does not wait for these processes to complete. When invoked with the set keyword, vxiod creates the number of daemons specified by count. If more volume I/O daemons exist than are specified by count, the excess processes terminate. If more than the maximum number of daemons is requested, the requested number is silently truncated to that maximum.
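A short usage sketch (the reported daemon count and the exact output wording are illustrative):

   # vxiod
   10 volume I/O daemons running
   # vxiod set 16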



Troubleshooting: Missing or Misnamed /var/vxvm/tempdb
The /var/vxvm/tempdb directory is used to store configuration information about currently imported disk groups. The contents of this directory are re-created after a reboot. If this directory is missing, misnamed, or corrupted (due to disk I/O failure), then vxconfigd does not start, and you receive the following error message:

   vxvm:vxconfigd: ERROR: Disk group rootdg: Cannot recover temp database

To remove and re-create the /var/vxvm/tempdb directory, use the command:
   # vxconfigd -k -x cleartempdir

Caution: Kill any running operational commands (vxvol, vxsd, or vxmend) before using the -x cleartempdir option. You can use this option while running VMSA, or while the VxVM background daemons (vxsparecheck, vxnotify, or vxrelocd) are running.

Note: If the /var/vxvm directory does not exist, this command does not correct the problem. For more information, see the vxconfigd(1m) manual page.


Troubleshooting: Debugging with vxconfigd

The vxconfigd Daemon
The VxVM configuration daemon, vxconfigd, maintains disk configurations and disk groups and is also responsible for initializing VxVM when the system is booted. VxVM does not start anything if vxconfigd cannot be started during boot. Under normal circumstances, this daemon is automatically started by the VxVM startup scripts. However, if there is a problem, it may not be possible to start vxconfigd, or the daemon may be running in disabled mode.

Running vxconfigd in Debug Mode
To identify a problem, you can run vxconfigd in debug mode by using the command:

   # vxconfigd -k -m mode -x debug_level

In the syntax:
- To turn on debugging, use the -x option and specify a debug level (0 through 9). The default level is 0 (no debugging); the highest debug level is 9. Other debugging options include:
   -x log              Log all console output to the /var/vxvm/vxconfigd.log file.
   -x logfile=name     Use the specified log file instead.
   -x syslog           Direct all console output through the syslog() interface.
   -x timestamp        Attach a date and time-of-day timestamp to all messages.
   -x tracefile=name   Log all possible tracing information in the given file.
  See the vxconfigd(1m) manual page for additional debugging options.
- If there is already a vxconfigd process running, you must kill it before attempting a restart by using the -k option.


Use the -m mode option to specify the initial operating mode for vxconfigd:
- -m enable starts vxconfigd fully enabled (the default). This mode uses the volboot file to bootstrap and load the rootdg disk group, and then scans all known disks looking for disk groups to import and imports those disk groups. This mode also sets up the /dev/vx/dsk and /dev/vx/rdsk directories to define all of the accessible Volume Manager devices. If the volboot file cannot be read or if the rootdg disk group cannot be imported, vxconfigd is started in disabled mode.
- -m boot handles boot-time startup of VxVM. This mode starts the rootdg disk group and the root and /usr file system volumes, and is capable of operating before the root file system is remounted read-write. vxdctl enable is invoked later in the boot sequence to trigger vxconfigd to rebuild the /dev/vx/dsk and /dev/vx/rdsk directories.
- -m disable starts vxconfigd in disabled mode. This mode creates a rendezvous file for utilities that perform various diagnostic or initialization operations. You can use this mode with the -r reset option as part of a command sequence to completely reinitialize the VxVM configuration, as in the sketch below. Use the vxdctl enable operation to enable vxconfigd.
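A sketch of such a reinitialization sequence; this destroys the existing in-kernel VxVM state, so treat it as a last resort and review the vxconfigd(1m) manual page before using it:

   # vxconfigd -k -r reset -m disable
   # vxdctl enable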



Root Disk Encapsulation


Root Disk Encapsulation: Purpose
The purpose of encapsulating the root disk is to be able to mirror the root disk. By mirroring the root disk, you provide redundancy that enables you to recover from boot disk failures.

Before Encapsulating the Root Disk
Create copies of the /etc/system and /etc/vfstab files before you encapsulate the root disk. Also, keep a record of the physical locations of the system volumes on the root disk and its mirrors.

Initializing VxVM: Normal Process
Under normal circumstances, initializing VxVM follows this sequence:
1 Add the VxVM packages by using the pkgadd command. The /etc/vx/reconfig.d/state.d/install-db file is installed.
2 Run vxinstall to define the initial contents of rootdg and encapsulate the root disk.
3 The startup script S86vxvm-reconfig performs the actual configuration. This script:
   - Initializes the volboot file and rootdg
   - Adds the defined disks to rootdg
   - Removes the /etc/vx/reconfig.d/state.d/install-db file to activate the normal startup procedures on reboot

Root Disk Encapsulation: Free Space at the End of the Drive

The slide for this topic compares the VTOC of an uninitialized root disk (c0t0d0) that has free space at the end of the drive with the VTOC of the same disk after encapsulation (rootdisk):

   Slice   Before encapsulation   After encapsulation
   0       /                      /
   1       swap                   swap
   2       backup                 backup
   3       (unassigned)           public region
   4       (unassigned)           private region
   5       /usr                   /usr
   6       /var                   /var
   7       /opt                   /opt

After encapsulation, the public region contains the subdisks rootdisk-01 through rootdisk-05, and the private region occupies the formerly free cylinders at the end of the disk.

Encapsulation Example: Root Disk with Space at the End of the Drive
In this example, the uninitialized root drive has free space at the end of the drive. Because the free space is at the end of the disk, the private region is placed in the last cylinders of the disk when the root disk is encapsulated.


The slide for this topic shows the volumes created from the encapsulated root disk. Every former partition is now covered by a volume:
- rootvol: /dev/vx/dsk/rootvol, mounted on /
- swapvol: /dev/vx/dsk/swapvol
- usr: /dev/vx/dsk/usr, mounted on /usr
- var: /dev/vx/dsk/var, mounted on /var
- opt: /dev/vx/dsk/rootdg/opt, mounted on /opt
The rootvol volume also contains the one-sector ghost subdisk rootdisk-B0, described below.

Every partition that was on the disk now has a matching volume, with a subdisk that covers the exact space on the physical disk where the partition resided. When the volumes are created, their device nodes differ from the device nodes of nonroot volumes. The device nodes for the system volumes exist in two locations:
- /dev/vx/[r]dsk/volumename
- /dev/vx/[r]dsk/rootdg/volumename
The partitions are preserved for the system partitions. The list of the actual partitions that are saved can change from one version of VxVM to another.

What Is a Ghost Subdisk?
The rootvol volume has two subdisks. The rootdisk-B0 subdisk is a ghost subdisk that is only one sector in size. A ghost subdisk is created only when there is no free space at the beginning of a disk. The ghost subdisk, rootdisk-B0, exists to protect the VTOC from being overwritten. Because the root partition was using the first sector of the disk, VxVM places a subdisk at the first sector of the private region. This subdisk is the replacement for the first sector of the disk.
Note: Never remove the ghost subdisk while it is in a volume.


Root Disk Encapsulation: No Free Space on the Disk

The slide for this topic shows a root disk with no free space at the beginning or end of the drive, before and after encapsulation. The resulting VTOC is the same as in the previous example (slices 3 and 4 become the public and private regions), but the private region slice overlaps the public region, and the public region contains an additional subdisk, rootdiskPriv, that protects the private region alongside rootdisk-01 through rootdisk-05.

Encapsulation Example: Root Disk with No Free Space on the Disk
In this example, no space is left at the beginning or the end of the disk. If there is no free space at the beginning or end of the disk for the private region, VxVM creates the private region using space taken from the swap region. The private region slice is defined as a partition that overlaps the public region slice, but this slice is never accessible to users. The swap slice on the disk decreases in size by the size of the private region. The private region is protected inside VxVM by a subdisk named disk_namePriv.



The volumes appear to be the same as in the previous examples, but if you display VTOC information using the prtvtoc command, you notice that swap has decreased by the size of the private region.



Initializing VxVM: Recovery Process
In some recovery situations, you should not run vxinstall to initialize VxVM, because rootdg is reinitialized and you lose any existing configuration information. To manually initialize rootdg, run these commands:

   # vxiod set 10
   # vxconfigd -k -m disable
   # vxdctl init
   # vxdctl initdmp
   # vxdg init rootdg

Ensuring Consistent Layouts
You should ensure a consistent layout on both the root disk and its mirrors. To achieve this:
1 Mirror the root disk.
2 Remove the plexes on the original root disk.
3 Reinitialize the original root disk.
4 Mirror back from the mirror to the original root disk, so that both mirrors have a consistent physical layout.


Note: The following file contains the original partition table entry, in fmthard format, of a device prior to an encapsulation:
/etc/vx/reconfig.d/disk.d/disk/vtoc

In the path name, disk is the actual device node name of the disk, for example, c0t0d0. If you initialize a disk rather than encapsulate it, no such file is created.
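If you need to restore the original partition table to the disk, you can replay this file with the Solaris fmthard utility; for example, for the hypothetical disk c0t0d0:

# fmthard -s /etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /dev/rdsk/c0t0d0s2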


Creating an Emergency Boot Disk


Why Create an Emergency Boot Disk?

Encapsulating and mirroring the boot disk ensures that if your boot disk is lost, the system continues to operate on the mirror disk. You can provide further protection for your system by creating an emergency boot disk that contains the operating system and VxVM software. You can use an emergency boot disk:
- To repair encapsulated boot failure
- When there is no backup system file
- When UNIX will not boot
An emergency boot disk brings the system up from a disk that already has knowledge of Volume Manager.


Emergency Boot Disk Creation

To create an emergency boot disk:
1 Format a disk, place a root partition and a swap partition on the disk, and label it. Make root large enough to hold usr, var, and opt.
2 Create a file system:
# newfs /dev/rdsk/c0t1d0s0
3 Mount and copy files to the new boot disk:
# mount -F ufs /dev/dsk/c0t1d0s0 /mnt
# find / /usr /var /opt -local -mount -print | cpio -pmudv /mnt
The find utility recursively searches the given directory paths and prints (to the standard output) the path names of all the files that are local to that file system. The cpio -p command reads the standard input to obtain a list of path names of files that will then be created and copied into the destination directory tree, which is the mount point, /mnt.
4 Place a boot block on the disk:
# /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
The installboot command installs the specified platform-dependent boot blocks to the given disk partition.


5 Edit the /mnt/etc/system file to comment out the non-forceload lines related to VxVM.
6 Edit the /mnt/etc/vfstab file to remove references to the root volumes (rootvol, /usr, /var, /opt, and so on), and place an entry for the emergency boot device as the root device. (A sample entry follows this list.)
7 Create the directories /mnt/tmp, /mnt/proc, and /mnt/mnt:
# mkdir /mnt/tmp /mnt/proc /mnt/mnt
8 Unmount /mnt:
# umount /mnt
9 Write down the Solaris device name for the emergency boot disk. For example:
# ls -l /dev/dsk/c0t1d0s0
/devices/pci@1f,0/pci@1/scsi@3/sd@e,0:a
For booting, you will need the device name: /pci@1f,0/pci@1/scsi@3/disk@e,0:a
10 Run the following command:
# init 0
11 Boot from the emergency boot disk. For example:
boot /pci@1f,0/pci@1/scsi@3/disk@e,0:a
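For step 6, assuming the emergency boot disk is c0t1d0 (as in the earlier commands), the resulting root entry in /mnt/etc/vfstab might look like the following sketch:

/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 / ufs 1 no -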


Booting from an Emergency Boot Disk

Once you have an emergency boot disk, you can boot your system on the disk by using the full Solaris device name. Then, you can mount the volume onto a directory:
# vxrecover -s rootvol
# mount -F ufs /dev/vx/dsk/rootvol /mnt

Now you can replace whatever files are missing. If you have to run vxlicinst, copy the created files in the /etc/vx/licenses/lic directory to the /mnt/etc/vx/licenses/lic directory. When you are finished, unmount the volumes, and reboot the system on the regular boot disk. If VxVM does not come up normally after booting up on the emergency disk due to rootdg failure, you can create the install-db flag file and reboot the system:
# touch /etc/vx/reconfig.d/state.d/install-db
# reboot

When the system comes back up, you can start VxVM manually by running the following commands:
# vxiod set 10
# vxconfigd
# vxrecover -s

You can also specify debugging options to the vxconfigd command to identify the problem.
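For example, vxconfigd accepts a numeric debug level with the -x option; the level shown here is illustrative, and you should check the vxconfigd(1m) manual page for the values supported by your release:

# vxconfigd -k -x 9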


Recovering rootdg
Temporarily Importing rootdg

By temporarily importing rootdg, you can bring rootdg from a failed system to a working system and repair it there. Use this method when you have an encapsulated root and do not have a backup system file and emergency boot disk. To temporarily import rootdg:
1 Find the disk group ID of rootdg:
# vxdisk -s list
...
Disk: c1t1d0s2
type: sliced
flags: online ready private autoconfig autoimport imported
diskid: 954254545.2009.train06
dgname: rootdg
dgid: 952435045.1025.train06
hostid: train06
2 On the importing host, import and temporarily rename the disk group:
# vxdg -tC -n tmpdg import 952435045.1025.train06
In this command, -t makes the import temporary (it does not persist across a reboot), -C clears the host locks, and -n assigns the temporary name tmpdg.
3 Repair files and volumes as needed.
4 Deport the disk group back to the original host:
# vxdg -h train06 deport tmpdg


Repairing the Failed Root

By temporarily importing rootdg on another host, you can repair the failed root. Mount the volume and replace files as needed:
# vxrecover -g tmpdg -s rootvol
# mount /dev/vx/dsk/tmpdg/rootvol /mnt
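When the repair is complete, a plausible wrap-up sequence (the deport command matches step 4 above) is to unmount the file system, stop the volumes in the temporary disk group, and deport it back to the original host:

# umount /mnt
# vxvol -g tmpdg stopall
# vxdg -h train06 deport tmpdg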

Notes on Temporary Imports

When you deport the disk group, you must set the hostid back to what it was prior to the import. If you do not perform this step, the system that owns rootdg will still not boot, and you will have to bring back the disks, import them again, and deport them with the -h flag. When you temporarily import rootdg to another system, you must bring all the disks in rootdg to the new system, or you may encounter problems when you return rootdg to the other system. Do not use the -f flag with the vxdg import command to work around missing disks, because VxVM will consider the missing disks to be bad, and you will have to resolve additional problems. If rootdg is still imported on the original host, importing it with the -f flag on another system may cause corruption in the configuration databases.


VxVM rootdg Failure and Recovery Scenarios

For each of the following rootdg failure scenarios, complete the table to specify the impact of the failure and a recovery strategy. Base your answers on your understanding of recovery procedures, the boot process, and the files associated with booting Solaris and VxVM. Solutions for each recovery scenario are presented at the end of this section.

Scenario 1: Disk Failure in rootdg

In this scenario:
- The boot disk is not encapsulated.
- There is only one disk in rootdg, and that disk fails.


What is the immediate impact of the failure?
- On the system?
- On volumes in rootdg?
- On other disk groups?
- On vxconfigd?

What software or configuration data has been lost or is inaccessible?
- rootdg configuration?
- Other disk group configurations?
- VxVM binaries?
- License keys?
- /etc/vx/volboot?
- /etc/system?
- /etc/vfstab?
- Data?

What is your recovery strategy?

Scenario 2: Nonencapsulated Boot Disk Failure

In this scenario:
- The boot disk is not encapsulated.
- The boot disk fails.


What is the immediate impact of the failure?
- On the system?
- On volumes in rootdg?
- On other disk groups?
- On vxconfigd?

What software or configuration data has been lost or is inaccessible?
- rootdg configuration?
- Other disk group configurations?
- VxVM binaries?
- License keys?
- /etc/vx/volboot?
- /etc/system?
- /etc/vfstab?
- Data?

What is your recovery strategy?

Scenario 3: Encapsulated Boot Disk Failure of Only Disk in rootdg

In this scenario:
- The boot disk is encapsulated, but not mirrored.
- The boot disk fails.
- The boot disk is the only disk in rootdg.


What is the immediate impact of the failure?
- On the system?
- On volumes in rootdg?
- On other disk groups?
- On vxconfigd?

What software or configuration data has been lost or is inaccessible?
- rootdg configuration?
- Other disk group configurations?
- VxVM binaries?
- License keys?
- /etc/vx/volboot?
- /etc/system?
- /etc/vfstab?
- Data?

What is your recovery strategy?

Scenario 4: Encapsulated Boot Disk Failure with Other Disks in rootdg

In this scenario:
- The boot disk is encapsulated, but not mirrored.
- The boot disk fails.
- Other disks exist in rootdg.


What is the immediate impact of the failure?
- On the system?
- On volumes in rootdg?
- On other disk groups?
- On vxconfigd?

What software or configuration data has been lost or is inaccessible?
- rootdg configuration?
- Other disk group configurations?
- VxVM binaries?
- License keys?
- /etc/vx/volboot?
- /etc/system?
- /etc/vfstab?
- Data?

What is your recovery strategy?

18-56

VERITAS Foundation Suite 3.5 for Solaris


Copyright 2002 VERITAS Software Corporation. All rights reserved.

VxVM rootdg Failure and Recovery Solutions

In the following rootdg failure and recovery solutions, the four scenarios are compared to the recommended practice of maintaining an encapsulated and mirrored boot disk.

Immediate Impact of the Failure

What is the immediate impact of the failure?

Encapsulated and mirrored boot disk:
- On the system? None
- On volumes in rootdg? None
- On other disk groups? None
- On vxconfigd? None

Scenario 1 (nonencapsulated boot disk; the only disk in rootdg fails):
- On the system? The system continues to run.
- On volumes in rootdg? No longer available. I/O attempts result in the disk failure being recorded, and the disk is detached.
- On other disk groups? None. I/O to currently started volumes continues, but state changes to volumes are not possible.
- On vxconfigd? Continues to run until a configuration change to rootdg is attempted, and then vxconfigd is disabled.

Scenario 2 (nonencapsulated boot disk failure), Scenario 3 (encapsulated, unmirrored boot disk failure; boot disk is the only disk in rootdg), and Scenario 4 (encapsulated, unmirrored boot disk failure; other disks exist in rootdg):
- In each of these cases, the answer to all four questions is the same: the system fails when I/O is attempted to a boot disk volume.


Lost Software and Configuration Data

What software and configuration data has been lost or is inaccessible?

Encapsulated and mirrored boot disk:
- rootdg configuration? Unaffected
- Other disk group configurations? Unaffected
- VxVM binaries? Unaffected
- License keys? Unaffected

Scenario 1 (nonencapsulated boot disk; the only disk in rootdg fails):
- rootdg configuration? Lost
- Other disk group configurations? Still stored in private regions on disks within each disk group
- VxVM binaries? Unaffected
- License keys? Unaffected

Scenario 2 (nonencapsulated boot disk failure):
- rootdg configuration? Still stored in private regions of disks in the rootdg disk group
- Other disk group configurations? Still stored in private regions on disks within each disk group
- VxVM binaries? Lost
- License keys? Lost

Scenario 3 (encapsulated, unmirrored boot disk failure; boot disk is the only disk in rootdg):
- rootdg configuration? Lost
- Other disk group configurations? Still stored in private regions on disks within each disk group
- VxVM binaries? Lost
- License keys? Lost

Scenario 4 (encapsulated, unmirrored boot disk failure; other disks exist in rootdg):
- rootdg configuration? Still stored in private regions of other disks in the rootdg disk group
- Other disk group configurations? Still stored in private regions on disks within each disk group
- VxVM binaries? Lost
- License keys? Lost


Lost Software and Configuration Data (continued)

What software and configuration data has been lost or is inaccessible?

Encapsulated and mirrored boot disk:
- /etc/vx/volboot? Unaffected
- /etc/system? Unaffected
- /etc/vfstab? Unaffected
- Data? Unaffected

Scenario 1 (nonencapsulated boot disk; the only disk in rootdg fails):
- /etc/vx/volboot? Unaffected
- /etc/system? Unaffected
- /etc/vfstab? Unaffected
- Data? Data in volumes in rootdg is lost.

Scenario 2 (nonencapsulated boot disk failure):
- /etc/vx/volboot? Lost
- /etc/system? Lost
- /etc/vfstab? Lost
- Data? Data in rootdg or other disk groups is still present, but must be checked for integrity before use.

Scenario 3 (encapsulated, unmirrored boot disk failure; boot disk is the only disk in rootdg):
- /etc/vx/volboot? Lost
- /etc/system? Lost
- /etc/vfstab? Lost
- Data? Data in rootdg is lost, including system data. Data in other disk groups remains on the disk and is accessible once the volumes can be started. The data requires integrity checking before use.

Scenario 4 (encapsulated, unmirrored boot disk failure; other disks exist in rootdg):
- /etc/vx/volboot? Lost
- /etc/system? Lost
- /etc/vfstab? Lost
- Data? Data stored on the boot disk is lost. Data on other volumes within rootdg, or any other imported disk groups, remains intact. Data requires integrity checking before use.


Recovery Strategies

What is your recovery strategy?

Encapsulated and mirrored boot disk:
Replace the failed disk using standard disk replacement procedures. If the system must be shut down to replace the failed disk, then boot off of an alternate boot disk.

Scenario 1 (nonencapsulated boot disk; the only disk in rootdg fails):
1. Physically replace the failed disk.
2. Reinitialize rootdg:
# vxdg init rootdg
3. Initialize a replacement disk and add it to rootdg. For example:
# vxdisksetup -i c1t8d0
# vxdg -g rootdg adddisk disk01=c1t8d0s2
4. Restart vxconfigd:
# vxconfigd -k -m enable
or, if vxconfigd is in disabled mode:
# vxdctl enable
5. Re-create any volumes in rootdg (see the example below).
6. Restore lost data from backup.
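For step 5, volumes can be re-created with vxassist before restoring their contents; for example, with an illustrative volume name and size:

# vxassist -g rootdg make vol01 100m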


Scenario 2 (nonencapsulated boot disk failure):
1. Physically replace the failed disk.
2. Reinstall Solaris using the same host name as before the disk failure.
3. Add VxVM packages. Do not run vxinstall, because the rootdg disk group already exists and contains volumes and data.
4. Add VxVM licenses to the system:
# vxlicinst
5. Remove the install-db flag file:
# rm /etc/vx/reconfig.d/state.d/install-db
6. Start I/O daemons:
# vxiod set 10
7. Start vxconfigd in disabled mode:
# vxconfigd -k -d
8. Reinitialize the volboot file:
# vxdctl init hostname
where hostname, if specified, is the same as the original host name before the disk failure.
9. Change vxconfigd to enabled mode to scan disks and import disk groups:
# vxdctl enable
If the host name used is identical to the host name before the disk failure, all disk groups, including the original rootdg, are located and imported.
10. Edit the /etc/vfstab file to replace any entries for file systems lost through the disk failure.
11. Mount file systems that should be mounted.
12. Reinstall or restore any application binaries and configuration files that normally reside on the boot disk.
13. Reboot the system.

As an alternative to this procedure, you can restore the boot disk file systems from backup. The VxVM binaries and all configuration files are also restored, including the volboot file and VxVM licenses. You can then reboot the system from the replaced system disk.

Scenario 3 (encapsulated, unmirrored boot disk failure; boot disk is the only disk in rootdg):
1. Physically replace the failed disk.
2. Reinstall Solaris using the same host name as before the disk failure.
3. Add VxVM packages.
4. Run vxinstall to re-create rootdg and encapsulate the boot disk. Leave all other disks alone when running vxinstall.
5. After running vxinstall, reboot the system to perform the encapsulation. All other relevant disk groups are imported, and their volumes are started as part of the reboot process.


Scenario 4 (encapsulated, unmirrored boot disk failure; other disks exist in rootdg):
1. Physically replace the failed disk.
2. Reinstall Solaris using the same host name as before the disk failure.
3. Add VxVM packages. Do not run vxinstall, because the rootdg disk group already exists and contains volumes and data.
4. Add VxVM licenses to the system:
# vxlicinst
5. Remove the install-db flag file:
# rm /etc/vx/reconfig.d/state.d/install-db
6. Start I/O daemons:
# vxiod set 10
7. Start vxconfigd in disabled mode:
# vxconfigd -k -d
8. Reinitialize the volboot file:
# vxdctl init hostname
where hostname, if specified, is the same as the original host name (before the disk failure).
9. Change vxconfigd to enabled mode to scan disks and import disk groups:
# vxdctl enable
If the host name used is identical to the host name before the disk failure, all disk groups, including the original rootdg, are located and imported.
10. Edit the /etc/vfstab file to replace any entries for file systems lost through the disk failure.
11. Mount file systems that should be mounted.
12. Reinstall or restore any application binaries and configuration files that normally reside on the boot disk.
13. Reboot the system.

As an alternative to this procedure, you can restore the boot disk file systems from backup. The VxVM binaries and all configuration files are also restored, including the volboot file and VxVM licenses. You can then reboot the system from the replaced system disk.


Summary
You should now be able to:
- Describe the phases of the Solaris boot process.
- Troubleshoot the boot process.
- Describe root disk encapsulation scenarios.
- Create an emergency boot disk.
- Recover rootdg for different boot disk failure scenarios.

This lesson described how VERITAS Volume Manager (VxVM) integrates into the Solaris boot process, the key scripts and files used in the boot process, and tips on troubleshooting the boot process. This lesson also provided procedures for creating an emergency boot disk and recovering from various boot disk failures.

Additional Resources
- VERITAS Volume Manager Administrator's Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VxVM.
- VERITAS Volume Manager Troubleshooting Guide: This guide provides information about how to recover from hardware failure, and how to understand and deal with VxVM error messages.


Lab 18: VxVM, Boot Disk, and rootdg Recovery


Goal
This exercise simulates encapsulated system disk failures. You must recover the system disk and boot to multiuser mode.

To Begin This Lab
To begin the lab, go to Appendix A, "Lab Exercises." Lab solutions are contained in Appendix B, "Review Answers and Lab Solutions."


Lesson 19: Administering DMP (Self Study)


Introduction
Overview
This lesson describes how to manage device discovery and administer dynamic multipathing. You learn how to administer the device discovery layer (DDL) and manage the dynamic multipathing (DMP) feature of VxVM.

Importance
The device discovery layer of VxVM enables you to dynamically add support for new types of disk arrays that are developed by third-party vendors. Dynamic multipathing enhances the reliability and performance of your environment by enabling path failover and load balancing.

Outline of Topics
- Discovering Disk Devices
- Administering the Device Discovery Layer
- Dynamic Multipathing
- Preventing Multipathing for a Device
- Managing DMP
- Controlling Automatic Restore Processes


Objectives

After completing this lesson, you will be able to:
- Describe the VxVM device discovery function.
- Manage the VxVM device discovery layer by using the vxddladm utility.
- Define active/active and active/passive disk arrays.
- Prevent multipathing for a specific device.
- Manage the VxVM dynamic multipathing feature by using the vxdmpadm command.
- Control the DMP restore daemon.


(Figure: the device discovery layer. The vxdiskconfig and vxconfigd utilities run at the user process level and pass requests to the DDL; the VxVM kernel and DMP run at the kernel process level. Device discovery locates and identifies the disks attached to a host, for arrays such as Shark, DGC, Hitachi, and EMC.)

Discovering Disk Devices


What Is Device Discovery?
Device discovery is the process of locating and identifying the disks that are attached to a host. VxVM features, such as dynamic multipathing (DMP), depend on device discovery. Device discovery enables you to dynamically add support for disk arrays from a variety of vendors. In earlier versions of VxVM, device discovery occurred at boot time. With VxVM 3.2 and later, when you add a new disk array, device discovery occurs automatically if vxconfigd is running. The VxVM device discovery layer (DDL) enables the discovery of devices attached to a host and enables you to add support dynamically for new disk arrays. With VxVM 3.2 and later, you can dynamically add a new disk array to a host and reconfigure VxVM to add new devices. In most cases, this can be done without rebooting the system.

Discovering and Configuring Disk Devices
To dynamically discover new devices, VxVM uses the vxdiskconfig utility. VxVM invokes vxdiskconfig whenever disks are physically connected to the host, when devices come online, or when Fibre Channel devices are zoned to the host. This utility scans for disks that were added since VxVM's configuration daemon was last started and dynamically configures the disks to be recognized by VxVM. The vxdiskconfig utility invokes the Solaris utility devfsadm to ensure that Solaris recognizes the disks, then invokes vxdctl enable, which rebuilds volume and plex device node directories and the DMP internal database to reflect the new state of the system.

Adding Support for a New Disk Array
With VxVM version 3.2 and later, to add support for a new type of disk array that is developed by a third-party vendor, you must add vendor-supplied libraries to a Solaris system by using the pkgadd command. The new disk array does not need to be connected to the system when the package is installed. When any of the disks in the new disk array are subsequently connected, and if vxconfigd is running, vxconfigd immediately invokes vxdiskconfig and includes the new disks in the VxVM device list. For example, to install the vendor-supplied package SEAGTda from a CD-ROM:
# pkgadd -d /cdrom/pkgdir SEAGTda

Scanning for Disks
When you add a new disk array, device discovery happens automatically if vxconfigd is running. If vxconfigd is not running:
1 Start the vxconfigd process by using:
# vxconfigd -m boot &
2 Then, run the command:
# vxdctl enable
This command invokes vxconfigd to scan for all disk devices and their attributes, to update the VxVM device list, and to reconfigure DMP with the new device database. There is no need to reboot the host.


Removing Support for a Disk Array
To remove support for a disk array, you remove the vendor-supplied library package by using the pkgrm command. For example, to remove support for the SEAGTda disk array:
# pkgrm SEAGTda

If the arrays remain physically connected to the host after support has been removed, they are listed in the OTHER_DISKS category, and the volumes remain available.


Administering the Device Discovery Layer


To administer the device discovery layer (DDL), you can use the vxddladm utility, which is an administrative interface to the DDL. The vxddladm command can perform the following tasks:
- List the types of arrays that are supported.
- Add support for an array to DDL.
- Remove support for an array from DDL.
- List information about excluded disk arrays.
- List the supported JBODs.
- Add JBOD support for disks from different vendors.
- Remove support for a JBOD.

Listing Supported Disk Arrays
To list all currently supported disk arrays:
# vxddladm listsupport

Excluding Support for a Disk Array
To exclude a particular array library from participating in device discovery, you use the vxddladm excludearray command. For example, to exclude support for a disk array that depends on the library libvxenc.so:
# vxddladm excludearray libname=libvxenc.so


You can also exclude support for a disk array from a particular vendor by specifying the vendor ID and product ID of the array. For example:
# vxddladm excludearray vid=SUN pid=T300

Reincluding Support for an Excluded Disk Array
If you previously excluded support for a particular disk array, and you want to remove the entry from the exclude list, you use vxddladm includearray. For example, to reinclude support for the array that depends on the library libvxenc.so:
# vxddladm includearray libname=libvxenc.so

The array library can then be used in device discovery. If vxconfigd is running, the library is added to the database again. If vxconfigd is not running, use vxdisk scandisks to discover the array and add its details to the database.

Listing Excluded Disk Arrays
To list all disk arrays that are currently excluded from use by VxVM:
# vxddladm listexclude

Listing Supported JBODs
To list supported disks in the JBOD category:


# vxddladm listjbod

Adding Support for JBODs
To add support for disks that are in the JBOD category, use the vxddladm addjbod command. For example, to add disks from the vendor Seagate:
# vxddladm addjbod vid=SEAGATE

To add support for T300 disks from Sun:


# vxddladm addjbod vid=SUN pid=T300

Removing Support for JBODs
To remove support for disks that are in the JBOD category, use the vxddladm rmjbod command. For example, to remove disks supplied by the vendor Seagate:
# vxddladm rmjbod vid=SEAGATE

To remove support for T300 disks from Sun:


# vxddladm rmjbod vid=SUN pid=T300

For more information, see the vxddladm(1m) manual page.


Dynamic Multipathing
The dynamic multipathing (DMP) feature of VxVM provides greater reliability and performance for your system by enabling path failover and load balancing.

What Is Dynamic Multipathing?
Dynamic multipathing is the method that VxVM uses to manage two or more hardware paths to a single drive. For example, the physical hardware can have at least two paths, such as c1t1d0 and c2t1d0, directing I/O to the same drive. VxVM arbitrarily selects one of the two names and creates a single device entry, then transfers data across both paths to spread the I/O. VxVM detects multipath systems by using the Universal World-Wide-Device Identifiers (WWD IDs) and manages multipath targets, such as disk arrays, which define policies for using more than one path.

Benefits of DMP
Benefits of DMP include:
- High availability: DMP provides greater reliability by providing a path failover mechanism. In the event of a loss of one connection to a disk, the system continues to access the critical data over the other sound connections to the disk, until you replace the failed path.
- Improved performance: DMP provides greater I/O throughput by balancing the I/O load uniformly across multiple I/O paths to the disk device.

Enabling DMP
With VxVM 3.1.1 and later, DMP is enabled by default. The operation of DMP relies on the vxdmp device driver, which must always be present on the system for VxVM to function properly. If you upgrade to VxVM 3.1.1 or later, DMP is automatically enabled, even if it was previously disabled. VxVM features, such as coexistence with third-party multipathing solutions and platform-independent device naming, require the DMP driver to be present on the system.
Caution: Do not disable DMP for VxVM version 3.1.1 or later. Running VxVM without the DMP layer is not a supported configuration. For VxVM version 3.1 or earlier, you can either fully enable or fully disable DMP.

Identifying DMP-Supported Arrays
The DMP feature of VxVM supports multiported disk arrays from various vendors. For a complete list of supported arrays, see the VERITAS Volume Manager Hardware Notes. Available in VxVM 3.1.1 or later, DMP can coexist with the Alternate Pathing (AP) driver from Sun. In earlier versions of VxVM, you had to choose one or the other. Now, you can use both. This feature requires the latest AP driver from Sun. See the VERITAS Volume Manager Hardware Notes for more information.


What Is a Multiported Disk Array?
A multiported disk array is an array that can be connected to host systems through multiple paths. The two basic types of multiported disk arrays are:
- Active/active disk arrays
- Active/passive disk arrays
For each supported array type, VxVM uses a multipathing policy that is based on the characteristics of the disk array.

Active/Active Disk Arrays
Active/active disk arrays permit several paths to be used concurrently for I/O. With these arrays, DMP provides greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the disk devices. If one connection to an array is lost, DMP automatically routes I/O over the other available connections to the array. VxVM versions 3.x and later use a balanced path policy to distribute I/Os across available paths for active/active arrays. (VxVM 2.5.x uses a round-robin policy.) Sequential I/Os starting within 256K are sent down the same path to optimize I/O throughput using disk track caches. However, large sequential I/Os that do not fall within this range are distributed across multiple paths to take advantage of load balancing. Examples of active/active disk arrays include A5x00 (SENA) disk arrays from Sun, the SPARC Storage Array (SSA), EMC Symmetrix, Hitachi 7700E, and the Winchester FlashDisk array.

Active/Passive Disk Arrays
Active/passive disk arrays permit only one path at a time to be used for I/O. The path that is used for I/O is called the active path, or primary path. An alternate path, or secondary path, is configured for use in the event that the primary path fails. If the primary path to the array is lost, DMP automatically routes I/O over the secondary path or other available primary paths. For active/passive disk arrays, VxVM uses the available primary path as long as it is accessible. DMP shifts I/Os to the secondary path only when the primary path fails. This is called the failover or standby mode of operation for I/Os. To avoid the continuous transfer of ownership of LUNs from one controller to another, which results in severe I/O slowdown, load balancing across paths is not performed for active/passive disk arrays. Examples of active/passive disk arrays are: DG Clarion with ATF driver, Hitachi 5700E, Hitachi 5800E, Nike (Model 10, 20), Galaxy, and Purple (T300).


Preventing Multipathing for a Device


If you have an array that cannot support the use of DMP, or if you want to use Sun's Alternate Pathing driver with VxVM, you can suppress DMP for some or all devices by using the vxdiskadm menu. Suppressing DMP for a device prevents multipathing without removing the DMP layer. It is important for you to suppress DMP for devices that do not support DMP. If you do not prevent DMP for unsupported arrays:
- VxVM commands, such as vxdisk list, show duplicated sets of disks as ONLINE for each path, even though only one path is used for I/O.
- Disk failures can be represented or displayed incorrectly by VxVM if DMP is running with an unsupported, unsuppressed array.
To manage the devices that participate in DMP, you can use the following options in the vxdiskadm main menu:
17 Prevent multipathing/Suppress devices from VxVM's view
18 Allow multipathing/Unsuppress devices from VxVM's view
19 List currently suppressed/non-multipathed devices

- To suppress a device from VxVM's view or prevent a device from being multipathed by vxdmp, you select vxdiskadm option 17 from the main menu.
- To unsuppress a device from VxVM's view or allow a device to be multipathed by vxdmp, you select vxdiskadm option 18 from the main menu.
- To list currently suppressed or nonmultipathed devices, you select vxdiskadm option 19 from the main menu.


Excluding Devices from Multipathing
When you select option 17 in the vxdiskadm main menu, the Exclude Devices submenu is displayed:
Exclude Devices
Menu: VolumeManager/Disk/ExcludeDevices
1 Suppress all paths through a controller from VxVM's view
2 Suppress a path from VxVM's view
3 Suppress disks from VxVM's view by specifying a VID:PID combination
4 Suppress all but one paths to a disk
5 Prevent multipathing of all disks on a controller by VxVM
6 Prevent multipathing of a disk by VxVM
7 Prevent multipathing of disks by specifying a VID:PID combination
8 List currently suppressed/non-multipathed devices

Using this menu, you can:
- Exclude all paths through a controller from VxVM's view (option 1) or prevent multipathing of all disks on a controller (option 5).
- Exclude specific paths from VxVM's view (option 2) or disable multipathing for specified paths (option 6).


- Exclude disks that match a specific vendor ID (VID) and product ID (PID) (option 3) or disable multipathing for disks that match a specified VID:PID combination (option 7).
- Define a path group made up of all paths to a disk to ensure that only one of the paths in the group is visible to VxVM (option 4).
- Display a list of devices currently excluded from VxVM's view or from multipathing (option 8).

Some of the operations require a system reboot to become effective. You are prompted whenever a reboot is required.

What Is the Difference Between Option 1 and Option 5?
Both options 1 and 5 send the command vxdmpadm disable to the kernel. Option 1, "Suppress all paths through a controller from VxVM's view," continues to allow the I/O to use both paths internally. After a reboot, vxdisk list does not show the suppressed disks. Option 5, "Prevent multipathing of all disks on a controller by VxVM," does not allow the I/O to use internal multipathing. The vxdisk list command shows all disks as ONLINE. Option 5 has no effect on arrays that are not performing dynamic multipathing or that do not support VxVM DMP.

Including Devices for Multipathing
For previously excluded devices, if you later decide that you want to reinclude the device in multipathing, then you select vxdiskadm option 18. A similar set of options is available in the Include Devices submenu.


Managing DMP
To list DMP database information and perform other administrative tasks, you can use the vxdmpadm utility. This utility is an administrative interface to the VxVM dynamic multipathing facility. Using the vxdmpadm utility, you can:
- List all controllers connected to disks that are attached to the host.
- List all the paths connected to a particular controller.
- List all paths under a DMP device.
- Retrieve the name of the DMP device that corresponds to a path.
- Enable or disable a host controller.
- Rename an enclosure.


The following vxdmpadm command options are described in more detail in the sections that follow.
- vxdmpadm listctlr: Lists disk controllers on the system
- vxdmpadm getsubpaths: Displays all subpaths of a controller or DMP node
- vxdmpadm getdmpnode: Displays the DMP nodes for a path or disk array
- vxdmpadm enable|disable: Enables or disables I/O to a specific host disk controller
- vxdmpadm start restore|stop restore: Starts or stops the DMP restore daemon
- vxdmpadm listenclosure: Displays attributes of a specified enclosure
- vxdmpadm setattr: Renames an enclosure


Listing Controllers on a System
Using vxdmpadm, you can list all the controllers on the system and display other related information stored in the DMP database. You can use this information to locate system hardware and make decisions about which controllers to enable or disable. To display a list of controllers on a system, you use the command:
vxdmpadm listctlr [all] [enclosure=enclosure] [ctlr=controller] [type=array_type]

In the syntax, listctlr all lists all controllers on the host. You can specify the enclosure, ctlr, and type attributes to display a list of controllers on a particular disk array or on a particular enclosure type.

Example: Listing All Controllers
For example, to list all controllers connected to disks on the host:
# vxdmpadm listctlr all
CTLR-NAME   ENCLR-TYPE    STATE     ENCLR-NAME
====================================================
c0          OTHER_DISKS   ENABLED   other_disks0
c1          SEAGATE       ENABLED   seagate0
c2          SEAGATE       ENABLED   seagate0


For each controller, the output lists the enclosure type, state, and name. In the example:
- The first controller, c0, is connected to disks that are not in a recognized DMP category.
- The second and third controllers, c1 and c2, are connected to an A5x00 (SEAGATE) disk array.
- All of the controllers are in the ENABLED state, which indicates that they are available for I/O operations.
If the state is DISABLED, the controller is unavailable for I/O operations. This indicates that either the administrator has disabled the controller or hardware failure has occurred.

Example: Listing Controllers of a Specific Type
To list controllers that belong to the specific enclosure enc0 and the enclosure type T300:
# vxdmpadm listctlr enclosure=enc0 type=T300
CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME
=================================================
c2          T300         ENABLED   enc0
c3          T300         ENABLED   enc0


Displaying the Paths Controlled by a DMP Node
To display the paths that are connected to a particular controller or LUN, you use the vxdmpadm getsubpaths command. In the syntax, you can specify a controller or a DMP node name:
vxdmpadm getsubpaths ctlr=controller
vxdmpadm getsubpaths dmpnodename=node_name

The specified DMP node must be a valid node in the /dev/vx/rdmp directory.

Example: Displaying Paths for a Controller
For example, to display all paths connected to controller c1:
# vxdmpadm getsubpaths ctlr=c1
NAME      STATE    PATH-TYPE  DMPNODENAME  ENCLR-TYPE  ENCLR-NAME
============================================================
c1t0d0s2  ENABLED             c2t0d0s2     SEAGATE     seagate0
c1t1d0s2  ENABLED             c2t1d0s2     SEAGATE     seagate0
c1t2d0s2  ENABLED             c2t2d0s2     SEAGATE     seagate0
c1t3d0s2  ENABLED             c2t3d0s2     SEAGATE     seagate0
c1t4d0s2  ENABLED             c2t4d0s2     SEAGATE     seagate0
c1t5d0s2  ENABLED             c2t5d0s2     SEAGATE     seagate0
c1t6d0s2  ENABLED             c2t6d0s2     SEAGATE     seagate0

The output displays the paths that are connected to the controller named c1 and includes the state of the path, the DMP node name, enclosure type, and enclosure name.

For example:
- Path c1t0d0s2 (represented by nodes in the /dev/rdsk and /dev/dsk directories) is in the ENABLED state.
- Path c1t0d0s2 is represented by the DMP metanode c2t0d0s2, which is represented by device nodes in the /dev/vx/dmp and /dev/vx/rdmp directories.

Example: Displaying Paths for a DMP Node
You can use the getsubpaths option combined with the dmpnodename attribute to list all paths that are connected to a LUN (represented by a DMP device). For example, to list information about paths that lead to the LUN named c1t0d0s2:
# vxdmpadm getsubpaths dmpnodename=c1t0d0s2
NAME      STATE     PATH-TYPE  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME
============================================================
c1t0d0s2  ENABLED              c1         SEAGATE     seagate0
c2t0d0s2  DISABLED             c2         SEAGATE     seagate0

In the example:
- The DMP device c1t0d0s2 has two paths: c1t0d0s2 and c2t0d0s2.
- Only one of these paths is available for I/O operations. Path c1t0d0s2 is available (ENABLED), and path c2t0d0s2 is not available (DISABLED).
- Both paths are in a SEAGATE disk array.

Example: Displaying Path Type for Active/Passive Arrays
For active/passive disk arrays, the PATH-TYPE column indicates primary and secondary paths. For example:
# vxdmpadm getsubpaths dmpnodename=c2t1d0s2
NAME      STATE    PATH-TYPE  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME
============================================================
c2t1d0s2  ENABLED  PRIMARY    c1         T300        enc0
c3t2d0s2  ENABLED  SECONDARY  c2         T300        enc0


Displaying the DMP Node That Controls a Path
To display the name of the DMP device that controls a path, you use the vxdmpadm getdmpnode command:
vxdmpadm getdmpnode nodename=node_name

The node_name must be a valid path in the /dev/rdsk directory. For example:
# vxdmpadm getdmpnode nodename=c3t2d1s2
NAME      STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
============================================================
c3t2d1s2  ENABLED  T300        2      2     0     enc0

The physical path c3t2d1s2 is owned by the DMP device c3t2d1s2, which has two paths to it. Both paths are enabled. You can use the enclosure=enclosure attribute to display a list of all DMP nodes for the specified enclosure:
# vxdmpadm getdmpnode enclosure=enc0
NAME      STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
============================================================
c3t2d1s2  ENABLED  T300        2      2     0     enc0
c3t2d2s2  ENABLED  T300        2      2     0     enc0
c3t2d3s2  ENABLED  T300        2      2     0     enc0
c3t2d4s2  ENABLED  T300        2      2     0     enc0


Enabling or Disabling I/O to a Controller
By disabling I/O to a host disk controller, you can prevent DMP from issuing I/O through a specified controller. You can disable I/O to a controller to perform maintenance on disk arrays or controllers attached to the host. For example, when replacing a system board, you can stop all I/O to the disk controllers connected to the board before you detach the board. For active/active disk arrays, when you disable I/O to one active path, all I/O shifts to other active paths. For active/passive disk arrays, when you disable I/O to one active path, all I/O shifts to a secondary path or to an active primary path on another controller.
Note: You cannot disable the last enabled path to the root disk or any other disk.

Disabling I/O to a Controller
To disable I/O to a controller, you use the command:
vxdmpadm disable [ctlr=ctlr_name] [enclosure=enclosure] [type=array_type]

To identify the controller, you can specify the controller name. The command also supports the enclosure name and array type attributes.
Note: When you disable I/O to a controller, disk, or path, you override the DMP restore daemon's ability to reset the path to ENABLED.
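As an illustration, a board-replacement maintenance sequence from the command line might look like the following sketch. The controller name c1 is only an example, and this assumes that each disk retains at least one other enabled path while c1 is disabled:
# vxdmpadm disable ctlr=c1
(Perform the hardware maintenance.)
# vxdmpadm enable ctlr=c1
# vxdmpadm getsubpaths ctlr=c1
The final getsubpaths command lets you verify that the paths through the controller are reported as ENABLED again.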


Enabling I/O to a Controller
After a maintenance task is completed, you can enable a previously disabled controller to accept I/O operations by using the vxdmpadm enable command:
vxdmpadm enable [ctlr=ctlr_name] [enclosure=enclosure] [type=array_type]

When you enable I/O to a controller:
For active/active disk arrays, the controller is used again for load balancing.
For active/passive disk arrays, the operation results in failback of I/O to the primary path.
Enabling or Disabling a Controller in VEA
To disable a controller in VEA:
1 Select the controller to be disabled.
2 Select Actions>Disable.
3 Complete the Disable Controller dialog box by specifying the name of the controller to disable.
4 Click OK to complete the operation.
To enable a controller in VEA, highlight the controller, select Actions>Enable, and complete the associated dialog box.


Managing Enclosures
You can use additional vxdmpadm commands to manage enclosures. To display attributes of all enclosures:
# vxdmpadm listenclosure all

To change the name of an enclosure:


# vxdmpadm setattr enclosure orig_name name=new_name

For example, to rename enc0 to sf_lab1:


# vxdmpadm setattr enclosure enc0 name=sf_lab1
In VEA: Highlight an enclosure, and select Actions>Rename Enclosure. Complete the associated dialog box.

Listing Information About Enclosures
To display the attributes of enclosures, you use the vxdmpadm listenclosure command:
vxdmpadm listenclosure [all|enclosure_name]

In the syntax:
Use the all attribute to display attributes for all enclosures on a system.
Specify the name of an enclosure to display attributes for a specific enclosure.
For example:
# vxdmpadm listenclosure all
ENCLR_NAME ENCLR_TYPE  ENCLR_SNO            MODE    STATUS
============================================================
others0    OTHER_DISKS OTHER_DISKS          PRIVATE CONNECTED
seagate0   SEAGATE     SEAGATE_DISKS        PRIVATE CONNECTED
enc0       T300        60020f20000001a90000 PRIVATE CONNECTED

The output lists enclosure name, enclosure type, and enclosure serial number (ENCLR_SNO). PRIVATE in the MODE column indicates that disks in the enclosure have private regions. The STATUS indicates that the enclosures are connected.
Renaming an Enclosure
To assign a meaningful name to an enclosure, you use the vxdmpadm setattr command:
# vxdmpadm setattr enclosure orig_name name=new_name


In the syntax, you specify the original name (orig_name) and set the name attribute to the new name (name=new_name). The new enclosure name must be unique within the disk group, and the maximum length of an enclosure name is 25 characters. For example, to change the name of the enclosure enc0 to sf_lab1:
# vxdmpadm setattr enclosure enc0 name=sf_lab1

The disk array is referred to by its new name for all subsequent operations. For example, if you list all enclosures, the new name is displayed:
# vxdmpadm listenclosure all
ENCLR_NAME ENCLR_TYPE  ENCLR_SNO            MODE    STATUS
============================================================
others0    OTHER_DISKS OTHER_DISKS          PRIVATE CONNECTED
seagate0   SEAGATE     SEAGATE_DISKS        PRIVATE CONNECTED
sf_lab1    T300        60020f20000001a90000 PRIVATE CONNECTED

Renaming an Enclosure in VEA
To rename an enclosure in VEA:
1 Select the enclosure to be renamed.
2 Select Actions>Rename Enclosure.
3 Complete the Rename Enclosure dialog box by verifying the current name and specifying a new name for the enclosure.
4 Click OK to complete the operation.


Controlling the Restore Daemon


The DMP restore daemon is an internal process that monitors DMP paths. To check its status:

# vxdmpadm stat restored
The number of daemons running: 1
The interval of daemon: 300
The policy of daemon: check_disabled

interval: Frequency of analysis (default: 300 seconds)
check_disabled: Only checks disabled paths (default)

To change daemon properties:
Stop the DMP restore daemon:
# vxdmpadm stop restore
Restart the daemon with new attributes:
# vxdmpadm start restore interval=400 policy=check_all
check_all: All paths are checked.

Controlling Automatic Restore Processes


DMP Restore Daemon
The DMP restore daemon is an internal process that monitors DMP paths and automatically enables paths that were previously disabled due to hardware failures, once the paths are back online.
Starting the DMP Restore Daemon
To start the DMP restore daemon, you use the start restore option of the vxdmpadm command:
vxdmpadm start restore [interval=interval] [policy=check_disabled|check_all]

The restore daemon analyzes the health of paths every interval seconds. The default interval is 300 seconds. Decreasing the interval can adversely affect performance.
You can specify one of two types of policies:
If the policy is check_disabled, the restore daemon checks the health of paths that were previously disabled due to hardware failures and revives them if they are back online.
If the policy is check_all, the restore daemon analyzes all paths in the system, revives the paths that are back online, and disables the paths that are inaccessible.
The default policy is check_disabled.


Checking the Status of the Restore Daemon
To check the status of the DMP restore daemon, you use the command:
vxdmpadm stat restored

Output similar to the following is displayed:


The number of daemons running: 1
The interval of daemon: 300
The policy of daemon: check_disabled

Stopping the DMP Restore Daemon
In order to change the interval or policy, you must stop the restore daemon and restart it with the new attributes specified. To stop the DMP restore daemon, you use the command:
vxdmpadm stop restore

Example: Changing Restore Daemon Properties
To change the restore daemon interval to 400 seconds and to change the policy to check_all, you use the following sequence of commands:
# vxdmpadm stop restore
# vxdmpadm start restore interval=400 policy=check_all
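To confirm that the new attributes took effect, you can check the daemon status again. Assuming the restart above succeeded, the output should resemble:
# vxdmpadm stat restored
The number of daemons running: 1
The interval of daemon: 400
The policy of daemon: check_all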


Summary
You should now be able to:
Describe the VxVM device discovery function.
Manage the VxVM device discovery layer by using the vxddladm utility.
Define active/active and active/passive disk arrays.
Prevent multipathing for a specific device.
Manage the VxVM dynamic multipathing feature by using the vxdmpadm command.
Control the DMP restore daemon.

Summary
This lesson described how to manage device discovery and administer dynamic multipathing. You learned how to administer the device discovery layer (DDL) and manage the dynamic multipathing (DMP) feature of VxVM.
Additional Resources
VERITAS Volume Manager Administrator's Guide
This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
VERITAS Volume Manager User's Guide - VERITAS Enterprise Administrator
This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
VERITAS Volume Manager Release Notes
This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.
VERITAS Volume Manager Hardware Notes
This document provides hardware support information for VERITAS Volume Manager.


Lab 19 (Optional)
Lab 19: Administering DMP
In this lab, you explore the performance and redundancy benefits of Volume Manager's dynamic multipathing (DMP) functionality. You become familiar with the use of:
VxVM's device discovery layer (DDL) utility, vxddladm
The DMP management utility, vxdmpadm
DMP-related options of vxdiskadm
Lab instructions are in Appendix A. Lab solutions are in Appendix B.

Lab 19: Administering DMP (Optional)


Goal
In this lab, you explore the performance and redundancy benefits of Volume Manager's dynamic multipathing (DMP) functionality. You become familiar with the use of VxVM's device discovery layer (DDL) utility, vxddladm; the DMP management utility, vxdmpadm; and DMP-related options of vxdiskadm. You demonstrate DMP's ability to automatically detect a failed path and manage its I/O accordingly by disabling and reenabling a DMP channel from the command line (to simulate a DMP controller failure) and by observing DMP's actions through benchmarking utility output.
In this lab, you also measure the performance benefits of VxVM's DMP by:
1 Setting up volumes with file systems and flooding them with various types of workloads and I/O
2 Recording the results of performance tests
3 Disabling one of the configured DMP paths
4 Running performance tests again, without using DMP, to note the differences
To Begin This Lab
To begin the lab, go to Appendix A, "Lab Exercises." Lab solutions are contained in Appendix B, "Lab Solutions."


20

Controlling Users (Self Study)

Controlling Users
This self-study lesson describes two types of user controls available with VERITAS File System:
Quotas: Quotas enable you to establish limits on the use of file system resources.
Access control lists (ACLs): ACLs enable you to secure files from different types of user access.


Introduction
Overview
When administering a file system, it is important to have the ability to control users by establishing limits on the use of file system resources and by securing files from different types of user access. VERITAS File System supports the use of quotas for limiting disk usage for users and groups and the use of access control lists (ACLs) for enhancing file security. In this lesson, you set user quotas and ACLs for a VERITAS file system.
Importance
By setting quotas on disk usage and restricting access to files through ACLs, you can more effectively manage and protect data against users who may, knowingly or unknowingly, overextend their disk usage or corrupt a file.
Outline of Topics
Who Uses Quotas?
Quota Limits
Quota Commands
Setting Quotas
Controlling User Access
Setting ACLs
Viewing ACLs


Objectives
After completing this lesson, you will be able to:
Describe environments in which quotas are beneficial.
Define types of quota limits.
Identify common VxFS quota commands.
Set user and group quotas for a VERITAS file system.
Describe situations in which ACLs are beneficial.
Set an ACL for a file using the setfacl command.
View ACLs for a file using the getfacl command.


Objectives
After completing this lesson, you will be able to:
Describe environments in which user quotas are beneficial.
Define the types of quota limits for establishing user and group quotas.
Identify uses of common VxFS quota commands.
Set user and group quotas for a VERITAS file system by using quota commands.
Describe situations in which ACLs are beneficial.
Set an ACL for a file using the setfacl command.
View ACLs for a file using the getfacl command.


Who Uses Quotas?


ISPs
Universities
Government agencies
[Figure: each user is allotted an equal ration of file system resources, for example, 100 blocks and 10 inodes.]

Who Uses Quotas?


Benefits of Quotas
Quotas are beneficial in environments in which:
Users are not personally accountable to the organization administering the system.
The organization requires the rationing of access to storage resources.
Examples: Organizations That Use Quotas
Internet service providers (ISPs) typically set quotas on the amount of disk space that subscribers can use. When providing disk resources to the general public, quotas are a necessary precaution, because the system administrators have little control over the actions of the users.
Colleges and universities also frequently establish quotas for student users. Quotas ensure that system resources are rationed on an equitable basis.


Quota Limits
[Figure: Soft limits, hard limits, and time limits apply separately to data blocks and to files.]

Quota Limits
Types of Quota Limits
VERITAS File System supports the use of Berkeley Software Distribution (BSD) style quotas that limit file usage and data block usage on a file system. For each of these resources, the system administrator can assign per-user or per-group quotas. Each quota consists of the following types of limits for each resource:
Hard limit
The hard limit represents an absolute limit on files or data blocks. The user or group can never exceed the hard limit under any circumstances.
Soft limit
The soft limit is a flexible limit on files or data blocks that can be exceeded for a limited amount of time. This enables users or groups to temporarily exceed limits as long as they fall under those limits before the allotted time expires. The soft limit must be lower than the hard limit.
Time limit
The time limit can be configured on a per-file system basis and applies only to the soft quota limit. There are separate time limits for files and data blocks. However, modified time limits apply to the entire file system and cannot be set for an individual user or group. The default time limit is seven days.
Only a privileged user, such as the system administrator, can assign hard and soft limits.


Effect of Quota Limits
When users or groups reach the soft limit, they receive a warning, but can continue to use file system resources until they reach the hard limit or until the time limit is reached. For example, with a soft limit of 100 blocks, a hard limit of 200 blocks, and the default seven-day time limit, a user can consume up to 200 blocks for at most seven days, but must drop back under 100 blocks before the seven days expire for further allocations to be permitted.
You can use a soft limit when the user or group needs to run applications that generate large temporary files. In this case, quota limit violations can be permitted for a limited time. However, if the user or group continuously exceeds the soft limit, further allocations are not permitted after the expiration of the time limit.


The Quota Files


To use quota commands:
Files named quotas and quotas.grp must exist in the root directory of the file system. These files are known as the external quota files.
VxFS also maintains internal quota files for internal use.

Quota Commands
The Quota Files
Two files must exist in the root directory of the file system for the quota commands to work:
quotas (for user quotas)
quotas.grp (for group quotas)
These files store usage limits for each user (quotas) or for each group (quotas.grp). The use of these files follows a BSD requirement that also applies to VxFS quotas. The files in the root directory are referred to as the external quota files. VxFS also maintains internal quota files for its internal use.
Internal vs. External Quota Files
The quota administration commands read and write the external quota files to get or change usage limits. The internal quota files are used to maintain counts of blocks and inodes used by each user or group. When quotas are turned on, the quota limits are copied from the external quota files into the internal quota files. While quotas are on, all changes in the usage information, as well as changes to quotas, are registered in the internal quota files. When quotas are turned off, the contents of the internal quota files are flushed into the external quota files so that all data is synchronized between the files.


API for Manipulating Disk Quotas
VxFS 3.4 and later versions implement the quota API documented in the Solaris quotactl(7I) manual page. Users who have written their own quota tools based on the Q_QUOTACTL ioctl can use those tools on VxFS file systems.


Quota Commands
vxedquota: Edit quotas
vxrepquota: Summarize quotas
vxquot: Summarize ownership
vxquota: View limits and usage
vxquotaon: Turn on
vxquotaoff: Turn off
mount option -o quota: Turn on at mount


VxFS Quota Commands
In general, quota administration for VxFS is performed using commands similar to UFS quota commands. On Solaris, the available quota commands are UFS-specific; that is, the commands work only on UFS file systems. For this reason, VxFS supports a similar set of commands that work only for VxFS file systems:
vxedquota: Edit quota limits.
vxrepquota: Display a summary of quotas and disk usage.
vxquot: Display a summary of ownership and usage.
vxquota: View quota limits and usage.
vxquotaon: Turn quotas on for a mounted VERITAS file system.
vxquotaoff: Turn quotas off for a mounted VERITAS file system.

Quota mount Option
The VxFS mount command supports a special mount option, -o quota, which you can use to turn on quotas for a file system at mount time.
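As a brief illustration of the reporting commands, you might summarize quotas and disk usage for a mounted file system as follows (the mount point /mnt1 is only an example; see the vxrepquota(1M) manual page for the supported options, such as reporting group quotas):
# vxrepquota /mnt1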


Setting Quotas
1. Create a quotas file and a quotas.grp file in the root directory of the file system.
# touch /root_directory/quotas
# touch /root_directory/quotas.grp

2. Turn on quotas.
After mounting a file system:
# vxquotaon [-u|-g] mount_point
When mounting a file system:
# mount -F vxfs -o quota|usrquota|grpquota ...

3. Invoke the quota editor.


# vxedquota username|UID|groupname|GID

A temporary file is opened in a default editor.



Setting Quotas
Overview: How to Set User and Group Quotas
To set user or group quotas, you follow these steps:
1 Create the quotas and quotas.grp files in the root directory using the touch command.
2 Turn on quotas: For a mounted file system, use the vxquotaon command. At mount time, use the -o quota mount option.
3 Invoke the quota editor for a specific user or group using the vxedquota command.
4 Modify soft and hard limit quota entries in the quota editor.
5 Edit the time limit, if desired, by using the vxedquota -t command.
6 Confirm your changes by viewing the quotas that you set using the vxquota -v command.
Step 1: Create the quotas and quotas.grp Files
You can use the touch command to create the quotas and quotas.grp files:
# touch /mnt/quotas
# touch /mnt/quotas.grp

The touch command creates an empty file if the file does not already exist. If the file already exists, the command updates the modification time of the file to the current date and time.

Step 2: Turn On Quotas
You can enable quotas at mount time or after a file system is mounted. To turn on quotas for a mounted file system, you use the syntax:
vxquotaon mount_point

Both user and group quotas are turned on for the file system. To turn on user quotas only, you can use the -u option:
vxquotaon -u mount_point

To turn on group quotas only, you can use the -g option:


vxquotaon -g mount_point

To mount a file system and turn on quotas at the same time, you use the syntax:
mount -F vxfs -o quota|usrquota|grpquota special mount_point

In the syntax:
If you specify -o quota, both user and group quotas are enabled.
If you specify -o usrquota, only user quotas are enabled.
If you specify -o grpquota, only group quotas are enabled.
For example, to mount a VERITAS file system on the device /dev/dsk/c0t5d0s2 at the mount point /mnt, and enable both user and group quotas, you type:
# mount -F vxfs -o quota /dev/dsk/c0t5d0s2 /mnt

Step 3: Invoke the Quota Editor
To invoke the quota editor to modify the quota limits, you use the vxedquota command with the appropriate user or group name or ID:
vxedquota username|UID
vxedquota groupname|GID

For example, to invoke the quota editor for the user with the username rsmith, you type:
# vxedquota rsmith

For each user or group, a temporary file is created with an ASCII representation of the current disk quotas for each mounted VxFS file system that has a quotas file in the root directory. The temporary file is invoked in an editor, with which you can modify existing quotas and add new quotas. After you exit the editor, vxedquota reads the temporary file and modifies the contents of the binary quota file to reflect the new quota limits. The editor invoked is vi unless the environment variable EDITOR specifies another editor.
Unassigned UIDs or GIDs can be specified to create quota limits for future users or groups. This can be useful for establishing default quotas for users or groups who are later assigned a UID or GID. Unassigned user or group names cannot be used similarly.
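For example, to have vxedquota open the temporary file in an editor other than vi, you can set the EDITOR variable before invoking the command (the editor path shown is only an example and assumes that editor is installed):
# EDITOR=/usr/local/bin/emacs; export EDITOR
# vxedquota rsmith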


Modifying Quota Limits


4. Modify soft and hard limits.
For example, to specify a hard limit of 200 blocks and 20 inodes and a soft limit of 100 blocks and 10 inodes:
fs /mnt1 blocks (soft=100, hard=200) inodes (soft=10, hard=20)
Edit the quota limits and save the changes.

5. Edit the time limit.
# vxedquota -t
For example, to specify a time limit of one hour:
fs /mnt1 blocks time limit = 1 hour, files time limit = 1 hour

Step 4: Modify Quota Limits
The vxedquota command creates a temporary file for a specific user or group. This file contains on-disk quotas for each mounted VxFS file system that has an internal quotas file or quotas.grp file. The temporary file has one or more lines similar to:
fs /mnt blocks (soft=0, hard=0) inodes (soft=0, hard=0)
fs /mnt1 blocks (soft=100, hard=200) inodes (soft=10, hard=20)

You can edit the soft or hard limits for blocks (data block usage) and for inodes (file usage).
Step 5: Edit the Time Limit
To modify the time limit for a file system, you use the syntax:
vxedquota -t

The temporary file created has one or more lines in the form:
fs mount_point blocks time limit=time, files time limit=time

The time consists of a number and a time unit, for example, 12 hours. The time unit can be month, week, day, hour, min, or sec. Characters appended to these keywords are ignored, so, for example, months or minutes is accepted. Time limits are printed in the greatest possible time unit such that the value is greater than or equal to one.


If the time limit is zero, the default time limits in /usr/include/sys/fs/vx_quota.h are used. If the default time limit is zero, then the first time you edit the time limit, the temporary file should contain a line similar to:
fs /mnt1 blocks time limit = 0 (default), files time limit = 0 (default)

To set the time limit to one hour, you edit the line as follows:
fs /mnt1 blocks time limit = 1 hour, files time limit = 1 hour


Confirming Quota Limits


6. Confirm the changes made to quota limits.
# vxquota -v username|groupname
For example:
# vxquota -v rsmith
Disk quotas for rsmith (uid 1001):
Filesystem usage quota limit timeleft files quota limit timeleft
/mnt1      0     100   200            0     10    20
The first group of columns shows usage, soft limit, and hard limit for data blocks. (When usage exceeds the soft limit and a time limit is set, the time left is displayed.) The second group shows usage, soft limit, hard limit, and time left for inodes.

To turn quotas off:
# vxquotaoff [-u|-g] mount_point

Step 6: Confirm Quota Changes
To view quotas for a given user or group, you use the syntax:
vxquota -v username|groupname

This displays the user or group quotas and disk usage on all mounted VERITAS file systems where the quotas file or quotas.grp file exists. For example, to view the quotas for the user rsmith, you type:
# vxquota -v rsmith

The output displayed contains lines similar to:


Disk quotas for rsmith (uid 1001):
Filesystem usage quota limit timeleft files quota limit timeleft
/mnt1      0     100   200            0     10    20

Turning Off Quotas
You can turn off quotas for a mounted file system by using the vxquotaoff command. To turn off quotas for a file system, you use the syntax:
vxquotaoff mount_point

To turn off user quotas only, you use the -u option:


vxquotaoff -u mount_point

To turn off group quotas only, you use the -g option:


vxquotaoff -g mount_point
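Putting the steps together, a complete session for the user rsmith on a file system mounted at /mnt1 might look like the following sketch (the device, mount point, user name, and limits are only examples):
# touch /mnt1/quotas /mnt1/quotas.grp
# vxquotaon /mnt1
# vxedquota rsmith
(In the editor, set a line such as: fs /mnt1 blocks (soft=100, hard=200) inodes (soft=10, hard=20), then save and exit.)
# vxquota -v rsmith
# vxquotaoff /mnt1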


What Are ACLs?


Access control lists (ACLs) enable you to define file permissions for:
File owner
File group owner
Other users
Specific users
Specific groups

Controlling User Access


What Are ACLs?
Traditional UNIX file protection enables you to specify read, write, and execute permissions for three classes of users: the file owner, the file group, and other users. An access control list (ACL) extends file protection by enabling you to define file permissions for specific users and groups.
An ACL stores a series of entries that identify specific users or groups and their access privileges for a particular file. A file can have its own ACL or can share an ACL with other files in the same directory. Using ACLs, you can specify detailed access permissions for multiple users and groups. VERITAS File System versions 3.2 and later and the Version 4 file system layout support the use of ACLs.
Example: Using ACLs
For example, standard UNIX file protection enables you to give a group read permission to a particular file. Using an ACL, you can give one specific member of that group write permission to that file as well, without granting write permission to the entire group.
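As a sketch of that example, assuming the file report already grants its group read permission and that the user pat is a member of that group (both names are hypothetical), you could grant pat write access as well by typing:
# setfacl -m user:pat:rw- report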


Setting ACLs
To set or modify an ACL for a file:
# setfacl options acl_entries filename

To give user bob read access to myfile:


# setfacl -m user:bob:r-- myfile

To remove access to myfile for user scott:


# setfacl -d user:scott myfile

Setting ACLs
The setfacl Command
To set or modify an ACL for a file, you use the setfacl command. The setfacl command enables you to replace an existing ACL or to add, modify, or delete ACL entries. The syntax for the setfacl command takes one of the following formats:
setfacl [-r] -s acl_entries file
setfacl [-r] -m acl_entries file
setfacl [-r] -d acl_entries file
setfacl [-r] -f acl_file file

In the syntax, you specify the command, followed by the option representing the type of operation, one or more ACL entries, and the name of the file for which you are setting the ACL.
Options
The -s option sets an ACL for a file. All old ACL entries are removed and replaced with the new ACL. You must specify ACL entries for the file owner, the file group, and others.
The -m option adds new ACL entries to a file or modifies existing entries of a file. If an entry already exists, the permissions you specify replace the current permissions. If an entry does not exist, it is created.
You use the -d option to remove an ACL entry for a user.


You use the -f option to set an ACL for a file with ACL entries contained in the named file acl_file. If you use - for the acl_file, then standard input is used to set the ACL for the file.
You use the -r option to recalculate the permissions for the ACL mask entry.
You can use the # character in an ACL file to indicate a comment. All characters from # to the end of the line are ignored.
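For instance, a minimal sketch of using the -f option follows. The file name acl.txt and its entries are hypothetical, but because -f replaces the entire ACL, the file includes the required owner, group, mask, and other entries:
# cat acl.txt
user::rwx
user:bob:r--
group::r--
mask:r--
other:---
# setfacl -f acl.txt myfile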

ACL Entries
An ACL entry consists of three elements:


entry_type:[UID|GID]:permissions
The entry type is the type of ACL entry on which to set file permissions, and can be user, group, other, or mask.
The UID represents a user name or identification number. For group permissions, you specify the GID, which represents a group name or identification number.
The permissions variable is where you specify read, write, and execute permissions as indicated by the symbolic characters rwx or by a permission number as used with the chmod command.

For example, an ACL entry that grants read/write permissions for the user bob is:
user:bob:rw-

For more details about the syntax of ACL entries, see the setfacl(1) manual pages.
Examples: Setting ACLs
To add one ACL entry to a file called myfile and give user bob read permission only:
# setfacl -m user:bob:r-- myfile
To delete the ACL entry for the user scott from the file myfile:
# setfacl -d user:scott myfile
Note: When deleting an ACL entry, you do not specify permissions.
To replace the entire ACL for the file myfile, with these specifications:
Give the file owner read, write, and execute permissions.
Give the file group owner read access only.
Give the user maria read access only.
Do not give access to any other users.
# setfacl -s user::rwx,group::r--,user:maria:r--,mask:rw-,other:--- myfile


Viewing ACLs
To view ACLs for a file:
# getfacl filename

To view ACLs for the file myfile:


# getfacl myfile

To set the same ACLs on newfile as on the existing file myfile:


# getfacl myfile | setfacl -f - newfile


Viewing ACLs
The getfacl Command
If you want to verify that an ACL was set for a file or to check whether a file has an associated ACL, you use the getfacl command. The getfacl command displays ACL entries for a file. The syntax for the getfacl command is:
getfacl filename

If you specify multiple file names in the command, the ACL entries for each file are separated by a blank line.
Note: If you want to find out whether an ACL exists for a file, but do not need to know what the ACL is, you can also use ls -l.
Example: Viewing ACLs
To view the ACLs for the file myfile, you type:
# getfacl myfile
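The command prints the file name, owner, and group, followed by the ACL entries. Assuming myfile is owned by the user rsmith in the group staff and carries only base entries plus a mask, the output would look something like the following (illustrative only; the exact entries depend on the file):
# file: myfile
# owner: rsmith
# group: staff
user::rw-
group::r-- #effective:r--
mask:r--
other:r--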

Example: Setting the Same ACL on Two Files
The file myfile already has an associated ACL. To set the same ACL on the file called newfile by using the standard input, you type:
# getfacl myfile | setfacl -f - newfile


Summary
You should now be able to:
Describe environments in which quotas are beneficial.
Define types of quota limits.
Identify common VxFS quota commands.
Set user and group quotas for a VERITAS file system.
Describe situations in which ACLs are beneficial.
Set an ACL for a file using the setfacl command.
View ACLs for a file using the getfacl command.


Summary
When administering a file system, you sometimes need to establish limits on the use of file system resources and secure files from different types of user or group access. VERITAS File System supports the use of quotas for limiting disk usage for individual users or groups and the use of access control lists (ACLs) for enhancing file security. In this lesson, you learned how to set user and group quotas and ACLs for a VERITAS file system.
Additional Resource
VERITAS File System System Administrator's Guide
This guide describes VERITAS File System concepts, how to use various utilities, and how to perform backup procedures.


Lab 20 (Optional)
Lab 20: Controlling Users This lab enables you to practice setting user quotas and creating ACLs. Lab instructions are in Appendix A. Lab solutions are in Appendix B.


Lab 20: Controlling Users (Optional)


Goal
This lab enables you to practice setting user quotas and creating ACLs.
To Begin This Lab
To begin the lab, go to Appendix A, "Lab Exercises." Lab solutions are contained in Appendix B, "Lab Solutions."


Lab Exercises

Lab 1: Virtual Objects


Introduction
In this theoretical exercise, you explore the relationship between Volume Manager objects and physical disks by determining how data in a volume maps to a physical disk. In each problem, you are given the address of a byte of data written to a logical volume. Using the information provided and your knowledge of the relationships between Volume Manager objects, you determine:
The physical drive to which the byte of data is written
The physical address of the byte of data on that drive


Lab: Problem 1
[Figure: Problem 1 diagram. Four physical disks (c0t0d0, c1t0d0, c1t1d0, c1t2d0), each with a 1-MB private region, hold the VxVM disks disk01 (1-MB offset), disk02 (1-MB offset), disk03 (190-MB offset), and disk04 (5-MB offset). The subdisks disk01-01, disk02-04, disk03-02, and disk04-03 make up the volume datavol through the plex datavol-01. The characters A, B, and C are written at 5 MB, 12 MB, and 17 MB into the volume.]

Problem 1
Character A
The character A is written at an offset of 5 MB into the volume. Use the graphic to answer the following questions:
1 What is the size of the concatenated volume?
2 Is it a mirrored volume?
3 Which subdisk is the data being written to?
4 Where in the subdisk (in MB) is the data being written?
5 Which physical disk is the data being written to?
6 What is the physical address (in MB) on this disk that the data is being written to? To answer this question, first identify the offset of the public region and the offset of the disk within the public region.
Offset of subdisk in disk04 in public region:
Location in the subdisk that the data is written:
Offset in the disk c1t2d0 of public region:
Character B
The character B is written at 12 MB into the volume.
7 Which subdisk is the data being written to?
8 Where in the subdisk (in MB) is the data being written?
9 Which physical disk is the data being written to?

10 What is the physical address (in MB) on the disk that the data is being written to?
Offset of subdisk in disk02 in public region:
Location in the subdisk that the data is written:
Offset in the disk c1t0d0 of public region:
Character C
The character C is written at 17 MB into the volume.
11 Which subdisk is the data being written to?
12 Where in the subdisk (in MB) is the data being written?
13 Which physical disk is the data being written to?
14 What is the physical address (in MB) on the disk that the data is being written to?
Offset of subdisk in disk02 in public region:
Location in the subdisk that the data is written:
Offset in the disk c1t0d0 of public region:


Lab: Problem 2
[Figure: Problem 2 diagram. Four physical disks (c0t0d0, c1t0d0, c1t1d0, c1t2d0), each with a 1-MB private region, hold the VxVM disks disk01 (100-MB offset), disk02 (150-MB offset), disk03 (190-MB offset), and disk04 (0-MB offset), each 10 MB in size. The volume payvol has two plexes, payvol-01 and payvol-02, built from the subdisks disk01-01, disk02-04, disk03-02, and disk04-03. The characters A, B, and C are written at 5 MB, 12 MB, and 17 MB into the volume.]

Problem 2
Character A
The character A is written at 5 MB into the volume.
1 What is the size of the concatenated volume?
2 Is it a mirrored volume?
3 Which subdisks is the data being written to?
4 Where in the subdisks (in MB) is the data being written?
5 Which physical disks is the data being written to?
6 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c0t0d0 and c1t2d0, record:
Offset of subdisk in disk01 and disk04 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:

Character B
The character B is written at 12 MB into the volume.
7 Which subdisks is the data being written to?


8 Where in the subdisk (in MB) is the data being written?
9 Which physical disks is the data being written to?
10 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c1t1d0 and c1t0d0, record:
Offset of subdisk in disk03 and disk02 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:
Character C
The character C is written at 17 MB into the volume.
11 Which subdisks is the data being written to?
12 Where in the subdisks (in MB) is the data being written?
13 Which physical disk is the data being written to?
14 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c1t1d0 and c1t0d0, record:
Offset of subdisk in disk03 and disk02 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:


Lab: Problem 3
[Figure: Problem 3 diagram. Four physical disks (c0t0d0, c1t0d0, c1t1d0, c1t2d0) hold the VxVM disks disk01 (100-MB offset, 1-MB private region), disk02 (150-MB offset, 2-MB private region), disk03 (190-MB offset, 2-MB private region), and disk04 (0-MB offset, 1-MB private region), with subdisks ranging from 6 MB to 14 MB in size. The volume mktvol has two plexes, mktvol-01 and mktvol-02, built from the subdisks disk01-01, disk02-04, disk03-02, and disk04-03. The characters A, B, and C are written at 5 MB, 12 MB, and 17 MB into the volume.]

Problem 3
Character A
The character A is written at 5 MB into the volume.
1 What is the size of the concatenated volume?
2 Is it a mirrored volume?
3 Which subdisks is the data being written to?
4 Where in the subdisk (in MB) is the data being written?
5 Which physical disks is the data being written to?
6 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c0t0d0 and c1t2d0, record:
Offset of subdisk in disk01 and disk04 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:
Character B
The character B is written at 12 MB into the volume.
7 Which subdisks is the data being written to?
8 Where in the subdisk (in MB) is the data being written?


9 Which physical disk is the data being written to?
10 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c1t1d0 and c1t2d0, record:
Offset of subdisk in disk03 and disk04 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:
Character C
The character C is written at 17 MB into the volume.
11 Which subdisks is the data being written to?
12 Where in the subdisk (in MB) is the data being written?
13 Which physical disk is the data being written to?
14 What is the physical address (in MB) on these disks that the data is being written to?
For each of the disks c1t1d0 and c1t0d0, record:
Offset of subdisk in disk03 and disk02 in public region:
Location in the subdisk that the data is written:
Offset in the disk of public region:


Lab 2: Installing VERITAS Foundation Suite


Introduction
In this exercise, you add the Foundation Suite packages and install VERITAS Volume Manager.
Preinstallation
1 What VRTS packages are currently installed on your system?
2 Does the boot disk have two free partitions, 2048 contiguous sectors available, and partition 2 tagged as backup?
3 Before installing VxVM, save your boot disk information by using the prtvtoc command. Save the output to a file for later use. Do not store the file in /tmp.
4 What VRTS packages are currently referenced by the /etc/system file?
5 Before installing VxVM with or without an encapsulated boot disk, save the /etc/system and /etc/vfstab files into backup files named /etc/system.preVM and /etc/vfstab.preVM.
Note: By saving a copy of the system files before encapsulating the boot disk, you have another way to get the system up and running if rootdg fails.
Adding Packages and Installing VxVM
1 Add the VERITAS Volume Manager software, documentation, and manual pages packages. The instructor provides you with the location of the packages.
Note: Do not install the VRTSob, VRTSobgui, VRTSvmpro, or VRTSfspro packages. These packages are installed in the next lab. The VRTSvlic licensing package should already be installed. If this package is not installed, you should install VRTSvlic before installing any other packages.
2 Is a VxVM license installed? If no license is installed, then add a license.
3 Run the Volume Manager installation program. During the installation:
Add a license key, if necessary. Obtain valid license keys from your instructor.
Do not use enclosure-based naming.
Select a Custom install.
Encapsulate the boot disk, and accept the default name rootdisk as the boot disk name.
Leave all other disks alone. Do not add any other disks to the rootdg disk group at this time.

4 When prompted by the vxinstall program, shut down and reboot your machine.
After Installing VxVM
1 What are the main differences between the post-vxinstall encapsulated boot disk and the pre-vxinstall unencapsulated boot disk?
2 What are the main differences between the post-vxinstall /etc/system file and the pre-vxinstall /etc/system file?
3 What are the main differences between the post-vxinstall /etc/vfstab file and the pre-vxinstall /etc/vfstab file?
4 Check /.profile to ensure that the following paths are present.
Note: This may be done in the jumpstart of your system prior to this lab, but the paths may need to be added after a normal install.
# PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSobgui/bin:/usr/sbin:/opt/VRTSob/bin:/opt/VRTSvxfs/sbin:/etc/fs/vxfs:/usr/lib/fs/vxfs
# MANPATH=$MANPATH:/opt/VRTS/man
# export PATH MANPATH

VERITAS File System Installation
1 VERITAS File System may already be installed on your system. Verify the installation and determine the version of the VxFS package.
2 Has a VxFS license key been installed?
More Installation Exploration (Optional)
1 When does the VxVM license expire?
2 What is the version and revision number of the installed version of VxVM?
3 What start-up scripts are added to the system by the install program?
4 Examine the file in which VxVM has saved the VTOC data of the encapsulated root disk.
5 What daemons are running after the system boots under VxVM control?


Lab 3: VxVM Interfaces


Introduction
In this lab, you set up VEA and explore its interface and options. You also invoke the vxdiskadm menu interface and display information about CLI commands by accessing the VxVM manual pages. Before you begin this lab, you should have already installed VxVM and added the VRTSvxvm and VRTSvmman software packages, and you should have an encapsulated boot disk in rootdg. To verify that the VRTSvxvm and VRTSvmman software packages are loaded, run:
# pkginfo | grep VRTS

Setting Up VEA
1 Install the VEA software. The instructor provides you with the location of the packages.
2 Add the directory containing the VEA startup scripts to your PATH environment variable in your .profile file.
3 Is the VEA server running? If not, start it.
4 Start the Volume Manager's graphical user interface.
5 Connect to your system as root. Your instructor provides you with the password.
6 Examine the VEA log file.
Exploring the VEA Interface
1 Access the Help system in VEA.
2 What disks are available to the OS?
3 What is the content of the boot disk's header?
4 Display a graphical view of the boot disk.
5 What are the defined disk groups?
6 What volumes are defined in the rootdg disk group?


7 What type of file system does each volume on the boot disk in rootdg contain?
8 Execute the Disk Scan command.
9 What commands were executed by the Disk Scan task?
10 Stop the Volume Manager's graphical interface.
Adding a New Administrator Account for VEA
1 Create a root-equivalent administrative account named admin1 for use with VEA.
2 Test the new account. After you have tested the new account, exit VEA.
Automatically Connecting at Startup
1 Start the VEA client.
2 Connect to your system as root and specify that you want to save authentication information.
3 Configure VEA to automatically connect to the host when you start the VEA client.
4 Exit VEA, and then reconnect to test your configuration settings.
Exploring vxdiskadm
1 From the command line, invoke the text-based VxVM menu interface.
2 Display information about the menu or about specific commands.
3 What disks are available to the OS?
4 What is the content of the configuration database for the root disk?
5 Exit the vxdiskadm interface.


Accessing CLI Commands (Optional)
Note: This exercise introduces four of the most commonly used VxVM commands: vxassist, vxdisk, vxdg, and vxprint. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, you should start by reading the manual pages for each of these commands.
vxassist
1 From the command line, invoke the VxVM manual pages and read about the vxassist command.
2 What vxassist command parameter creates a VxVM volume?
vxdisk
1 From the command line, invoke the VxVM manual pages and read about the vxdisk command.
2 What disks are available to VxVM?
3 How do you display the header contents of the root disk?

vxdg
1 From the command line, invoke the VxVM manual pages and read about the vxdg command.
2 How do you list locally imported disk groups?
3 What is the content of the configuration database for the rootdg disk group?
vxprint
1 From the command line, invoke the VxVM manual pages and read about the vxprint command.
2 What volumes are defined in rootdg?
3 What is the volume type of the boot disk's volumes?


Lab 4: Managing Disks


Introduction
In this lab, you use the VxVM interfaces to view the status of disks, initialize disks, move disks to the free disk pool, and move disks into and out of a disk group. Try to perform this lab using the CLI interface. The solutions for all three methods (VEA, CLI, and vxdiskadm) are included in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.
Managing Disks: CLI
1 View the status of the disks on your system.
2 Add one uninitialized disk to the free disk pool and view the status of the disk devices to verify your action.
3 Add the disk to the disk group rootdg and view the status of the disk devices to verify your action.
4 Remove the disk from rootdg and place it in the free disk pool, then view the status of the disk devices to verify your action.
5 Remove the disk from the free disk pool and return the disk to an uninitialized state. View the status of the disk devices to verify your action.
6 Add two disks to the free disk pool and view the status of the disk devices to verify your action.
7 Remove one of the disks from the free disk pool and return it to an uninitialized state. View the status of the disk devices to verify your action.
8 Add the same disk back to the free disk pool. You must still perform an initialize step even though the disk was initialized earlier. View the status of the disk devices to verify your action.


Lab 5: Managing Disk Groups


Introduction
In this lab, you create new disk groups, remove disks from disk groups, deport and import disk groups, and destroy disk groups. This lab includes three separate exercises:
The first exercise uses the VEA interface.
The second exercise uses the command line interface.
The third exercise is optional and requires participation from the whole class.
If you use object names other than the ones provided, substitute the names accordingly in the commands.
Managing Disk Groups: VEA
1 Run and log on to the VEA interface.
2 View all the disk devices on the system.
3 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
4 Add one more disk to your disk group. Initialize the disk and view all the disk devices on the system.
5 Remove all of the disks from your disk group. What happens to your disk group?
6 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
7 Deport your disk group. Do not give it a new owner. View all the disk devices on the system.
8 Take the disk that was in your disk group and add it to rootdg. Were you successful?


9 Import your datadg disk group and view all the disk devices on the system.
10 Deport datadg and assign your machine name, for example, train5, as the New Host.
11 Import the disk group and change its name to data3dg. View all the disk devices on the system.
12 Deport the disk group data3dg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do this?
13 Import data3dg. Were you successful?
14 Now import data3dg and overwrite the disk group lock. What did you have to do to import it and why?
15 Destroy data3dg. View all the disk devices on the system.
At the end of this lab you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks or in the free disk pool.


Managing Disk Groups: CLI
Note: Initialize your data disks by using the command line before beginning this lab, if the disks are not already initialized.
1 Create a disk group data4dg with at least one drive. Verify your action.
2 Deport disk group data4dg, then import the disk group back to your machine. Verify your action.
3 Destroy the disk group data4dg. Verify your action.
4 Create a new disk group data4dg with an older version assigned to it. Verify your action.
5 Upgrade the disk group to version 60.
6 How would you check that you have upgraded the version?
7 Add two more disks to the disk group data4dg. You should now have three disks in your disk group. Verify your action.
8 Remove a disk from the disk group data4dg. Verify your action.
9 Deport disk group data4dg and assign the host name as the host name of your machine. Verify your action.
10 View the status of the disks in the deported disk group using vxdisk list device_tag. What is in the hostid field?
11 Remove a disk from data4dg. Why does this fail?
12 Import the disk group data4dg. Verify your action.
13 Try again to remove a disk from data4dg. Does it work this time?
14 Deport the disk group data4dg and do not assign a host name. Verify your action.
15 View the status of the disk in the deported disk group using vxdisk list device_tag. What is in the hostid field?


16 Add the disk in data4dg to rootdg. Were you successful?
17 Uninitialize a disk that is in data4dg. Were you successful?
18 Import the disk group data4dg. Were you successful?
At the end of this lab you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks.


Group Activity: Managing Disk Groups (Optional)
The purpose of this lab is to physically deport a one-disk disk group with a file system to another host, import the disk group onto the new host, and remount the file system onto the new host. Then deport the disk group back to the original host.
This lab can be performed on a pair of systems sharing physical access to the same disk array, or between unconnected systems if you have removable disk packs.
If you have removable disk packs, this lab is best performed with the whole class, with participants working initially on their own machines and then physically moving their disk groups to a host machine. The lab requires a host machine with empty slots in the multipack. (Remove all the disks from the disk pack and run devfsadm and vxdctl enable.) This host can be a spare machine, or it can be one of the delegate machines.
If you have shared access to a disk array with another student and do not want to physically move disk packs, participants work initially on their own machines and then logically move their disk groups to another machine that shares physical access to their disk array.
It is important that the names of disk groups and volumes be unique throughout the classroom for this exercise. As a recommendation, each participant (or team) should name the disk group and volume using their own name. For example, Jane Doe should use jdoedg and jdoevol.
Disk group: yournamedg
Volume:     yournamevol

1 Create a disk group with one disk in it called yournamedg.
2 Create a volume called yournamevol in this disk group.
3 Create a file system on this volume.
  # newfs /dev/vx/rdsk/yournamedg/yournamevol
4 Create a directory and mount the file system.
  # mkdir /mount_point
  # mount /dev/vx/dsk/yournamedg/yournamevol /mount_point


5 Create a uniquely recognizable file in the root of the mounted file system.
  # echo "My name is Jane Doe" > /mount_point/jane_doe
6 Unmount the file system.
  # umount /mount_point
7 Deport the disk group to the new host.
8 If you are not physically moving the disks, import your disk group on the other machine and proceed to the next step in the lab. If you are physically moving the disks, remove the disk from the old host and place it in an empty slot in the new host. After all the empty slots in the multipack are full and all of the disks have spun up, the instructor will continue the lab as a demonstration with the following substeps on the new host:
  a Demonstrate that the OS cannot detect the disks.
    # format
  b Demonstrate that VxVM cannot detect the disks.
    # vxdisk list
  c Configure the devices.
    # devfsadm
  d Demonstrate that the OS can now detect the disks, but that VxVM still cannot detect the disks.
    # format
    # vxdisk list
  e Force the VxVM configuration daemon to rescan for the disks.
    # vxdctl enable
  f Demonstrate that VxVM can now detect the disks.
    # vxdisk list
  g Import one or more of the disk groups. If the participants deported the disk group correctly, vxdisk list displays the new disk groups as imported disk groups on the new host. Otherwise, import the disk group using the -C option.
9 Display the state of the volumes using vxprint and VEA. The volumes are displayed with an alert and stopped.
  # vxprint
10 Start the volumes by using VEA or vxvol start volume_name. You may need to specify -g diskgroup if the volume name is not unique.


11 Create a new mount point and mount one of the volumes. Demonstrate that all the files are still accessible.
12 Unmount the volume.
13 If you did not physically move the disks:
   a Deport the disk group without changing the host name.
   b Import the disk group back on your original machine.
   If the disks were physically moved:
   a At the end of the demonstration, participants should move their disk groups back to their own machines (without deporting).
   b Import the disk groups on their own machines. This simulates recovery after a host crash. You must use the -C option to do the import.
14 Display the disk groups on your system.
15 Destroy the practice disk group.

At the end of this lab, you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks.


Lab 6: Creating a Volume


Introduction

In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume, creating a volume with a file system, and mounting a file system. Attempt to perform this lab using command line interface commands. If you use object names other than the ones provided, substitute the names accordingly in the commands. After each step, use the VEA interface to view the volume layout in the main window and in the Volume View window. Solutions for performing tasks from the command line and using the VERITAS Enterprise Administrator (VEA) are included in the Lab Solutions appendix.

Setup

A minimum of four disks is required to perform this lab, not including the root disk.

Creating Volumes: CLI

1 Add four initialized disks to a disk group called datadg. Verify your action using vxdisk list.
  Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
2 Create a 50-MB concatenated volume with one drive.
3 Display the volume layout. What names have been assigned to the plex and subdisks?
4 Remove the volume.
5 Create a 50-MB striped volume on two disks and specify which two disks to use in creating the volume. What names have been assigned to the plex and subdisks?
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. What do you notice about the plexes?
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. Was the volume created?
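
If you want to check your syntax, the volumes in steps 2 through 6 can be created with commands similar to the following sketch (volume and disk names are assumptions):
  # vxassist -g datadg make vol01 50m                                 (concatenated)
  # vxassist -g datadg make vol02 50m layout=stripe datadg01 datadg02 (striped on named disks)
  # vxassist -g datadg make vol03 20m layout=stripe,mirror ncol=2 stripeunit=128k
  # vxprint -g datadg -ht                                             (display the layout)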

8 Create a 20-MB striped volume with a mirror that has one less column (3) than the number of drives. Was the volume created?
9 Create the same volume specified in step 7, but without the mirror. What names have been assigned to the plex and subdisks?
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number of drives in the disk group. Was the volume created? Run the command again, but use one less column. What is different about the structure?
11 Remove the volumes created in this exercise.

More Practice (Optional)

This optional guided practice illustrates how to use the /etc/default/vxassist and /etc/default/alt_vxassist files to create volumes with defaults specified by the user.

1 Create two files in /etc/default:
  a Create a file called vxassist that includes the following:
    # when mirroring create three mirrors
    nmirror=3
  b Create a file called alt_vxassist that includes the following:
    # use 256K as the default stripe unit size for regular volumes
    stripe_stwid=256k


2 Use these files when creating the following volumes:
  Create a 100-MB volume using layout=mirror:
  # vxassist -g datadg make testvol 100m layout=mirror
  Create a 100-MB, two-column striped volume using -d alt_vxassist so that Volume Manager uses the alternate defaults file:
  # vxassist -g datadg -d alt_vxassist make testvol2 100m layout=stripe
3 View the layout of these volumes using VEA and by using vxprint. What do you notice?

4 Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.


Lab 7: Configuring Volumes


Introduction

This lab provides additional practice in configuring volume attributes. In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes. You also practice creating layered volumes.

Setup

Before you begin this lab, ensure that any volumes created in previous labs have been removed.

Configuring Volume Attributes: CLI

Complete this exercise by using the command line interface. If you use object names other than the ones provided, substitute the names accordingly in the commands. Solutions for performing these tasks from the command line and using VEA are described in the Lab Solutions appendix.

1 Create a 20-MB, two-column striped volume with a mirror.
2 Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
3 Remove the volume you just made, and re-create it by specifying the four disks in order of highest target first (for example, datadg04, datadg03, datadg02, datadg01, where datadg04=c1t15d0, datadg03=c1t14d0, and so on).
4 Display the volume layout. How are the disks allocated this time?
5 Add a mirror to the existing volume. Were you successful? Why or why not?
6 Remove one of the two mirrors, and display the volume layout.
7 Add a mirror to the existing volume, and display the volume layout.
8 Add a dirty region log to the existing volume and specify the disk to use for the DRL. Display the volume layout.
9 Change the volume read policy to round robin, and display the volume layout.
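
A sketch of the kinds of commands involved in steps 3, 7, 8, and 9 (the volume name volmir is an assumption):
  # vxassist -g datadg -o ordered make volmir 20m layout=mirror-stripe ncol=2 datadg04 datadg03 datadg02 datadg01
  # vxassist -g datadg mirror volmir                        (add a mirror)
  # vxassist -g datadg addlog volmir logtype=drl datadg01   (add a DRL on a named disk)
  # vxvol -g datadg rdpol round volmir                      (set the round-robin read policy)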


10 Create a file system for the existing volume.
11 Mount the file system at the mount point /mydirectory and add files. Verify that the files were added to the new volume.
12 View the mount points using df -k. Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk.
13 Unmount and remove the volume with the file system.

Creating Layered Volumes: VEA

Complete this exercise by using the VEA interface.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.

1 First, remove any volumes that you created in the previous lab.
2 Create a 100-MB Striped Pro volume with no logging. What command was used to create this volume? Hint: View the task properties.
3 Create a Concatenated Pro volume with no logging. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 8 GB, then create a 10-GB volume. What command was used to create this volume?
4 View the volumes in VEA and compare the layouts.
5 View the volumes from the command line.
6 Remove all of the volumes.
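
For comparison, the Pro layouts correspond to the layered layouts stripe-mirror and concat-mirror, so the underlying commands should resemble the following sketch (volume names are assumptions):
  # vxassist -g datadg make provol1 100m layout=stripe-mirror
  # vxassist -g datadg make provol2 10g layout=concat-mirror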


Lab 8: Volume Maintenance


Introduction

In this lab, you resize volumes, change volume layouts, and create volume snapshots.

Setup

To perform this lab, you should have at least four disks in the disk group that you are using. You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued.

Resizing a Volume

1 If you have not already done so, remove the volumes created in the previous lab.
2 Create a 20-MB concatenated mirrored volume with a file system /myfs, and mount the volume.
3 View the layout of the volume.
4 Add data to the volume and verify that the file has been added.
5 Expand the file system and volume to 100 MB.

Changing the Volume Layout

1 Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K). Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
2 Verify that the file is still accessible.
3 Unmount the file system on the volume and remove the volume.
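
One plausible command sequence for the resize and relayout steps, assuming the volume is named myvol (a sketch; the exact sequence depends on the starting layout, so check vxassist(1M) and vxrelayout(1M)):
  # vxresize -g datadg myvol 100m                            (grow volume and file system together)
  # vxassist -g datadg relayout myvol ncol=2 stripeunit=128  (restripe the volume)
  # vxrelayout -g datadg status myvol                        (monitor progress)
  # vxassist -g datadg convert myvol layout=mirror-stripe    (convert to a nonlayered mirror)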


Performing Volume Snapshot Operations

1 Create a 500-MB volume named vol01 with a file system /myfs, and mount the file system on the volume.
2 Add data to the volume and verify that the data has been added.
3 Start the snapstart phase of creating a snapshot of the volume.
4 Add another file to /myfs.
5 Complete the snapshot of the volume. Name the snapshot volume snapshot_vol01.
6 Mount the snapshot volume at /snapmyfs.
7 View the files in /myfs and /snapmyfs. They should be identical.
8 Add more data to /myfs. Are the two file systems the same now? Why?
9 Add more data to the snapshot volume. You can add data by copying /usr/sbin/s* to /snapmyfs.
  Note: If you are unable to copy data to /snapmyfs, check to ensure that the file system has not been mounted read-only.
10 Unmount the snapshot volume.
11 Unmount the original volume and reassociate the snapshot with the volume, resynchronizing the volumes by using the snapshot.
12 Create another snapshot volume. After you create the snapshot, permanently break the association between the snapshot and the original volume.
13 Attempt to reassociate the snapshot with the volume. Does this work? If not, why not?
14 Unmount any file systems and remove any volumes created in this exercise.
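
The snapshot operations in steps 3, 5, 11, and 12 map to vxassist subcommands along these lines (a sketch):
  # vxassist -g datadg snapstart vol01                              (step 3)
  # vxassist -g datadg snapshot vol01 snapshot_vol01                (step 5)
  # vxassist -g datadg -o resyncfromreplica snapback snapshot_vol01 (step 11)
  # vxassist -g datadg snapclear snapshot_vol01                     (step 12)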


Monitoring Tasks (Optional)

Objective: In this advanced section of the lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash.

Setup: You should have at least four disks in the disk group that you are using.

1 Create a mirror-stripe volume with a size of 1 GB using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
2 View the progress of the task.
3 Slow down the task progress rate to insert an I/O delay of 100 milliseconds. View the layout of the volume in the VEA interface.
4 After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
5 In another terminal window, abort the task to simulate a crash during relayout. View the layout of the volume in the VEA interface.
6 Reverse the relayout operation. View the layout of the volume in the VEA interface.
7 Remove all of the volumes.
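
A sketch of the task-tracking commands (the tag mytask and volume name taskvol are assumptions; verify the exact vxtask set syntax in vxtask(1M)):
  # vxassist -g datadg -t mytask make taskvol 1g layout=mirror-stripe &
  # vxtask monitor mytask                  (view progress)
  # vxtask set slow=100 mytask             (insert a 100-ms I/O delay)
  # vxassist -g datadg -t mytask relayout taskvol layout=stripe-mirror ncol=2 stripeunit=256k
  # vxtask abort mytask                    (in another window: simulate a crash)
  # vxrelayout -g datadg reverse taskvol   (reverse the relayout)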


Lab 9: Setting Up a File System


Introduction

This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line.

Setup

Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Setting Up a File System

1 Create a 500-MB striped volume named datavol in the disk group datadg, using the default number of columns and stripe unit size.
2 Create a VERITAS file system on the datavol volume using the default options.
3 Create a mount point /datamnt on which to mount the file system.
4 Mount the newly created file system on the mount point, using all default options.
5 Using the newly created file system, create, modify, and remove files.
6 Display the content of the mount point directory, showing hidden entries, inode numbers, and block sizes of the files.
7 What is the purpose of the lost+found directory?
8 How many disk blocks are defined within the file system, and how many are used by the file system?
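
Steps 1 through 4 can be performed with commands similar to this sketch:
  # vxassist -g datadg make datavol 500m layout=stripe
  # mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
  # mkdir /datamnt
  # mount -F vxfs /dev/vx/dsk/datadg/datavol /datamnt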

9 Unmount the file system.
10 Mount and, if necessary, check the file system at boot time.
11 Verify that the mount information has been accepted.
12 Display details of the file system that were set when it was created.
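
For step 10, the /etc/vfstab entry should resemble the following sketch (fields are tab-separated; the fsck pass and mount-at-boot fields may vary in your environment):
  /dev/vx/dsk/datadg/datavol  /dev/vx/rdsk/datadg/datavol  /datamnt  vxfs  2  yes  -
You can verify the entry with mount -p or by running mountall.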


13 Check the structural integrity of the file system using the default log policy.
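
By default, fsck on a VxFS file system replays the intent log rather than performing a full structural check; a sketch:
  # fsck -F vxfs /dev/vx/rdsk/datadg/datavol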

14 Remove the volume that you created for this lab.


Administering File Systems Through VEA (Optional)

If you have time, try to perform the file system administration tasks by using the VERITAS Enterprise Administrator (VEA) graphical user interface. If all of your external disks are currently in the datadg disk group, you must remove at least two disks from the disk group in order to perform this lab.

1 Start the graphical user interface.
2 In VEA, what disks are available?
3 Create a disk group named acctdg containing two disks.
4 Create a 500-MB striped volume named acctvol in the disk group acctdg, using the default number of columns and stripe unit size.
5 Create a VxFS file system on the acctvol volume using the default options. Mount the newly created file system on the acctmnt mount point.
6 Using the newly created file system, create, modify, and remove files.
7 Display the content, showing hidden entries, inodes, and block sizes.
8 How many disk blocks are defined within and used by the file system?
9 In VEA, unmount the file system.
10 Check the structural integrity of the file system.
11 Mount the file system.
12 Display details of the file system that were set when it was created.
13 Unmount the file system, remove the acctvol volume, and destroy the acctdg disk group that you created in this exercise. Return all of your external disks to the datadg disk group.


Lab 10: Online File System Administration


Introduction

In this lab, you investigate and practice online file system administration tasks. You resize a file system using fsadm, back up and restore a file system using vxdump and vxrestore, and create and use a snapshot file system.

Setup

Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Resizing a File System

1 Create a 50-MB volume named reszvol in the disk group datadg by using the VERITAS Volume Manager utility vxassist.
2 Create a VERITAS file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
3 Create a mount point /reszmnt on which to mount the file system.
4 Mount the newly created file system on the mount point /reszmnt.
5 Verify disk space using the df command. Observe that the available space is smaller than the size of the volume.
6 Expand the file system to the full size of the underlying volume using the fsadm -b newsize option.
7 Verify disk space using the df command.
8 Make a file on the file system mounted at /reszmnt (using mkfile), so that the free space is less than 50 percent of the total file system size.
9 Shrink the file system to 50 percent of its current size. What happens?
10 Experiment with the vxresize command. Expand the file system to 100 MB, and then shrink the file system down to 60 MB. Verify that the volume and file system are resized together after each command is issued.
11 Unmount the file system and remove the volume.
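
A sketch of the resize commands (sizes for mkfs and fsadm are expressed in 512-byte sectors):
  # mkfs -F vxfs /dev/vx/rdsk/datadg/reszvol 81920   (40 MB)
  # fsadm -F vxfs -b 102400 /reszmnt                 (grow the file system to 50 MB)
  # vxresize -g datadg reszvol 100m                  (grow volume and file system together)
  # vxresize -g datadg reszvol 60m                   (shrink both)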


Backing Up and Restoring a File System

1 Create a 100-MB volume named fsvol. Create a file system on the volume and mount the file system as /fsorig. Copy the contents of /usr/bin onto the file system.
2 Create a 200-MB volume to use as a backup device. Name this volume backupvol, and use different disks from the original volume.
  Note: Use the vxprint command to determine which disks are in use by the original volume.
3 Create a file system on the backup volume and mount it on /backup.
4 To prepare for the first backup, run the sync command several times to ensure that asynchronous I/O operations are complete before continuing.
5 Using vxdump, perform a level 0 backup to back up the contents of /fsorig to the file firstdump at the mount point /backup.
6 Create an additional file on /fsorig.
7 To prepare for the second backup, run the sync command several times to ensure that asynchronous I/O operations are complete before continuing.
8 Using vxdump, perform a level 1 backup to back up the contents of /fsorig to the file seconddump at the mount point /backup.
9 Destroy /fsorig by unmounting it and remaking the file system with the same name. Mount the file system on the original volume fsvol and verify that /fsorig no longer contains the original files.
10 Using vxrestore, restore the contents of the level 0 backup.
   Note: Ensure that you are in /fsorig before you run the vxrestore command.
11 Check the contents of /fsorig for the original files.
12 Using vxrestore, restore the contents of the level 1 backup. Wait for the restore operation to complete.
13 Check the contents of /fsorig for the additional file that you created.
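
The dump and restore steps can be performed along these lines (a sketch; the option style follows ufsdump/ufsrestore conventions):
  # vxdump -0 -u -f /backup/firstdump /fsorig
  # vxdump -1 -u -f /backup/seconddump /fsorig
  # cd /fsorig
  # vxrestore -r -f /backup/firstdump
  # vxrestore -r -f /backup/seconddump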


Creating a Snapshot File System

Note: Ensure that a Console window is open during this lab.

1 Create a volume called snapvol to use for a snapshot of /fsorig. Make the volume at least five percent of the size of the /fsorig file system, and create it on a different disk than the original. Make a directory called /snap.
2 Mount a snapshot of /fsorig onto the newly created volume snapvol at /snap.
3 Verify that the two file systems are the same at this point by using the commands ls -al and df -k.
4 Open another terminal window and modify the original file system by removing some files, creating some new files, and updating the time stamps on the original files. Review the snapshot /snap after each action to ensure that the snapshot has not changed.
5 Restore some deleted files by copying them from the snapshot backup /snap to the original file system /fsorig.
6 Create a file in /fsorig that is larger in total size than the size of the snapshot. List the contents of the snapshot. Is the large file listed in /snap?
7 Unmount the snapshot file system.
8 Re-create the snapshot. Is the large file listed in /snap?
9 Remove the large file in /fsorig, and then copy it back from /snap. What happens?
10 Unmount the snapshot file system and remove the snapshot volume.
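
A sketch of the snapshot mount in steps 1 and 2 (the snapshot volume size is an assumption):
  # vxassist -g datadg make snapvol 10m
  # mkdir /snap
  # mount -F vxfs -o snapof=/fsorig /dev/vx/dsk/datadg/snapvol /snap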

Online File System Administration in VEA (Optional)

If you have time, try to resize a file system and create a snapshot file system by using the VERITAS Enterprise Administrator (VEA) graphical user interface.


Lab 11: Defragmenting a File System


Introduction

In this lab, you practice converting a UFS file system to VxFS, and you monitor and defragment a file system by using the fsadm command.

Setup

Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Converting to a VERITAS File System

1 Create a 250-MB striped volume named convol that has three columns.
2 Create a UFS file system on the volume convol and mount it on /con.
3 Copy some files into the file system and stop when the file system is about 50 percent full.
4 Unmount the file system.
5 Convert the file system to VxFS type using the verbose option. Note the mapping output.
6 When prompted, do not commit to the conversion.
7 Try to mount the file system again. What happens?
8 Run an fsck on the file system. You should not get an error until Phase 5 of fsck.
9 Run the conversion again using the option to check the space required to complete the conversion.
10 Try to mount the file system again. What happens this time?
11 Unmount /con and run the conversion again. This time, commit to the conversion when prompted.
12 Determine whether you now have a VxFS file system.
13 Run an fsck on the file system. You should not get an error until Phase 4 of fsck.
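
The conversion steps map to vxfsconvert invocations along these lines (a sketch):
  # vxfsconvert -v /dev/vx/rdsk/datadg/convol   (verbose; answer n when asked to commit)
  # vxfsconvert -e /dev/vx/rdsk/datadg/convol   (estimate the space required)
  # vxfsconvert /dev/vx/rdsk/datadg/convol      (answer y to commit)
  # fstyp /dev/vx/rdsk/datadg/convol            (confirm that the type is now vxfs)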

14 Mount the file system as type vxfs and note that the data files are the same.

15 After completing this exercise, unmount the file system and remove the volume.

Defragmenting a File System

1 Create a new 1-GB volume with a VxFS file system mounted on /fs_test.
2 Repeatedly copy /opt to the file system, using a new target directory name each time, until the file system is approximately 85 percent full.
  # for i in 1 2 3
  > do
  > cp -r /opt /fs_test/opt$i
  > done
3 Delete all files over 100 MB in size.
4 Check the level of fragmentation in the file system.
5 Repeat steps 2 and 3 using the values 4 5 for i in the loop. Fragmentation of both free space and directories will result.
6 Repeat step 2 using the values 6 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space.
7 Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation, and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step.

After Completing This Lab

Unmount the file systems and remove the volumes used in this lab.
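
The fragmentation reporting and defragmentation in steps 4 and 7 use fsadm options similar to this sketch:
  # fsadm -F vxfs -D -E /fs_test            (directory and extent fragmentation reports)
  # fsadm -F vxfs -d -e -D -E -s /fs_test   (defragment and display summary statistics)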


Lab 12: Intent Logging


Introduction

In this lab, you investigate the impact of different intent log mount options and the impact of intent log size on file system performance.

The PostMark Benchmarking Tool

You use a benchmarking tool called PostMark to perform this lab. PostMark is an excellent tool for generating metadata changes (for example, file creation and deletion) to create stress on the parts of a file system that are metadata workload-sensitive. PostMark is available as shareware from:
http://www.netapp.com/tech_library/postmark.html

You use the PostMark utility postmark-1_5 and a text file called pmscript that contains tunable parameters for the PostMark utility. The pmscript file must be in the same directory as postmark-1_5. The output of PostMark displays the time to complete the requested number of transactions.

Testing the Impact of Logging Mount Options

In the first part of this lab, you test performance of your VxFS file system by using different logging mount options to examine the impact of logging options. You first test performance of your VxFS file system without setting logging options. Then, you run a script that iterates the same test for each of three intent log mount options: log, delaylog, and tmplog. The tests are performed a second time after creating a 750-MB filler file. The presence of the filler file creates a physical distance between the intent log and the files being written by PostMark, which should result in more physical disk access and lower performance. The script post_log_options.sh facilitates this part of the lab.

Testing the Impact of Log Size

In the second part of this lab, you test performance of your VxFS file system by using different log sizes to examine the impact of log size on performance. The script post_log_size.sh facilitates this part of the lab.

Setup

1 Ensure that the external disks on your system are in a disk group named datadg.
2 If you have not already done so, unmount any file systems and remove any volumes from previous labs.
3 Locate the PostMark utility, including the pmscript file, and the lab scripts post_log_options.sh and post_log_size.sh. Ask your instructor for the location of the scripts.


Performance Impact of mount Options for Logging

1 Create and mount a 1200-MB file system on the volume logvol at the mount point /logmnt. If you use object names other than the ones provided, substitute the names accordingly in the commands.
2 Change to the directory that contains the PostMark and lab scripts. Ask your instructor for the location of the scripts.
3 Set the location of PostMark's write I/O to the file system mounted at /logmnt by using the command:
  # echo set location=/logmnt > .pmrc
4 Run the following command to start PostMark:
  # pmscript | grep "seconds of transactions"
5 Observe the output and record the results in the table at the end of the lab.
6 Remount the file system, and create a 750-MB file called filler on the file system. Then, change to the lab scripts directory and rerun the PostMark commands.
7 Observe the output and record the results in the table at the end of the lab.
8 From the directory that contains the lab scripts, examine the script post_log_options.sh. This script remounts the file system with the different logging options (log, delaylog, and tmplog) and runs the PostMark test for each iteration, both with and without a filler file. Run this script, and answer the prompts accordingly. Record the results in the table at the end of the lab.
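
For reference, remounting with a logging option and creating the filler file should resemble this sketch:
  # mount -F vxfs -o remount,delaylog /dev/vx/dsk/datadg/logvol /logmnt
  # mkfile 750m /logmnt/filler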


Performance Impact of Intent Log Size

1 Unmount the file system /logmnt. Create and mount a new file system on the volume logvol at the mount point /logmnt, and specify an intent log size of 256K. If you use object names other than the ones provided, substitute the names accordingly in the commands.
2 Change to the directory that contains the PostMark and lab scripts. Ask your instructor for the location of the scripts.
3 Set the location of PostMark's write I/O to the file system mounted at /logmnt by using the command:
  # echo set location=/logmnt > .pmrc
4 Run the following command to start PostMark:
  # pmscript | grep "seconds of transactions"
5 Observe the output and record the results in the table at the end of the lab.
6 Remount the file system, and create a 750-MB file called filler on the file system. Then, change to the lab scripts directory and rerun the PostMark commands.
7 Observe the output and record the results in the table at the end of the lab.
8 From the directory that contains the lab scripts, examine the post_log_size.sh script. This script remounts the file system with different log sizes (1024K, 2048K, 4096K, 8192K, and 16384K) and runs the PostMark test for each iteration, both with and without a filler file. Run this script, and answer the prompts accordingly. Record the results in the table at the end of the lab.
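
A sketch of step 1 (with the default 1K block size, logsize is specified in file system blocks, so 256 blocks is 256K):
  # umount /logmnt
  # mkfs -F vxfs -o logsize=256 /dev/vx/rdsk/datadg/logvol
  # mount -F vxfs /dev/vx/dsk/datadg/logvol /logmnt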


Summary of Results: Impact of Logging Options and Log Size

Note: Results vary depending on the nature of the data and the model of array used. Results documented in the lab solutions may be different from what you achieve in your classroom environment. No performance guarantees are implied by this lab. This lab provides a framework that you can use in benchmarking file system performance.

Logging Options

Intent Log Option     Time (seconds)   Throughput (transactions/second)   Time with Filler   Throughput with Filler
No option (default)   ____________     ____________                       ____________       ____________
log                   ____________     ____________                       ____________       ____________
delaylog              ____________     ____________                       ____________       ____________
tmplog                ____________     ____________                       ____________       ____________

Log Size

Intent Log Size       Time (seconds)   Throughput (transactions/second)   Time with Filler   Throughput with Filler
256K                  ____________     ____________                       ____________       ____________
1024K                 ____________     ____________                       ____________       ____________
2048K                 ____________     ____________                       ____________       ____________
4096K                 ____________     ____________                       ____________       ____________
8192K                 ____________     ____________                       ____________       ____________
16384K                ____________     ____________                       ____________       ____________

More Exploration of Intent Log Performance Tuning (Optional)

With the file system mounted, change the layout of the volume by changing the resilience level of the volume, increasing or decreasing the number of columns in a striped volume, or changing stripe unit sizes. Then, rerun the post_log_options.sh or post_log_size.sh scripts with the PostMark tests and note any changes in performance.

After Completing This Lab

Unmount the file systems and remove the volumes used in this lab.


Lab 13: Architecture


Introduction

In this lab, you explore some of the components of the VxVM architecture by using commands to control the VxVM configuration daemon. Perform this exercise by using the command line interface.

Displaying Licensing and Supported Version Information

1 Display supported disk group version and daemon protocol information.
2 Display all licensed features available for your system.

Setup

Before you begin the next exercise, you are going to hide the license key files from your system:
1 Create a new directory called /lic and copy the *.vxlic files from /etc/vx/licenses/lic to /lic. These files represent the license keys for your machine.
2 Remove the *.vxlic files from /etc/vx/licenses/lic.
3 Verify your action by running the command to display licensing information for VERITAS products.

Exploring VxVM Architectural Components

1 Stop the VxVM configuration daemon.
2 Run the command to display the VxVM configuration daemon mode. What mode is the configuration daemon in?
3 Start the VxVM configuration daemon. Were you successful? Why or why not?
4 Install the VxVM licenses by using the license files that you saved.
5 Create a 100-MB mirrored volume. Are you successful? Why or why not?
6 Run the command to display the VxVM configuration daemon mode. What mode is the configuration daemon in?
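
The commands behind these steps should resemble the following sketch:
  # vxdctl support    (supported disk group versions and daemon protocol versions)
  # vxlicrep          (report installed VERITAS licenses)
  # vxdctl stop       (stop vxconfigd)
  # vxdctl mode       (display the daemon mode)
  # vxconfigd         (start the daemon)
  # vxdctl enable     (put the daemon in enabled mode)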


7 Enable the VxVM configuration daemon.
8 Try to create a 100-MB mirrored volume again. Are you successful?
9 Remove any volumes that you created.


Lab 14: Introduction to Recovery


Introduction

In this practice, you explore VxVM logging behavior and perform a variety of basic recovery operations. Perform this lab by using the command line interface. In some of the steps, the commands are provided for you.

Setup

For this lab, you should have at least four disks (datadg01 through datadg04) in a disk group called datadg. If your root disk is mirrored, you may need to unmirror the root disk and add the freed disk to the datadg disk group. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Exploring Logging Behavior

1 Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog.
2 Add a log to the volume vollog.
3 Create a file system on both volumes.
4 Create mount points for the volumes, /vollog and /volnolog.
5 Copy /etc/vfstab to a file called origvfstab.
6 Edit /etc/vfstab so that vollog and volnolog are mounted automatically on reboot. (In the /etc/vfstab file, each entry should be separated by a tab.) Type mountall to mount the vollog and volnolog volumes.
7 As root, start an I/O process on each volume. For example:
  # find /usr -print | cpio -pmud /vollog &
  # find /usr -print | cpio -pmud /volnolog &
8 Press Stop-A. At the ok prompt, type boot.
9 After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/needsync mode.
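
Steps 1 and 2 can be performed with commands similar to this sketch:
  # vxassist -g datadg make vollog 500m layout=mirror
  # vxassist -g datadg make volnolog 500m layout=mirror
  # vxassist -g datadg addlog vollog logtype=drl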


10 Run the vxstat command. This utility displays statistical information about volumes and other VxVM objects. For more information on this command, see the vxstat(1M) manual page.
   # vxstat -g datadg -fab vollog volnolog
   The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice?
11 Stop the VxVM configuration daemon.
12 Create a 100-MB mirrored volume. What happens?
13 As root, start I/O on vollog by using the following command. Are you successful? Why or why not?
   # find /etc -print | cpio -pmud /vollog &
14 Start the VxVM configuration daemon.
15 Unmount both file systems and remove the volumes vollog and volnolog.
16 Restore your original vfstab file.


Removing a Disk from VxVM Control

1 Create a 100-MB, mirrored volume named recvol. Create and mount a file system on the volume.
2 Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.

          Device          Disk Media Name
Disk 1    ____________    ____________
Disk 2    ____________    ____________

3 Remove one of the disks that is being used by the volume.
4 Confirm that the disk was removed.
5 From the command line, check that the state of one of the plexes is DISABLED and REMOVED. In VEA, the disk is shown as disconnected, because one of the plexes is unavailable.
6 Replace the disk back into the disk group.
7 Check the status of the disks. What is the status of the disks?
8 Display volume information. What is the state of the plexes?
9 In VEA, what is the status of the disks? What is the status of the volume?
10 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.
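
A sketch of the removal and replacement from the command line (the device and disk media names are assumptions):
  # vxdg -g datadg -k rmdisk datadg02           (remove the disk, keeping its media record)
  # vxdg -g datadg -k adddisk datadg02=c1t2d0   (replace the disk)
  # vxrecover -g datadg -s recvol               (recover and start the volume)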


Replacing Physical Drives (Without Hot Relocation)

For this exercise, use the mirrored volume, recvol, that you created in the previous exercise. The volume is in the disk group datadg.

1 Stop vxrelocd using ps and kill, in order to stop hot relocation from taking place.
  # ps -e | grep vx
  # kill -9 pid1 pid2
  Note: There are two vxrelocd processes. You must kill both of them at the same time.
2 Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. In the commands, substitute the appropriate disk device name for one of the disks in use by recvol:
  # fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
  # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
3 An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root. Start I/O to the volume using the command:
  # dd if=/dev/zero of=/dev/vx/rdsk/datadg/recvol &
4 When the error occurs, view the status of the disks from the command line.
5 View the status of the volume from the command line.
6 In VEA, what is the status of the disks and volume?
7 Rescan for all attached disks:
  # vxdctl enable
8 Recover the disk by replacing the private and public regions on the disk:
  # vxdisksetup -i c1t2d0
  Note: This method for recovering the disk is used only because of the way in which the disk was failed (by writing over the private and public regions). In most real-life situations, you do not need to perform this step.
9 Bring the disk back under VxVM control:
  # vxdg -g datadg -k adddisk datadg02=c1t2d0


10 Check the status of the disks and the volume.
11 From the command line, recover the volume.
12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
13 Unmount the file system and remove the volume.

Exploring Spare Disk Behavior

1 You should have four disks (datadg01 through datadg04) in the disk group datadg. Set all disks to have the spare flag on.
2 Create a 100-MB mirrored volume called sparevol. Is the volume successfully created? Why or why not?
3 Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
4 Remove the volume.
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows:
  # vxrelocd root &
6 Remove the spare flags from three of the four disks.
7 Create a 100-MB concatenated mirrored volume called spare2vol.
8 Save the output of vxprint -thf to a file.
9 Display the properties of the volume. In the following table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail. Open a console screen.
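
Setting and clearing the spare flag from the command line should resemble this sketch (repeat for each disk):
  # vxedit -g datadg set spare=on datadg01
  # vxedit -g datadg set spare=off datadg01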

          Device Name     Disk Media Name
Disk 1    ____________    ____________
Disk 2    ____________    ____________


10 Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. In the commands, substitute the appropriate disk device name:
   # fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
   # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
11 An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root. Start I/O to the volume using the command:
   # dd if=/dev/zero of=/dev/vx/rdsk/datadg/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you saved earlier. What has occurred?
13 In VEA, view the disks. Notice that the disk is in the disconnected state.
14 Run vxdisk list. What do you notice?
15 Rescan for all attached disks.
16 In VEA, view the status of the disks and the volume.
17 View the status of the disks and the volume from the command line.
18 Recover the disk by replacing the private and public regions on the disk.
19 Bring the disk back under VxVM control and into the disk group.
20 In VEA, undo hot relocation for the disk.
21 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
22 Reboot, and then remove the volume.
23 Turn off any spare flags on your disks that you set during this lab.
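
From the command line, the equivalent of step 20 is vxunreloc (a sketch; the disk media name is an assumption):
  # vxunreloc -g datadg datadg02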


Restoring a Lost Volume (Optional)

For this exercise, ensure that you have a disk group named datadg that contains at least three disks.

1 Create three simple volumes, each 50 MB in size, called lostvol1, lostvol2, and lostvol3 on any disks in datadg. Mirror lostvol3 on another disk.
2 Save the disk group configuration by using the vxprint command.
3 Display what you saved for backup.
4 Remove the volume lostvol3.
5 Restore the volume, plex, and subdisk objects for lostvol3.
6 Run vxprint -rth. What do you notice?
7 Recover the volume.
8 Run vxprint -rth to verify that the original volume is started and is resynchronizing its mirrors.

Disk Group Backup and Restoration (Optional)

Setup: Use the disk group and volumes from the previous section. If you skipped that section, ensure that you have a disk group named datadg that contains at least three disks. Prepare the volumes as follows:
- Create three simple volumes, each 50 MB in size, called lostvol1, lostvol2, and lostvol3 on any disks in datadg. Mirror lostvol3 on another disk.
- Save the disk group configuration by using the vxprint command.
- Display what you saved for backup.
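
The configuration backup and restore steps generally follow this pattern (a sketch; the backup file path is an assumption):
  # vxprint -g datadg -hmQq > /var/tmp/datadg.cfg   (save the configuration in vxmake format)
  # vxmake -g datadg -d /var/tmp/datadg.cfg         (re-create the saved objects)
  # vxrecover -g datadg -s                          (start and resynchronize the volumes)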

1 Destroy the entire disk group.
2 Re-create the disk group by initializing its former disks and adding them to the group.
  Important: Use the same disk group name, disk names, and device names.


3 Restore each volume one at a time.
4 Run vxprint -rth. What do you notice?
5 Recover the volumes.
6 Run vxprint -ht to verify that the volumes and disk group are restored successfully.


Lab 15: Disk Problems and Solutions


Overview

In this lab, you practice recovering from a variety of disk failure scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script:
- Sets up the required volumes
- Simulates and describes a failure scenario
- Prompts you to fix the problem
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine which steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data. For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup

Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
  Note: You may need to destroy disk groups created in other labs (for example, datadg) in order to create the testdg disk group.
2 Before running the automated lab scripts, set the DG environment variable in your /.profile to the name of the test disk group that you are using:
  # DG=testdg; export DG
  Rerun your profile by logging out and logging back on, or by manually running it.
3 Ask your instructor for the location of the lab scripts.


Recovering from Temporary Disk Failure

In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the redundant and nonredundant volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 1, Turned off drive (temporary failure):
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 1
  This script sets up two volumes:
  - test1 with a mirrored layout
  - test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
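
One possible recovery sequence for a temporary failure (a sketch; the exact steps depend on the volume states that you find):
  # vxdctl enable             (rescan after the disk is powered back on)
  # /etc/vx/bin/vxreattach    (reattach the disk media records)
  # vxrecover -g testdg -s    (resynchronize and restart the volumes)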


Recovering from Permanent Disk Failure

In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 2, Power failed drive (permanent failure):
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 2
  This script sets up two volumes:
  - test1 with a mirrored layout
  - test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. The disk is detached by VxVM.
3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes.
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


Recovering from Intermittent Disk Failure (1)

In this lab exercise, intermittent disk failures are simulated, but the system is still OK. Your goal is to move data from the failing drive and remove the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 3, Intermittent Failures (system still ok):
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 3
  This script sets up two volumes:
  - test1 with a mirrored layout
  - test2 with a concatenated layout
2 Read the instructions in the lab script window. You are informed that the disk drive used by both volumes is experiencing intermittent failures that must be addressed.
3 In a second terminal window, move the data on the failing disk to another disk, and remove the failing disk.
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


Recovering from Intermittent Disk Failure (2)

In this lab exercise, intermittent disk failures are simulated, and the system has slowed down significantly, so that it is not possible to evacuate data from the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 4, Intermittent Failures (system too slow):
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 4
  This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. You are informed that:
  - One of the disk drives used by the volume is experiencing intermittent failures that need to be addressed immediately.
  - The system has slowed down significantly, so it is not possible to evacuate the disk before removing it.
3 In a second terminal window, perform the necessary actions to resolve the problem.
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


Recovering from Temporary Disk Failure: Layered Volume (Optional)

In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 5, Turned off drive with layered volume:
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 5
  This script sets up two volumes:
  - test1 with a concat-mirror layout
  - test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.


Recovering from Permanent Disk Failure: Layered Volume (Optional)

In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.

Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
  # DG=testdg
  # export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 6, Power failed drive with layered volume:
  # run_disks
  1) Lab 1 - Turned off drive (temporary failure)
  2) Lab 2 - Power failed drive (permanent failure)
  3) Lab 3 - Intermittent Failures (system still ok)
  4) Lab 4 - Intermittent Failures (system too slow)
  5) Lab 5 - Turned off drive with layered volume
  6) Lab 6 - Power failed drive with layered volume
  x) Exit
  Your Choice? 6
  This script sets up two volumes:
  - test1 with a concat-mirror layout
  - test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. The disk is detached by VxVM.
3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes.
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


Lab 16: Plex Problems and Solutions


Overview

In this lab, you practice recovering from a variety of plex problem scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script:
- Sets up the required volumes
- Simulates and describes a failure scenario
- Prompts you to fix the problem
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data. For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup

Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
2 Before running the automated lab scripts, set the DG environment variable in your /.profile to the name of the test disk group that you are using:
  # DG=testdg; export DG
  Rerun your profile by logging out and logging back on, or by manually running it.
3 Ask your instructor for the location of the lab scripts.

Resolving Plex Problems: Temporary Failure

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, you corrupt the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 1, Turned off drive (temporary failure):
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 1
This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex was already in the STALE state before the drive failed. (A vxmend-based sketch follows this exercise.)
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
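One possible command-line approach, shown here as a sketch only (the plex names test-01 and test-02 follow the default naming convention and may differ on your system): examine the plex states, mark the plex that holds the good data CLEAN, mark the other STALE, and restart the volume.
# vxprint -g testdg -ht test
# vxvol -g testdg stop test
# vxmend -g testdg fix clean test-01
# vxmend -g testdg fix stale test-02
# vxvol -g testdg start test
Starting the volume resynchronizes the STALE plex from the CLEAN one; choosing the wrong plex as CLEAN would propagate bad data, which is exactly what the script checks for.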

Resolving Plex Problems: Permanent Failure

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, you corrupt the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 2, Power failed drive (permanent failure):
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 2
This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or with another disk at a different SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed; it still holds your data, but the last ten minutes of changes are missing. (A command-line sketch follows this exercise.)
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.
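A hedged command-line sketch of one possible recovery, assuming the failed disk had the media name testdg01, the replacement device is c1t4d0, test-01 was the plex on the failed disk, and test-02 is the surviving STALE plex (all names are placeholders): bring the replacement in under the old media name, declare the STALE plex CLEAN, and start the volume so the new disk's plex resynchronizes from it.
# vxdisksetup -i c1t4d0
# vxdg -g testdg -k adddisk testdg01=c1t4d0
# vxvol -g testdg stop test
# vxmend -g testdg fix clean test-02
# vxvol -g testdg start test
# vxrecover -g testdg test
Marking the STALE plex CLEAN is a deliberate judgment call: you accept the loss of the last ten minutes of writes because no better copy of the data exists.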

Resolving Plex Problems: Unknown Failure

In this lab exercise, an unknown failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, you corrupt the data. The lab script run_states sets up the test volume configuration and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 3, Unknown failure:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 3
This script sets up a mirrored volume named test that has three plexes.
2 Read the instructions in the lab script window. The script simulates an unknown failure that causes all plexes to be set to the STALE state. You are not provided with information about the cause of the problem with the plexes.
3 In a second terminal window, check each plex individually to determine whether it has the correct data. To test a plex, start the volume using only that plex, and then press Return in the lab script window. The script output displays a message stating whether or not the plex has the correct data. Continue this process for each plex until you determine which plex has the correct data. (One possible command sequence is sketched after this exercise.)
4 After you determine which plex has the correct data, recover the volume.
5 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
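One possible command-line sequence for the per-plex test, shown as a sketch only (the plex names test-01 through test-03 follow the default naming convention and may differ on your system). Taking the untested plexes offline first means that starting the volume does not resynchronize them from the plex under test:
# vxmend -g testdg off test-02
# vxmend -g testdg off test-03
# vxmend -g testdg fix clean test-01
# vxvol -g testdg start test
(Press Return in the lab script window to check the data; if it is wrong:)
# vxvol -g testdg stop test
# vxmend -g testdg fix stale test-01
# vxmend -g testdg off test-01
# vxmend -g testdg on test-02
# vxmend -g testdg fix clean test-02
# vxvol -g testdg start test
Repeat until the script reports the correct data. Then bring all plexes back online with vxmend on, mark the good plex CLEAN and the others STALE, and start the volume to resynchronize.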

Resolving Plex Problems: Temporary Failure with Layered Volume (Optional)

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, you corrupt the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 4, Turned off drive with layered volume:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 4
This script sets up a concat-mirror volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex was already in the STALE state before the drive failed.
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.

Resolving Plex Problems: Permanent Failure with Layered Volume (Optional)

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, you corrupt the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 5, Power failed drive with layered volume:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 5
This script sets up a concat-mirror volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or with another disk at a different SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed; this plex still holds your data, but the last ten minutes of changes are missing.
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.

On Your Own: Exploring Mirror Resynchronization (Optional)

This exercise provides an additional opportunity to explore mirror resynchronization processes. (A command sketch follows the steps.)
1 Create a three-way concatenated mirrored volume of 200 MB, and run the process in the background.
2 Run vxprint -ht volume.
3 Note the states of the volumes and plexes during synchronization.
4 Run vxtask monitor and note the type of synchronization being performed.
5 When the synchronization is finished, vxprint -ht volume should display the volume and its plexes as ACTIVE.
6 Stop the volume and change all plexes to the STALE state.
7 Set the first two plexes to the ACTIVE state, and leave the third plex as STALE.
8 Run vxprint again and note the volume's new state.
9 Start the volume in the background and run vxtask monitor.
10 How many synchronizations are performed in the volume, and what types of synchronization are performed?
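A minimal command sketch of these steps, assuming the disk group testdg and a placeholder volume name syncvol (plex names follow the default syncvol-01 through syncvol-03 convention):
# vxassist -g testdg -b make syncvol 200m layout=mirror-concat nmirror=3
# vxprint -g testdg -ht syncvol
# vxtask monitor
# vxvol -g testdg stop syncvol
# vxmend -g testdg fix stale syncvol-01
# vxmend -g testdg fix stale syncvol-02
# vxmend -g testdg fix stale syncvol-03
# vxmend -g testdg fix active syncvol-01
# vxmend -g testdg fix active syncvol-02
# vxvol -g testdg start syncvol &
# vxtask monitor
With two ACTIVE plexes and one STALE plex at start time, you should be able to observe both kinds of synchronization in the vxtask output.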

Lab 17: Encapsulation and Root Disk Mirroring


Introduction

In this practice, you create a root mirror, disable the root disk, and boot from the mirror. Then you boot again from the root disk, break the mirror, and remove the boot disk from rootdg. Finally, you reencapsulate the root disk and re-create the mirror. These tasks are performed using the VEA interface, the vxdiskadm tool, and CLI commands.

Encapsulation and Root Disk Mirroring
1 Use vxdiskadm to place another disk in rootdg. This disk should be the same size as (or larger than) the root disk. After completing this step, you should have two disks in rootdg: the boot disk and the new disk.
2 From the command line, set the eeprom variable that enables VxVM to create a device alias in the OpenBoot PROM. (A sketch of the OpenBoot-related commands follows this exercise.)
3 Use vxdiskadm to mirror the root volumes. This process can take a few minutes, depending on the size of the disk. In what order are the volumes mirrored? Check whether rootvol is enabled and active.
Hint: Use vxprint and examine the STATE fields.
4 To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command makes changes to configuration records; here, you use it to place the plex in an offline state. For more information about this command, see the vxmend(1M) manual page.
# vxmend off rootvol-01
5 Verify that rootvol-01 is now disabled and offline.
6 To change the plex to the STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state.
# vxmend on rootvol-01
7 Reboot the system using init 6.

8 At the OK prompt, check for available boot disk aliases. Use the available boot disk alias to boot from the alternate boot disk.
9 Verify that rootvol-01 is now in the ENABLED and ACTIVE state.
Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE.
You have successfully booted from the mirror.
10 To boot from the original boot disk, reboot again using init 6. You have now booted from the original boot disk.
11 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in rootdg).
12 Run the command to convert the root volumes back to disk partitions.
13 Shut down the system when prompted.
14 Verify that the mount points are now slices rather than volumes.
15 Use the vxdiskadm menu to reencapsulate the boot disk, and restart.
Important: You must specify the device as c0t0d0 and the disk name as rootdisk; otherwise, VxVM uses a default name, such as disk02.
16 Using VEA, mirror rootdisk. At the end of this lab, you should have rootdisk as the boot disk and another disk in rootdg that is a mirror of the boot disk.
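A short sketch of the OpenBoot-related pieces of this lab (steps 2 and 8). The alias name vx-rootmirror below is an assumption; use the alias that your system actually lists:
# eeprom "use-nvramrc?=true"
ok devalias
ok boot vx-rootmirror
With use-nvramrc? set to true, VxVM adds a device alias for each boot mirror it creates, so the mirror can be selected by name at the OK prompt.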

Troubleshooting Tip

Problem: If you do not add a disk to rootdg prior to attempting unencapsulation of the boot disk, the volumes are converted back to slices, but the disk is still in rootdg. At this point, you cannot encapsulate, because the disk is in a disk group, and you cannot rerun vxunroot. This problem is caused by not having another disk in rootdg to hold a copy of the rootdg configuration.

Solution:
1 Add a disk to rootdg.
2 Remove the boot disk from rootdg.
3 You can now encapsulate the boot disk.

Lab 18: VxVM, Boot Disk, and rootdg Recovery


Overview

In this lab, you practice recovering from encapsulated boot disk failure scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script simulates a failure in the encapsulated boot disk (and its mirror, if required) and reboots the system.
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. You succeed when you solve the problem with the boot disk and boot to multiuser mode.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup

In this lab, the automated lab scripts prompt you to reboot the system. If the reboot fails, ask your instructor how to bring the system down.
1 These labs require the system disk to be encapsulated. If your system disk is not encapsulated, you must encapsulate it before proceeding with this lab.
2 You must have at least one additional disk that is the same size as (or larger than) your boot disk. You are instructed to create a mirror of the boot disk in the second exercise.
3 Ask your instructor for the location of the lab scripts.

Recovering from Encapsulated, Unmirrored Boot Disk Failure

In this lab exercise, you attempt to recover from an encapsulated, unmirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 This lab requires that the system disk is encapsulated, but not mirrored. If your system disk is mirrored, remove the mirror.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 1, Encapsulated, unmirrored boot disk failure:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 1
4 Follow the instructions in the lab script window. This script causes the only plex in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up, due to the STALE plex.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. (A boot sketch follows this exercise.) You succeed when the system boots to multiuser mode.
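The key to this recovery is booting with the pre-encapsulation system file, so that Solaris mounts the root slice directly instead of the STALE rootvol. A hedged sketch of that boot, using the file created in step 2:
ok boot -a
...
Name of system file [etc/system]: etc/system.preencap
...
Once the system is up on the underlying partition, you can repair the plex state (for example, with vxmend fix clean on the rootvol plex) and reboot under VxVM control.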

Recovering from Encapsulated, Mirrored Boot Disk Failure (1)

In this lab exercise, you attempt to recover from an encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, mirror the system disk before continuing.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 2, Encapsulated, mirrored boot disk failure - 1:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 2
4 Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up, due to the STALE plexes.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots to multiuser mode.

Recovering from Encapsulated, Mirrored Boot Disk Failure (2) (Optional)

In this lab exercise, you attempt to recover from an encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, mirror the system disk before continuing.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 3, Encapsulated, mirrored boot disk failure - 2:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 3
4 Follow the instructions in the lab script window. This script causes one of the plexes in rootvol to change to the STALE state. The clean plex is missing the /kernel directory, so you cannot boot the system without recovery. When you are ready, the script reboots the system.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots to multiuser mode.

Recovering from Encapsulated, Mirrored Boot Disk Failure (3) (Optional)

In this lab exercise, you attempt to recover from an encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, mirror the system disk before continuing.
2 Create an emergency boot disk by following the procedures presented in the lesson.
3 From the directory that contains the lab scripts, run the script run_root, and select option 4, Encapsulated, mirrored boot disk failure - 3:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 4
4 Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. Both plexes are missing the /kernel directory, so you cannot bring up the system without recovery. When you are ready, the script reboots the system.
5 Recover the volume rootvol by using the emergency boot disk that you created before the failure. You succeed when the system boots to multiuser mode.

Lab 19: Administering DMP (Optional)


Introduction

In this lab, you explore the performance and redundancy benefits of Volume Manager's dynamic multipathing (DMP) functionality. You become familiar with VxVM's device discovery layer (DDL) utility, vxddladm; the DMP management utility, vxdmpadm; and the DMP-related options of vxdiskadm. You demonstrate DMP's ability to automatically detect a failed path and manage its I/O accordingly, by disabling and reenabling a DMP channel from the command line (to simulate a DMP controller failure) and by observing DMP's actions through the output of a benchmarking utility.
In this lab, you also measure the performance benefits of VxVM's DMP by:
1 Setting up volumes with file systems and flooding them with various types of workloads and I/O
2 Recording the results of performance tests
3 Disabling one of the configured DMP paths
4 Running performance tests again, without using DMP, to note the differences

Setup

This lab requires that you use the two SPARC systems connected to the Winchester Systems FlashDisk RAID array. The instructor will configure NRAID (no hardware RAID) on all disks for this lab. The array is also capable of several forms of hardware RAID. If you are interested in learning more about the Winchester Systems FlashDisk RAID array, visit www.winsys.com. Ask your instructor if you have any questions related to setup.
To prepare for the lab:
- Ensure that all SCSI and power cables are securely connected to and from the array before starting.
- Ensure that you have a minimum of four disks in the array, not including the root disk.

Verifying DMP Activation

1 Unarchive the VERITAS benchmarking utility, vxbench.
# zcat /vxbench.tar.Z | tar xvfp -
2 Run format and make sure all disks in the array are configured and displayed correctly.
3 Edit the /kernel/drv/vxdmp.conf file as follows:
name="vxdmp" parent="pseudo" instance=0 dmp_jbod="WINSYS";
4 When the system comes up, log on to CDE and verify that JBODs are currently supported on the system by using VxVM's device discovery layer utility. If JBODs are not supported, add support by using the DDL utility and specifying the vendor ID, WINSYS. Use the DDL utility again to verify that support is added. (A vxddladm sketch follows this exercise.)
5 Run the following commands:
# devfsadm
# vxdctl enable
Notice that you do not have to reboot the system during the process of activating DMP for this array.
6 Add four disks to a disk group called flashdg. Verify your action using vxdisk list.
7 On one disk, verify from the command line that active/active DMP is enabled. You are now ready to use active/active DMP with the array.
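A sketch of the DDL commands referenced in step 4, using the vendor ID given above:
# vxddladm listjbod
# vxddladm addjbod vid=WINSYS
# vxddladm listjbod
The first listjbod shows the JBOD entries currently supported; addjbod vid=WINSYS adds the array's vendor ID so that DMP claims its disks, and the second listjbod confirms the addition.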

DMP Benchmark Testing: High Availability Benefits

1 Create two 8000-MB, simple (concatenated) volumes on the first and second disks in the disk group, respectively.
2 Create and mount VxFS file systems on each volume, using the mount points /flash1 and /flash2.
3 Open two terminal windows on the system. In one window, run:
# iostat -nM -l 7 3
Note: Try various options to iostat in order to view the disk devices being used for DMP. See the iostat manual page for more information.
4 In the other terminal window, run the mount command to verify that the two file systems you created are still mounted. If so, run the following set of commands on the first file system, mounted at /flash1:
# for i in 1 2 3 4 5 6 7 8 9
> do
> mkfile 500m /flash1/testfile$i &
> done
5 In the output of iostat, observe the megabytes per second (Mps) and transactions per second (tps) columns for each controller path that is receiving I/O. What do you notice?
6 Next, simulate a physical path failure by manually disabling one of the DMP paths. (A vxdmpadm sketch follows this exercise.)
7 Observe the change in the output of the running iostat command, and pinpoint where the change occurs.
8 Manually reenable the failed DMP path by using vxdmpadm, and observe the changes in the iostat output. DMP should begin accessing the other path within a few seconds.
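A hedged sketch of steps 6 through 8 using vxdmpadm; the controller name c2 is a placeholder, so list the controllers first to find the right one on your system:
# vxdmpadm listctlr all
# vxdmpadm disable ctlr=c2
(Watch iostat: all I/O shifts to the remaining path.)
# vxdmpadm enable ctlr=c2
(Watch iostat: I/O is again spread across both paths.)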

DMP Benchmark Testing: Workload Balancing/Performance Gains

Before you start: Review the sample output from the previous section. Compute the total transactions per second in one three-second interval from the first output (when both DMP paths were enabled) and compare it to any similar three-second interval from the second output (when one DMP path was disabled).
- What is the difference in total transactions per second when you disable DMP?
- What is the difference in total megabytes per second of throughput when you disable DMP?
Verify that active/active DMP is enabled by running:
# vxdisk list disk_name

See if you can achieve a 20 percent or greater increase in throughput in the lab below.
1 On the second mounted file system, use vxbench to sequentially write a test file, called benchfile1, and note the output:
# vxbench -w write -i iosize=8k,iocount=131072 /flash2/benchfile1
2 Now read the benchmark file back with vxbench and note the output:
# vxbench -w read -i iosize=8k,iocount=131072 /flash2/benchfile1
3 Run the following command, which copies several small files between directories:
# time cp -r /opt /flash2/opt
4 Disable one of the DMP paths by using vxdiskadm:
a Run vxdiskadm. You may have to press Return to display all of the options in the main menu.
b Select option 17, Prevent multipathing/suppress devices from VxVM's view. Answer y when prompted.
c Select option 1, Suppress all paths through a controller from VxVM's view.
d Type c1. Answer y when prompted and press Return.
e Exit from vxdiskadm. Do not reboot.

5 Repeat steps 1 through 3 above. What do you notice?
6 Use similar vxdiskadm options to reenable DMP.

More Practice (Optional)
1 Unmount one of the file systems and run several simultaneous block-level dumps on its raw volume. First perform this test with DMP disabled.
# umount /flash1
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &

2 Reenable DMP and run the tests again.

After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.

Lab 20: Controlling Users (Optional)


Introduction

This lab enables you to practice setting user quotas and creating ACLs.

Set Up
1 Begin with a clean file system. In a disk group named datadg, create a 1-GB volume called quotavol. Create and mount a VERITAS file system on the volume at the mount point /fs_quota.
2 Create the group training:
a Open the Admintool utility:
# admintool &
b In the Browse menu, select Groups to display a list of groups.
c In the Edit menu, select Add to open the Add Group dialog box.
d In the Add Group dialog box, create a new group by specifying:
Group Name: training
Group ID: 101
Member list: root
Note: The group ID should already be set to 101.
e Click OK.
3 Create four users for the group training (a command-line alternative is sketched after this exercise):
a In the Admintool utility, from the Browse menu, select Users to display a list of users.
b In the Edit menu, select Add to open the Add User dialog box.
c In the Add User dialog box, create a new user by specifying:
User Name: user1
Primary Group: 101
Login Shell: Korn
Home Directory Path: /fs_quota/user1/home
d Repeat this process for three more users with the names user2, user3, and user4.
e Set the passwords of all four users to veritas by using the passwd command. For each user:
# passwd user1
Changing password for user1
user1's New password: veritas
Enter the new password again: veritas
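If you prefer the command line to Admintool, an equivalent sketch of the group and user setup (paths and shell as specified above) is:
# groupadd -g 101 training
# useradd -g 101 -m -d /fs_quota/user1/home -s /bin/ksh user1
# passwd user1
(Repeat the useradd and passwd commands for user2, user3, and user4.)
Both routes maintain the same /etc/passwd and /etc/group entries, so either produces equivalent accounts.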

Using Quotas

1 Create the files required for managing quotas for a file system.
2 Turn on quotas for the file system.
3 Invoke the quota editor for the user with the username user1.
4 Modify the quotas file to specify a hard limit of 200 blocks and 20 inodes and a soft limit of 100 blocks and 10 inodes.
5 Modify the time limit to be one minute.
6 Verify the quotas for the user user1. (A sketch of the quota commands for steps 1 through 6 follows this exercise.)
7 In order to test the quota limits that you set, you must log on as user1:
a Set read, write, and execute permissions for user1 on /fs_quota.
b Log off and log back on as user1.
c When prompted, type a password for user1.
d Log on again as user1 using your new password.
e After you log on, go to the file system that has the quotas set.
8 Test the quota limits that you set by creating files that exceed the disk usage limits. Delete the files between each test.
- To test the soft block limit, create or copy a file of size greater than 100K and less than 200K.
- To test the hard block limit, create or copy a file of size greater than 200K.
- To test the soft inode limit, use touch to create 11 empty files.
- To test the hard inode limit, use touch to create 21 files.
9 Log off, log back on as root, and turn off quotas for the VERITAS file system mounted at /fs_quota.
10 Exit from the superuser account and log back on as user1. Test the quota limits again, using the same tests as in step 8. What happens?
11 Log out and log back on as root.
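For reference, a sketch of steps 1 through 6 using the VxFS quota commands; the user quotas file must exist in the root of the file system before quotas can be turned on:
# touch /fs_quota/quotas
# vxquotaon /fs_quota
# vxedquota user1
(Set the hard and soft block and inode limits in the editor.)
# vxedquota -t
(Set the time limit.)
# vxquota -v user1
To turn quotas off again in step 9:
# vxquotaoff /fs_quota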

ACLs

1 Create a file called file01 on the file system /fs_quota.
2 Add an ACL entry to file01 that gives user user1 read permission only.
3 View the ACLs for file01 to verify that the ACL entry was created.
4 Create a new file called file02 on the file system /fs_quota.
5 View the ACLs for file02.
6 Set the same ACL on file02 as the one on file01, using the standard input.
7 Confirm that the same ACLs are set on file02 as on file01. (A getfacl/setfacl sketch follows this exercise.)

After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.
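A sketch of the Solaris ACL commands involved, using the file names above:
# touch /fs_quota/file01
# setfacl -m user:user1:r-- /fs_quota/file01
# getfacl /fs_quota/file01
# touch /fs_quota/file02
# getfacl /fs_quota/file02
# getfacl /fs_quota/file01 | setfacl -f - /fs_quota/file02
# getfacl /fs_quota/file01 /fs_quota/file02
With setfacl, -f - reads the ACL specification from standard input, which is how step 6 applies file01's ACL to file02.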

Lab Solutions

Lab 1 Solutions: Virtual Objects


Introduction

In this theoretical exercise, you explore the relationship between Volume Manager objects and physical disks by determining how data in a volume maps to a physical disk. In each problem, you are given the address of a byte of data written to a logical volume. Using the information provided and your knowledge of the relationships between Volume Manager objects, you determine:
- The physical drive to which the byte of data is written
- The physical address of the byte of data on that drive


Lab: Problem 1

[Figure: Volume-to-disk mapping for Problem 1. The diagram shows the 20-MB concatenated volume datavol, whose plex datavol-01 is built from subdisk disk04-03 (8 MB) followed by subdisk disk02-04 (12 MB). The VxVM disks map to physical disks as follows: disk01 on c0t0d0 (subdisk offset 1 MB in the public region), disk02 on c1t0d0 (offset 1 MB), disk03 on c1t1d0 (offset 190 MB), and disk04 on c1t2d0 (offset 5 MB). Each disk has a 1-MB private region preceding its public region. Characters A, B, and C are written at address spaces 5 MB, 12 MB, and 17 MB into the volume, respectively.]

Problem 1

Character A
The character A is written at an offset of 5 MB into the volume. Use the graphic to answer the following questions:
1 What is the size of the concatenated volume? 20 MB
2 Is it a mirrored volume? No
3 Which subdisk is the data being written to? disk04-03
4 Where in the subdisk (in MB) is the data being written? 5 MB
5 Which physical disk is the data being written to? c1t2d0
6 What is the physical address (in MB) on this disk that the data is being written to? To answer this question, first identify the offset of the public region on the disk and the offset of the subdisk within the public region.
Offset of the subdisk in disk04's public region: 5 MB
Location in the subdisk that the data is written: 5 MB
Offset of the public region in the disk c1t2d0: 1 MB
The character A is written at 11 MB into the disk c1t2d0.

Character B
The character B is written at 12 MB into the volume.
7 Which subdisk is the data being written to? disk02-04
8 Where in the subdisk (in MB) is the data being written? 12 - 8 = 4 MB
9 Which physical disk is the data being written to? c1t0d0

10 What is the physical address (in MB) on the disk that the data is being written to?
Offset of the subdisk in disk02's public region: 1 MB
Location in the subdisk that the data is written: 4 MB
Offset of the public region in the disk c1t0d0: 1 MB
The character B is written at 6 MB into the disk c1t0d0.

Character C
The character C is written at 17 MB into the volume.
11 Which subdisk is the data being written to? disk02-04
12 Where in the subdisk (in MB) is the data being written? 17 - 8 = 9 MB
13 Which physical disk is the data being written to? c1t0d0
14 What is the physical address (in MB) on the disk that the data is being written to?
Offset of the subdisk in disk02's public region: 1 MB
Location in the subdisk that the data is written: 9 MB
Offset of the public region in the disk c1t0d0: 1 MB
The character C is written at 11 MB into the disk c1t0d0.
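All three answers follow the same calculation. As a general rule for a concatenated volume:
physical address = (public region offset on the disk) + (subdisk offset within the public region) + (volume offset - starting offset of the subdisk within the volume)
For character A, for example: 1 MB + 5 MB + (5 MB - 0 MB) = 11 MB into c1t2d0.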


Lab: Problem 2

[Figure: Volume-to-disk mapping for Problem 2. The diagram shows the mirrored volume payvol with two plexes: payvol-01 concatenates subdisks disk01-01 and disk03-02, and payvol-02 concatenates disk04-03 and disk02-04; each subdisk is 10 MB in size. The VxVM disks map to physical disks as follows: disk01 on c0t0d0 (subdisk offset 100 MB in the public region), disk02 on c1t0d0 (offset 150 MB), disk03 on c1t1d0 (offset 190 MB), and disk04 on c1t2d0 (offset 0 MB). Each disk has a 1-MB private region. Characters A, B, and C are written at address spaces 5 MB, 12 MB, and 17 MB into the volume, respectively.]
Problem 2

Character A
The character A is written at 5 MB into the volume.
1 What is the size of the concatenated volume? 20 MB
2 Is it a mirrored volume? Yes
3 Which subdisks is the data being written to? disk01-01 and disk04-03
4 Where in the subdisks (in MB) is the data being written? 5 MB and 5 MB
5 Which physical disks is the data being written to? c0t0d0 and c1t2d0
6 What is the physical address (in MB) on these disks that the data is being written to?
c0t0d0: 100 MB (subdisk offset in the public region) + 5 MB (location in the subdisk) + 1 MB (public region offset) = 106 MB
c1t2d0: 0 MB + 5 MB + 1 MB = 6 MB
The character A is written at 106 MB into the disk c0t0d0 and at 6 MB into the disk c1t2d0.

Character B
The character B is written at 12 MB into the volume.
7 Which subdisks is the data being written to? disk03-02 and disk02-04


8 Where in the subdisks (in MB) is the data being written? 12 - 10 = 2 MB and 12 - 10 = 2 MB
9 Which physical disks is the data being written to? c1t1d0 and c1t0d0
10 What is the physical address (in MB) on these disks that the data is being written to?
c1t1d0: 190 MB (subdisk offset in the public region) + 2 MB (location in the subdisk) + 1 MB (public region offset) = 193 MB
c1t0d0: 150 MB + 2 MB + 1 MB = 153 MB
The character B is written at 193 MB into the disk c1t1d0 and at 153 MB into the disk c1t0d0.

Character C
The character C is written at 17 MB into the volume.
11 Which subdisks is the data being written to? disk03-02 and disk02-04
12 Where in the subdisks (in MB) is the data being written? 17 - 10 = 7 MB and 17 - 10 = 7 MB
13 Which physical disks is the data being written to? c1t1d0 and c1t0d0
14 What is the physical address (in MB) on these disks that the data is being written to?
c1t1d0: 190 MB + 7 MB + 1 MB = 198 MB
c1t0d0: 150 MB + 7 MB + 1 MB = 158 MB
The character C is written at 198 MB into the disk c1t1d0 and at 158 MB into the disk c1t0d0.


Lab: Problem 3

[Figure: Volume-to-disk mapping for Problem 3. The diagram shows the mirrored volume mktvol with two plexes: mktvol-01 concatenates subdisks disk01-01 (7 MB) and disk03-02 (13 MB), and mktvol-02 concatenates disk04-03 (14 MB) and disk02-04 (6 MB). The VxVM disks map to physical disks as follows: disk01 on c0t0d0 (subdisk offset 100 MB in the public region, 1-MB private region), disk02 on c1t0d0 (offset 150 MB, 2-MB private region), disk03 on c1t1d0 (offset 190 MB, 2-MB private region), and disk04 on c1t2d0 (offset 0 MB, 1-MB private region). Characters A, B, and C are written at address spaces 5 MB, 12 MB, and 17 MB into the volume, respectively.]

Problem 3

Character A
The character A is written at 5 MB into the volume.
1 What is the size of the concatenated volume? 20 MB
2 Is it a mirrored volume? Yes
3 Which subdisks is the data being written to? disk01-01 and disk04-03
4 Where in the subdisks (in MB) is the data being written? 5 MB and 5 MB
5 Which physical disks is the data being written to? c0t0d0 and c1t2d0
6 What is the physical address (in MB) on these disks that the data is being written to?
c0t0d0: 100 MB (subdisk offset in the public region) + 5 MB (location in the subdisk) + 1 MB (public region offset) = 106 MB
c1t2d0: 0 MB + 5 MB + 1 MB = 6 MB
The character A is written at 106 MB into the disk c0t0d0 and at 6 MB into the disk c1t2d0.

Character B
The character B is written at 12 MB into the volume.
7 Which subdisks is the data being written to? disk03-02 and disk04-03


8 Where in the subdisks (in MB) is the data being written? 12 - 7 = 5 MB and 12 MB
9 Which physical disks is the data being written to? c1t1d0 and c1t2d0
10 What is the physical address (in MB) on these disks that the data is being written to?
c1t1d0: 190 MB (subdisk offset in the public region) + 5 MB (location in the subdisk) + 2 MB (public region offset) = 197 MB
c1t2d0: 0 MB + 12 MB + 1 MB = 13 MB
The character B is written at 197 MB into the disk c1t1d0 and at 13 MB into the disk c1t2d0.

Character C
The character C is written at 17 MB into the volume.
11 Which subdisks is the data being written to? disk03-02 and disk02-04
12 Where in the subdisks (in MB) is the data being written? 17 - 7 = 10 MB and 17 - 14 = 3 MB
13 Which physical disks is the data being written to? c1t1d0 and c1t0d0
14 What is the physical address (in MB) on these disks that the data is being written to?
c1t1d0: 190 MB + 10 MB + 2 MB = 202 MB
c1t0d0: 150 MB + 3 MB + 2 MB = 155 MB
The character C is written at 202 MB into the disk c1t1d0 and at 155 MB into the disk c1t0d0.


Lab 2 Solutions: Installing VERITAS Foundation Suite


Introduction

In this exercise, you add the Foundation Suite packages and install VERITAS Volume Manager.

Preinstallation
1 What VRTS packages are currently installed on your system?
# pkginfo | grep -i VRTS
2 Does the boot disk have two free partitions, 2048 contiguous sectors available, and partition 2 tagged as backup?
# prtvtoc /dev/rdsk/device_name
3 Before installing VxVM, save your boot disk information by using the prtvtoc command. Save the output to a file for later use. Do not store the file in /tmp.
# prtvtoc /dev/rdsk/device_name > /etc/bootdisk.preVM
4 What VRTS packages are currently referenced by the /etc/system file?
# pg /etc/system
5 Before installing VxVM, with or without an encapsulated boot disk, save the /etc/system and /etc/vfstab files into backup files named /etc/system.preVM and /etc/vfstab.preVM.
Note: By saving a copy of the system files before encapsulating the boot disk, you have another way to get the system up and running if rootdg fails.
# cp /etc/system /etc/system.preVM
# cp /etc/vfstab /etc/vfstab.preVM

Adding Packages and Installing VxVM
1 Add the VERITAS Volume Manager software, documentation, and manual pages packages. The instructor provides you with the location of the packages.
Note: Do not install the VRTSob, VRTSobgui, VRTSvmpro, or VRTSfspro packages. These packages are installed in the next lab. The VRTSvlic licensing package should already be installed. If this package is not installed, install VRTSvlic before installing any other packages.
# cd package_location
# pkgadd -d . VRTSvxvm VRTSvmman VRTSvmdoc
When prompted, answer Y or continue to all questions.
# cd /


2 Is a VxVM license installed? If no license is installed, then add a license.
# vxlicrep | more
Licenses for VxVM, FlashSnap, and VxFS should already be installed. If the license keys are not installed, ask your instructor for valid license keys. To add a license key:
# vxlicinst
3 Run the Volume Manager installation program. During the installation:
- Add a license key, if necessary. Obtain valid license keys from your instructor.
- Do not use enclosure-based naming.
- Select a Custom install.
- Encapsulate the boot disk, and accept the default name rootdisk as the boot disk name.
- Leave all other disks alone. Do not add any other disks to the rootdg disk group at this time.
# vxinstall
4 When prompted by the vxinstall program, shut down and reboot your machine.

After Installing VxVM
1 What are the main differences between the post-vxinstall encapsulated boot disk and the pre-vxinstall unencapsulated boot disk?
# prtvtoc /dev/rdsk/c0t0d0s2
# pg /etc/bootdisk.preVM
After you encapsulate the boot disk, two additional partitions are defined on the disk: the public region and the private region. If possible, Volume Manager uses partitions 3 and 4 for this purpose. However, the public and private regions can be more reliably identified by their tag numbers, which are always 14 and 15, respectively. All partitions on the boot disk are converted to concatenated volumes, and /, /usr, /var, and swap remain defined in the VTOC.
2 What are the main differences between the post-vxinstall /etc/system file and the pre-vxinstall /etc/system file?
# pg /etc/system
# pg /etc/system.preVM
The following lines are added to /etc/system (along with forceload entries for the VxVM drivers):
rootdev:/pseudo/vxio@0:0


set vxio:vol_rootdev_is_volume=1
These lines tell the kernel to use the root volume as the root device, instead of the root partition underlying the root volume.

3 What are the main differences between the post-vxinstall /etc/vfstab file and the pre-vxinstall /etc/vfstab file?
# pg /etc/vfstab
# pg /etc/vfstab.preVM
The device entries represent volume device nodes, not partitions.
4 Check /.profile to ensure that the following paths are present.
Note: This may be done in the JumpStart of your system prior to this lab, but the paths may need to be added after a normal install.
# PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSobgui/bin:/usr/sbin:/opt/VRTSob/bin:/opt/VRTSvxfs/sbin:/etc/fs/vxfs:/usr/lib/fs/vxfs
# MANPATH=$MANPATH:/opt/VRTS/man
# export PATH MANPATH

VERITAS File System Installation
1 VERITAS File System may already be installed on your system. Verify the installation and determine the version of the VxFS package.
# pkginfo -l VRTSvxfs
2 Has a VxFS license key been installed?
# vxlicrep

More Installation Exploration (Optional)
1 When does the VxVM license expire?
# vxlicrep | more
2 What are the version and revision numbers of the installed version of VxVM?
# pkginfo -l VRTSvxvm
In the output, see the VERSION field.
3 What start-up scripts are added to the system by the install program? Single-user mode scripts include:
/etc/rcS.d/S25vxvm-sysboot
/etc/rcS.d/S35vxvm-startup1
/etc/rcS.d/S85vxvm-startup2
/etc/rcS.d/S86vxvm-reconfig

4 Examine the file in which VxVM has saved the VTOC data of the encapsulated root disk.
# pg /etc/vx/reconfig.d/disk.d/c0t0d0s2/vtoc
5 What daemons are running after the system boots under VxVM control?
# ps -ef | grep -i vx
vxconfigd, vxrelocd, and vxnotify


Lab 3 Solutions: VxVM Interfaces


Introduction

In this lab, you set up VEA and explore its interface and options. You also invoke the vxdiskadm menu interface and display information about CLI commands by accessing the VxVM manual pages. Before you begin this lab, you should have already installed VxVM, added the VRTSvxvm and VRTSvmman software packages, and encapsulated the boot disk in rootdg. To verify that the VRTSvxvm and VRTSvmman software packages are loaded, run:
# pkginfo | grep VRTS

Setting Up VEA

1 Install the VEA software. The instructor provides you with the location of the packages.
# cd package_location
# pkgadd -a ../scripts/VRTSobadmin -d . VRTSob VRTSobgui VRTSvmpro VRTSfspro
# cd /
2 Add the directory containing the VEA startup scripts to the PATH environment variable in your .profile file:
# PATH=$PATH:/opt/VRTSob/bin
# export PATH
3 Is the VEA server running? If not, start it.
# vxsvc -m (to confirm that the server is running)
# vxsvc (if the server is not already running)
4 Start Volume Manager's graphical user interface.
# vea &
5 Connect to your system as root. Your instructor provides you with the password.
Hostname: (for example, train13)
Username: root
Password: (Your instructor provides the password.)
6 Examine the VEA log file.
# pg /var/vx/isis/vxisis.log


Exploring the VEA Interface

1 Access the Help system in VEA.
In the VEA main window, select Help>Contents.
2 What disks are available to the OS?
In the VEA object tree, expand your host and select the Disks node. Examine the Device column in the grid.
3 What is the content of the boot disk's header?
In the VEA object tree, expand the Disks node and select your boot disk. The content of the boot disk's header is displayed in the grid.
4 Display a graphical view of the boot disk.
In the VEA object tree, expand the Disks node and select your boot disk. In the grid, select the Disk View tab.
5 What are the defined disk groups?
In the VEA object tree, expand your host, and select the Disk Groups node. The defined disk groups are displayed in the grid.
6 What volumes are defined in the rootdg disk group?
In the VEA object tree, select the rootdg disk group. In the grid, click the Volumes tab. The volumes displayed in the grid include rootvol, swapvol, usr, and var.
7 What type of file system does each volume on the boot disk in rootdg contain?
In the VEA object tree, click the File Systems node. File systems are displayed in the grid. The file system type for each volume on the boot disk is ufs.
8 Execute the Disk Scan command.
In the VEA object tree, select your host. Select Actions>Rescan.
9 What commands were executed by the Disk Scan task?
Click the Task tab at the bottom of the main window. Right-click Scan for new disks and select Properties. The commands executed are displayed as drvconfig; disks; vxdctl enable.
10 Stop Volume Manager's graphical interface.
In the VEA main window, select File>Exit.

Adding a New Administrator Account for VEA

1 Create a root-equivalent administrative account named admin1 for use with VEA.
Create the new administrative account:
# useradd admin1
# passwd admin1
Type a password for admin1.
Modify the /etc/group file to add the vrtsadm group and specify the root and admin1 users, by using the vi editor:
# vi /etc/group
In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, and then add the line:
vrtsadm::99:root,admin1
When you are finished editing, press [Esc] to leave insert mode. Then, save the file and quit:
:wq
2 Test the new account. After you have tested the new account, exit VEA.
# vea &
Hostname: (for example, train13)
User: admin1
Password: (Type the password that you created for admin1.)
Select File>Exit.

Automatically Connecting at Startup
1 Start the VEA client.
# vea &
2 Connect to your system as root, and specify that you want to save authentication information.
Hostname: (for example, train13)
Username: root
Password: (Type your password.)
Mark the Remember password check box.
3 Configure VEA to automatically connect to the host when you start the VEA client.
In the VEA object tree, right-click the name of the currently connected host and select Add to Favorite Hosts.


4 Exit VEA, and then reconnect to test your configuration settings.
Select File>Exit.
Restart the VEA client:
# vea &
You are automatically reconnected to the host without specifying connection information.

Exploring vxdiskadm
1 From the command line, invoke the text-based VxVM menu interface.
# vxdiskadm
2 Display information about the menu or about specific commands.
Type ? at any of the prompts within the interface.
3 What disks are available to the OS?
Type list at the main menu, and then type all.
4 What is the content of the configuration database for the root disk?
Type list at the main menu, and then type c0t0d0.
5 Exit the vxdiskadm interface.
Type q at the prompts until you exit vxdiskadm.


Accessing CLI Commands (Optional)

Note: This exercise introduces four of the most commonly used VxVM commands: vxassist, vxdisk, vxdg, and vxprint. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, start by reading the manual pages for each of these commands.

vxassist
1 From the command line, invoke the VxVM manual pages and read about the vxassist command.
# man vxassist
2 What vxassist command parameter creates a VxVM volume?
The make parameter is used in creating a volume.

vxdisk
1 From the command line, invoke the VxVM manual pages and read about the vxdisk command.
# man vxdisk
2 What disks are available to VxVM?
# vxdisk list
All the available disks are displayed in the list.
3 How do you display the header contents of the root disk?
# vxdisk list rootdisk

vxdg
1 From the command line, invoke the VxVM manual pages and read about the vxdg command.
# man vxdg
2 How do you list locally imported disk groups?
# vxdg list
3 What is the content of the configuration database for the rootdg disk group?
# vxdg list rootdg


vxprint
1 From the command line, invoke the VxVM manual pages and read about the vxprint command.
# man vxprint
2 What volumes are defined in rootdg?
# vxprint -htg rootdg
The volumes defined are opt, rootvol, swapvol, usr, and var.
3 What is the volume type of the boot disk's volumes?
The Layout column shows a layout type of concat.


Lab 4 Solutions: Managing Disks


Introduction

In this lab, you use the VxVM interfaces to view the status of disks, initialize disks, move disks to the free disk pool, and move disks into and out of a disk group. Try to perform this lab using the CLI. The solutions for all three methods (VEA, CLI, and vxdiskadm) are included in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.

Managing Disks: CLI
1 View the status of the disks on your system.
# vxdisk list
or
# vxdisk -s list
2 Add one uninitialized disk to the free disk pool, and view the status of the disk devices to verify your action.
# vxdisksetup -i c1t8d0
# vxdisk list
3 Add the disk to the disk group rootdg, and view the status of the disk devices to verify your action.
# vxdg -g rootdg adddisk disk01=c1t8d0
# vxdisk list
4 Remove the disk from rootdg and place it in the free disk pool; then view the status of the disk devices to verify your action.
# vxdg -g rootdg rmdisk disk01
# vxdisk list
5 Remove the disk from the free disk pool and return the disk to an uninitialized state. View the status of the disk devices to verify your action.
# vxdiskunsetup c1t8d0
# vxdisk list
6 Add two disks to the free disk pool, and view the status of the disk devices to verify your action.
# vxdisksetup -i c1t2d0
# vxdisksetup -i c1t3d0
# vxdisk list

7 Remove one of the disks from the free disk pool and return it to an uninitialized state. View the status of the disk devices to verify your action.
# vxdiskunsetup c1t2d0
# vxdisk list
8 Add the same disk back to the free disk pool. You must still perform an initialize step even though the disk was initialized earlier. View the status of the disk devices to verify your action.
# vxdisksetup -i c1t2d0
# vxdisk list


Managing Disks: VEA Solutions
1 View the status of the disks on your system.
In the VEA object tree, select the Disks node. The disks and their properties are displayed in the grid.
2 Add one uninitialized disk to the free disk pool and view the status of the disk devices to verify your action.
You cannot add a disk to the free disk pool by using VEA. However, if you add a disk to the free disk pool by using other interfaces, VEA displays the appropriate status of the disk (Free and Dynamic).
3 Add the disk to the disk group rootdg and view the status of the disk devices to verify your action.
Select an uninitialized disk and select Actions>Add Disk to Dynamic Disk Group. In the Add Disk to Dynamic Disk Group Wizard, verify that the Dynamic disk group name is rootdg, and that the desired disk is in Selected disks. Type or browse to specify rootdg in the Disk Group Name field. Click Next, confirm your selection, and complete the wizard.
Verify your action by displaying the status of the disk in the grid.
4 Remove the disk from rootdg, then view the status of the disk devices to verify your action.
Select the disk in the grid, and select Actions>Remove Disk from Dynamic Disk Group. In the Remove Disk dialog box, ensure that the correct disk name is displayed under Selected disks. Click OK.
Verify your action by displaying the status of the disk in the grid.


Managing Disks: vxdiskadm Solutions
1 View the status of the disks on your system.
# vxdiskadm
At the main menu prompt, type list.
2 Add one uninitialized disk to the free disk pool and view the status of the disk devices to verify your action.
At the main menu prompt, select option 1, Add or initialize one or more disks.
3 Add the disk to the disk group rootdg and view the status of the disk devices to verify your action.
After you select option 1, answer the questions that follow appropriately. When asked which disk group, type rootdg. To view the status, return to the vxdiskadm main menu and type list.
4 Remove the disk from rootdg and place it in the free disk pool, then view the status of the disk devices to verify your action.
At the main menu prompt, select option 3, Remove a disk. Answer the questions that follow appropriately to remove the disk. The disk is automatically placed in the free disk pool. To view the status, return to the vxdiskadm main menu and type list.
5 Remove the disk from the free disk pool and return the disk to an uninitialized state. View the status of the disk devices to verify your action.
You cannot return a disk to an uninitialized state by using the vxdiskadm menu. You must use a CLI command such as vxdiskunsetup.
6 Add two disks to the free disk pool and view the status of the disk devices to verify your action.
To add disks, select option 1 from the vxdiskadm main menu. To view the status, return to the vxdiskadm main menu and type list.
7 Remove one of the disks from the free disk pool and return it to an uninitialized state. View the status of the disk devices to verify your action.
You cannot return a disk to an uninitialized state by using the vxdiskadm menu. You must use a CLI command, such as vxdiskunsetup.


8 Add the same disk back to the free disk pool. Notice that you still have to go through an initialize step even though the disk had been initialized earlier. View the status of the disk devices to verify your action.
To add disks, select option 1 from the vxdiskadm main menu. To view the status, return to the vxdiskadm main menu and type list.


Lab 5 Solutions: Managing Disk Groups


Introduction
In this lab, you create new disk groups, remove disks from disk groups, deport and import disk groups, and destroy disk groups. This lab includes three separate exercises: The first exercise uses the VEA interface. The second exercise uses the command line interface. The third exercise is optional and requires participation from the whole class. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Managing Disk Groups: VEA
1 Run and log on to the VEA interface.
# vea &
2 View all the disk devices on the system.
In the object tree, select the Disks node and view the disks in the grid.
3 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
Select the Disk Groups node and select Actions>New Dynamic Disk Group. In the New Dynamic Disk Group wizard, type a name for the disk group, select a disk to be placed in the disk group, and click Add. Click Next, confirm your selection, and click Finish.
4 Add one more disk to your disk group. Initialize the disk and view all the disk devices on the system.
Select an unused disk and select Actions>Add Disk to Dynamic Disk Group. In the Add Disk to Dynamic Disk Group Wizard, select the disk group name, and verify or change the list of disks under Selected disks. Click Next, confirm your selection, and click Finish.


5 Remove all of the disks from your disk group. What happens to your disk group?
Select a disk that is in your disk group, and select Actions>Remove Disk from Dynamic Disk Group. In the Remove Disk dialog box, click Add All to select all disks in the disk group for removal, and click OK.
All disks are returned to an uninitialized state, and the disk group is destroyed.
6 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg.
Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
View all the disk devices on the system.
Select the Disk Groups node and select Actions>New Dynamic Disk Group. In the New Dynamic Disk Group Wizard, type a name for the disk group, select a disk to be placed in the disk group, and click Add. Click Next, confirm your selection, and click Finish.
7 Deport your disk group. Do not give it a new owner. View all the disk devices on the system.
Select the disk group and select Actions>Deport Dynamic Disk Group. Confirm your request, then click OK in the Deport Disk Group dialog box.
8 Take the disk that was in your disk group and add it to rootdg. Were you successful?
Select the disk and display the Actions menu. The Add Disk to Dynamic Disk Group option is disabled, because the disk group is deported.
9 Import your datadg disk group and view all the disk devices on the system.
Select the disk group and select Actions>Import Dynamic Disk Group. In the Import Dynamic Disk Group dialog box, click OK.
10 Deport datadg and assign your machine name, for example, train5, as the New Host.
Select the disk group and select Actions>Deport Dynamic Disk Group. Confirm your request. In the Deport Dynamic Disk Group dialog box, type your machine name in the New Host field and click OK.


11 Import the disk group and change its name to data3dg. View all the disk devices on the system.
Select the disk group and select Actions>Import Dynamic Disk Group. Confirm your request. In the Import Dynamic Disk Group dialog box, type data3dg in the New Name field, and click OK.
12 Deport the disk group data3dg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do this?
Select the disk group and select Actions>Deport Dynamic Disk Group. Confirm your request. In the Deport Dynamic Disk Group dialog box, type anotherhost in the New Host field.
In the list of disks, the status of the disk is displayed as Foreign. You would do this to ensure that the disks are not imported accidentally.
13 Import data3dg. Were you successful?
Select the disk group and select Actions>Import Dynamic Disk Group. In the Import Dynamic Disk Group dialog box, click OK.
This operation should fail, because data3dg belongs to another host.
14 Now import data3dg and overwrite the disk group lock. What did you have to do to import it and why?
Select the disk group and select Actions>Import Dynamic Disk Group. In the Import Dynamic Disk Group dialog box, mark the Clear host ID check box, and click OK. (For the command line equivalent, see the note after step 15.)
15 Destroy data3dg. View all the disk devices on the system.
Select the host machine and select Actions>Destroy Dynamic Disk Group. In the Destroy Dynamic Disk Group dialog box, type the name of the disk group to be destroyed. Click OK and confirm your action.
At the end of this lab you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks or in the free disk pool.
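For reference, the command line equivalent of importing while clearing the disk group lock (the Clear host ID check box in step 14) is the -C option of vxdg import. This is a sketch, assuming the disk group name used in this exercise:
# vxdg -C import data3dg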


Managing Disk Groups: CLI
Note: Initialize your data disks by using the command line before beginning this lab, if the disks are not already initialized. To initialize a disk, use the command:
# vxdisksetup -i device_tag
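For example, assuming the device tags used later in this lab (substitute your own), the preparation might look like:
# vxdisksetup -i c1t8d0
# vxdisksetup -i c1t10d0
# vxdisksetup -i c1t11d0
# vxdisk list (the initialized disks should now be listed with a status of online)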

1 Create a disk group data4dg with at least one drive. Verify your action.
Command: vxdg init diskgroup disk_name=device_tag
# vxdg init data4dg data4dg01=c1t8d0
# vxdisk list
2 Deport disk group data4dg, then import the disk group back to your machine. Verify your action.
Command: vxdg deport diskgroup
# vxdg deport data4dg
Command: vxdg import diskgroup
# vxdg import data4dg
# vxdisk list
3 Destroy the disk group data4dg. Verify your action.
Command: vxdg destroy diskgroup
# vxdg destroy data4dg
# vxdisk list
4 Create a new disk group data4dg with an older version assigned to it. Verify your action.
Command: vxdg -T 20 init diskgroup disk_name=device_tag
# vxdg -T 20 init data4dg data4dg01=c1t8d0
# vxdisk list
5 Upgrade the disk group to version 60.
Command: vxdg -T 60 upgrade diskgroup
# vxdg -T 60 upgrade data4dg
6 How would you check that you have upgraded the version?
# vxdg list data4dg
Examine the version field in the output.


7 Add two more disks to the disk group data4dg. You should now have three disks in your disk group. Verify your action.
Command: vxdg -g diskgroup adddisk disk_name=device_tag
# vxdg -g data4dg adddisk data4dg02=c1t10d0
# vxdg -g data4dg adddisk data4dg03=c1t11d0
# vxdisk list
8 Remove a disk from the disk group data4dg. Verify your action.
Command: vxdg -g diskgroup rmdisk disk_name
# vxdg -g data4dg rmdisk data4dg01
# vxdisk list
9 Deport disk group data4dg and assign the host name as the host name of your machine. Verify your action.
Command: vxdg -h hostname deport diskgroup
# vxdg -h hostname deport data4dg
# vxdisk list
10 View the status of the disks in the deported disk group using vxdisk list device_tag. What is in the hostid field?
# vxdisk list c1t10d0
The hostid is the name of your machine.
11 Remove a disk from data4dg. Why does this fail?
Command: vxdg -g diskgroup rmdisk disk_name
# vxdg -g data4dg rmdisk data4dg03
The operation fails, because you are trying to remove a disk from a deported disk group.
12 Import the disk group data4dg. Verify your action.
Command: vxdg import diskgroup
# vxdg import data4dg
# vxdisk list
13 Try again to remove a disk from data4dg. Does it work this time?
Command: vxdg -g diskgroup rmdisk disk_name
# vxdg -g data4dg rmdisk data4dg03
The operation is successful, because the disk group is imported.


14 Deport the disk group data4dg and do not assign a host name. Verify your action.
Command: vxdg deport diskgroup
# vxdg deport data4dg
# vxdisk list
15 View the status of the disk in the deported disk group using vxdisk list device_tag. What is in the hostid field?
# vxdisk list c1t10d0
The hostid is now empty.
16 Add the disk in data4dg to rootdg. Were you successful?
Command: vxdg -g diskgroup adddisk disk_name=device_tag
# vxdg -g rootdg adddisk data4dg02=c1t10d0
The operation should fail, because you are trying to add a disk to rootdg from a deported disk group.
17 Uninitialize a disk that is in data4dg. Were you successful?
Command: vxdiskunsetup device_tag
# vxdiskunsetup c1t10d0
This operation should be successful.
18 Import the disk group data4dg. Were you successful?
Command: vxdg import diskgroup
# vxdg import data4dg
This operation fails, because there are no disks left in the disk group.
At the end of this lab you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks.


Group Activity: Managing Disk Groups (Optional)
The purpose of this lab is to physically deport a one-disk disk group with a file system to another host, import the disk group onto the new host, and remount the file system onto the new host. Then deport the disk group back to the original host.
This lab can be performed on a pair of systems sharing physical access to the same disk array, or between unconnected systems if you have removable disk packs.
If you have removable disk packs, this lab is best performed with the whole class, with participants working initially on their own machines and then physically moving their disk groups to a host machine. The lab requires a host machine with empty slots in the multipack. (Remove all the disks from the disk pack and run devfsadm and vxdctl enable.) This host can be a spare machine, or it can be one of the delegate machines.
If you have shared access to a disk array with another student and do not want to physically move disk packs, participants work initially on their own machines and then logically move their disk groups to another machine that shares physical access to their disk array.
It is important that the names of the disk groups and volumes be unique throughout the classroom for this exercise. As a recommendation, each participant (or team) should name both the disk group and the volume using their own name. For example, Jane Doe should use jdoedg and jdoevol.
Disk group: yournamedg
Volume: yournamevol
1 Create a disk group with one disk in it called yournamedg.
# vxdg init yournamedg disk_name=device_tag
2 Create a volume called yournamevol in this disk group.
# vxassist -g yournamedg make yournamevol 500m
3 Create a file system on this volume.
# newfs /dev/vx/rdsk/yournamedg/yournamevol
4 Create a directory and mount the file system.
# mkdir /mount_point
# mount /dev/vx/dsk/yournamedg/yournamevol /mount_point
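As elsewhere in this course, a VERITAS file system can be used in place of UFS. A sketch of the equivalent commands for steps 3 and 4 (the device paths follow from the names above):
# mkfs -F vxfs /dev/vx/rdsk/yournamedg/yournamevol
# mkdir /mount_point
# mount -F vxfs /dev/vx/dsk/yournamedg/yournamevol /mount_point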


5 Create a uniquely recognizable file in the root of the mounted file system.
# echo "My name is Jane Doe" > /mount_point/jane_doe
6 Unmount the file system.
# umount /mount_point
7 Deport the disk group to the new host.
# vxdg -h hostname deport yournamedg
8 If you are not physically moving the disks, import your disk group on the other machine and proceed to the next step in the lab.
# vxdg import yournamedg
If you are physically moving the disks, remove the disk from the old host and place it in an empty slot in the new host. After all the empty slots in the multipack are full and all of the disks have spun up, the instructor will continue the lab as a demonstration with the following substeps on the new host:
a Demonstrate that the OS cannot detect the disks.
# format
b Demonstrate that VxVM cannot detect the disks.
# vxdisk list
c Configure the devices.
# devfsadm
d Demonstrate that the OS can now detect the disks, but that VxVM still cannot detect the disks.
# format
# vxdisk list
e Force the VxVM configuration daemon to rescan for the disks.
# vxdctl enable
f Demonstrate that VxVM can now detect the disks.
# vxdisk list
g Import one or more of the disk groups. If the participants deported the disk group correctly, vxdisk list displays the new disk groups as imported disk groups on the new host. Otherwise, import the disk group using the -C option.
# vxdg import yournamedg
# vxdisk list
9 Display the state of the volumes using vxprint and VEA. The volumes are displayed with an alert and stopped.
# vxprint


10 Start the volumes by using VEA or vxvol start volume_name. You may need to specify -g diskgroup if the volume name is not unique.
11 Create a new mount point and mount one of the volumes. Demonstrate that all the files are still accessible. (For sample commands, see the sketch after step 15.)
12 Unmount the volume.
13 If you did not physically move the disks:
a Deport the disk group without changing the host name.
# vxdg deport yournamedg
b Import the disk group back on your original machine.
# vxdg import yournamedg
If the disks were physically moved:
a At the end of the demonstration, participants should move their disk groups back to their own machines (without deporting).
b Import the disk groups on their own machines. This simulates recovery after a host crash. You must use the -C option to do an import.
# vxdg -C import yournamedg
14 Display the disk groups on your system.
# vxdg list
15 Destroy the practice disk group.
# vxdg destroy yournamedg
At the end of this lab you should have one disk in rootdg (the boot disk). Leave all other disks as uninitialized disks.
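The commands for steps 11 and 12 are not spelled out above. A minimal sketch, assuming the volume and file names created earlier and a hypothetical mount point /newmount:
# mkdir /newmount
# mount /dev/vx/dsk/yournamedg/yournamevol /newmount
# cat /newmount/jane_doe (verify that the file is still accessible)
# umount /newmount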


Lab 6 Solutions: Creating a Volume


Introduction
In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume, creating a volume with a file system, and mounting a file system. Attempt to perform this lab using command line interface commands. If you use object names other than the ones provided, substitute the names accordingly in the commands. After each step, use the VEA interface to view the volume layout in the main window and in the Volume View window. Solutions for performing tasks from the command line and using the VERITAS Enterprise Administrator (VEA) are included in the Lab Solutions appendix.
Setup
A minimum of four disks is required to perform this lab, not including the root disk.

Creating Volumes: CLI
1 Add four initialized disks to a disk group called datadg. Verify your action using vxdisk list.
Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
Create a new disk group and add disks:
Command: vxdg init diskgroup disk_name=device_tag
# vxdg init datadg datadg01=c1t8d0 datadg02=c1t9d0
Add the remaining disks to the disk group (substitute your own device tags for the third and fourth disks):
Command: vxdg -g diskgroup adddisk disk_name=device_tag
# vxdg -g datadg adddisk datadg03=c1t3d0 datadg04=c1t4d0
2 Create a 50-MB concatenated volume with one drive.
Command: vxassist -g diskgroup make volume_name size
# vxassist -g datadg make vol01 50m
3 Display the volume layout. What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
# vxprint -g datadg -thf | more
4 Remove the volume.
Command: vxedit -g diskgroup -rf rm volume_name
# vxedit -g datadg -rf rm vol01


5 Create a 50-MB striped volume on two disks and specify which two disks to use in creating the volume.
Command: vxassist -g diskgroup make volume_name size layout=stripe disk1 disk2
# vxassist -g datadg make vol02 50m layout=stripe datadg01 datadg02
What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
# vxprint -g diskgroup -thf | more
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K.
Command: vxassist -g diskgroup make volume_name size layout=stripe,mirror ncol=number_columns stripeunit=size [disks]
# vxassist -g datadg make vol03 20m layout=stripe,mirror ncol=2 stripeunit=128k
What do you notice about the plexes?
View the volume using vxprint -g datadg -thf | more. Notice that you now have a second plex.
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use.
Command: vxassist -g diskgroup make volume_name size layout=stripe,mirror ncol=number_columns stripeunit=size !disk
# vxassist -g datadg make vol04 20m layout=stripe,mirror ncol=2 stripeunit=128k !datadg03
Was the volume created?
This operation should fail, because after datadg03 is excluded, only three disks remain available in the disk group. A two-column striped mirror requires at least four disks (two columns in each of two plexes).
8 Create a 20-MB striped volume with a mirror that has one less column (3) than the number of drives.
Command: vxassist -g diskgroup -b make volume_name size layout=stripe,mirror ncol=number_columns disks
# vxassist -g datadg -b make vol04 20m layout=stripe,mirror ncol=3 datadg01 datadg02 datadg03


Was the volume created?
Again, this operation should fail, because there are not enough disks available in the disk group. A mirrored three-column striped volume requires at least six disks (three columns in each of two plexes), and only three disks were specified.
9 Create the same volume specified in step 8, but without the mirror.
Command: vxassist -g diskgroup -b make volume_name size layout=stripe ncol=number_columns
# vxassist -g datadg -b make vol05 20m layout=stripe ncol=3 datadg01 datadg02 datadg03
What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
# vxprint -g diskgroup -thf | more
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number of drives in the disk group.
Command: vxassist -g diskgroup make volume_name size layout=raid5 ncol=number_columns disks
# vxassist -g datadg make vol06 100m layout=raid5 ncol=4 datadg01 datadg02 datadg03 datadg04
Was the volume created?
This operation should fail, because when you create a RAID-5 volume, a RAID-5 log is created by default. Therefore, at least five disks are required for this volume configuration.
Run the command again, but use one less column.
# vxassist -g datadg make vol06 100m layout=raid5 ncol=3 datadg01 datadg02 datadg03
What is different about the structure?
View the volume using vxprint -g datadg -thf | more. Notice that you now have a log plex.
11 Remove the volumes created in this exercise.
For each volume:
Command: vxedit -g diskgroup -rf rm volume_name
# vxedit -g datadg -rf rm vol01


More Practice (Optional)
This optional guided practice illustrates how to use the /etc/default/vxassist and /etc/default/alt_vxassist files to create volumes with defaults specified by the user.
1 Create two files in /etc/default:
# cd /etc/default
a Create a file called vxassist that includes the following:
# when mirroring create three mirrors
nmirror=3
b Create a file called alt_vxassist that includes the following:
# use 256K as the default stripe unit size for regular volumes
stripe_stwid=256k
2 Use these files when creating the following volumes:
Create a 100-MB volume using layout=mirror:
# vxassist -g datadg make testvol 100m layout=mirror
Create a 100-MB, two-column stripe volume using -d alt_vxassist so that Volume Manager uses the alternate defaults file:
# vxassist -g datadg -d alt_vxassist make testvol2 100m layout=stripe
3 View the layout of these volumes using VEA and by using vxprint (see the example after step 4). What do you notice?
The first volume should show three plexes rather than the standard two. The second volume should show a stripe size of 256K instead of the standard 64K.
4 Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.
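For step 3, a command line check might look like the following (assuming the volume names used above; vxprint accepts multiple record names on one line):
# vxprint -g datadg -rth testvol testvol2
Look for three plex lines under testvol, and a stripe width of 512 sectors (256K) in the plex record of testvol2.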


Creating Volumes: VEA Solutions
1 Add four initialized disks to a disk group called datadg. Verify your action in the main window.
Create a new disk group and add disks: Select a disk, and select Actions>New Dynamic Disk Group. In the New Dynamic Disk Group wizard, specify the disk group name, select the disks you want to use from the Available disks list, and click Add. Click Next, confirm your selection, and click Finish.
2 Create a 50-MB concatenated volume with one drive.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 50 MB. Verify that the Concatenated layout is selected in the Layout region. Complete the wizard by accepting all remaining defaults to create the volume.
3 Display the volume layout. Notice the naming convention of the plex and subdisk.
Select the volume in the object tree, and select Actions>Volume View. In the Volumes window, click the Expand button. Compare the information in the Volumes window to the information under the Mirrors, Logs, and Subdisks tabs in the right pane of the main window.
4 Remove the volume.
Select the volume, and select Actions>Delete Volume. In the Delete Volume dialog box, click Yes.
5 Create a 50-MB striped volume on two disks, and specify which two disks to use in creating the volume.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 50 MB. Select the Striped option in the Layout region. Verify that the number of columns is 2. Click Next, and on the Select disks to use for volume page, select Manually select disks for use by this volume. Move two disks into the Included box, and then click Next. Complete the wizard by accepting all remaining defaults to create the volume.


View the volume.
Select the volume, and select Actions>Volume View. Close the Volumes window when you are satisfied.
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256 sectors, which is 128K (256 sectors x 512 bytes per sector). Mark the Mirrored check box in the Mirror Info region.
Complete the wizard by accepting all remaining defaults to create the volume.
View the volume. Notice that you now have a second plex.
Select the volume, and select Actions>Volume View. Close the Volumes window when you are satisfied.
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256 sectors (128K). Mark the Mirrored check box in the Mirror Info region.
Click Next, and on the Select disks to use for volume page, select Manually select disks for use by this volume. Move one disk into the Excluded box, and then click Next. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
This operation should fail, because there are not enough disks available in the disk group. A two-column striped mirror requires at least four disks.
8 Create a 20-MB striped volume with a mirror with one less column than the number of drives.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 20 MB.


Select the Striped option in the Layout region. Change the number of columns to 3. Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
Again, this operation should fail, because there are not enough disks available in the disk group. A mirrored three-column striped volume requires at least six disks.
9 Create the same volume specified in step 8, but without the mirror.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 20 MB.
Select the Striped option in the Layout region. Change the number of columns to 3. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
Yes, the volume is created this time.
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number of drives in the disk group.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 100 MB.
Select the RAID-5 option in the Layout region. Change the number of columns to 4. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
This operation should fail, because when you create a RAID-5 volume, a RAID-5 log is created by default. Therefore, at least five disks are required for this volume configuration.
Run the wizard again, but use one less column.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, verify the name of the disk group, type the name of the volume, and specify a size of 100 MB.


Select the RAID-5 option in the Layout region. Verify that the number of columns is 3. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
Yes, the volume is created this time.
11 Delete all volumes from the disk group.
For each volume, select the volume and select Actions>Delete Volume. Click Yes to delete the volume.


Lab 7 Solutions: Configuring Volumes


Introduction
This lab provides additional practice in configuring volume attributes. In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes. You also practice creating layered volumes.
Setup
Before you begin this lab, ensure that any volumes created in previous labs have been removed.

Configuring Volume Attributes: CLI
Complete this exercise by using the command line interface. If you use object names other than the ones provided, substitute the names accordingly in the commands. Solutions for performing these tasks from the command line and using VEA are described in the Lab Solutions appendix.
1 Create a 20-MB, two-column striped volume with a mirror.
# vxassist -g diskgroup make volume_name 20m layout=stripe,mirror ncol=2
2 Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
# vxprint -rth
Notice, for example, that the first plex uses disks datadg02 and datadg03, and the second plex uses the disks datadg04 and datadg01.
3 Remove the volume you just made, and re-create it by specifying the four disks in order of highest target first (for example, datadg04, datadg03, datadg02, datadg01, where datadg04=c1t15d0, datadg03=c1t14d0, and so on).
# vxassist -g diskgroup remove volume volume_name
# vxassist -g diskgroup -o ordered make volume_name 20m layout=stripe,mirror ncol=2 datadg04 datadg03 datadg02 datadg01
4 Display the volume layout. How are the disks allocated this time?
# vxprint -rth
The plexes are now allocated in the order specified on the command line. For example, the first plex uses datadg04 and datadg03 and the second plex uses datadg02 and datadg01.
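To see exactly which physical device each column landed on, you can also list just the subdisk records (a quick check; the -s option limits vxprint to subdisk records, and the device column shows the underlying disk):
# vxprint -g diskgroup -st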


5 Add a mirror to the existing volume.
# vxassist -g diskgroup mirror volume_name
Were you successful? Why or why not?
The original volume already occupied all four disks in the disk group. To add another mirror requires two extra disks.
6 Remove one of the two mirrors, and display the volume layout.
# vxplex -g diskgroup -o rm dis plex_name
# vxprint -rth
7 Add a mirror to the existing volume, and display the volume layout.
# vxassist -g diskgroup mirror volume_name
# vxprint -rth
The original order of the four disks was preserved.
8 Add a dirty region log to the existing volume and specify the disk to use for the DRL. Display the volume layout.
# vxassist -g diskgroup addlog volume_name logtype=drl disk_name
# vxprint -rth
9 Change the volume read policy to round robin, and display the volume layout.
# vxvol -g diskgroup rdpol round volume_name
# vxprint -rth
10 Create a file system for the existing volume.
# newfs /dev/vx/rdsk/diskgroup/volume_name
Or, to create a VxFS file system:
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name
11 Mount the file system at the mount point /mydirectory and add files. Verify that the files were added to the new volume.
Create a mount point:
# mkdir /mydirectory
Mount the file system:
# mount -F ufs /dev/vx/dsk/diskgroup/volume_name /mydirectory
Or, if using a VxFS file system:
# mount -F vxfs /dev/vx/dsk/diskgroup/volume_name /mydirectory

12 View the mount points using df -k. Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk.
Select the disk group, and select Actions>Disk/Volume Map. In the Volume to Disk Mapping window, click the triangle to the left of each disk name to view the subdisks.
13 Unmount and remove the volume with the file system.
Unmount the file system:
# umount /mydirectory
Remove the volume:
# vxassist -g diskgroup remove volume volume_name

Configuring Volume Attributes: VEA Solutions
1 Create a 20-MB, two-column striped volume with a mirror.
Highlight a disk group and select Actions>New Volume. Complete the New Volume wizard.
Notice that, by default in VEA, a log is created for the mirrored volume. This is not the case when creating a mirrored volume from the command line.
2 Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
Highlight the volume, click each of the tabs in the right pane, and notice the information under the Mirrors, Logs, and Subdisks tabs.
Select Actions>Volume View, click the Expand button, and compare the information to the information in the main window.
3 Remove the volume you just made, and re-create it by specifying the four disks in order of highest target first (for example, datadg04, datadg03, datadg02, datadg01, where datadg04=c1t15d0, datadg03=c1t14d0, and so on).
When you create the volume, in the Select disks to use for volume page of the New Volume wizard, select Manually select disks for use by this volume. Move the disks into the Included box in the desired order, mark the Ordered check box, click Next, and click Finish.
4 Display the volume layout. How are the disks allocated this time?
Highlight the volume and click each of the tabs in the right pane. Notice the information in the Mirrors, Logs, and Subdisks tabs.


Select Actions>Volume View, click the Expand button, and compare the information to the information in the main window.
5 Remove a mirror and update the layout display. What happened?
Highlight the volume, and click the Mirrors tab in the right pane. Right-click a plex, and select Actions>Remove Mirror. In the Remove Mirror dialog box, click Yes.
Only the selected plex is removed.
6 Add a mirror to the existing volume and show the layout.
Highlight the volume to be mirrored, and select Actions>Mirror>Add. Complete the Add Mirror dialog box and click OK.
Remove a mirror using the vxassist command and investigate the resulting layout with VEA. What happened?
# vxassist -g diskgroup remove mirror volume_name
Both a plex and the dirty region log have been removed.
Add a mirror to the existing volume with VEA and show the layout. Is the log re-created?
Highlight the volume to be mirrored, and select Actions>Mirror>Add. Complete the Add Mirror dialog box and click OK. Highlight the volume and click the Logs tab.
Notice that a dirty region log is not created automatically when you add a mirror to an existing volume.
7 Add a dirty region log to the existing volume, specify the target disk for the log, and then show the layout.
Highlight the volume, and select Actions>Log>Add. Complete the Add Log dialog box, specify a target disk for the log, and click OK. Highlight the volume and click the Logs tab.
8 Change the volume read policy to round robin.
Highlight the volume, and select Actions>Set Volume Usage. Select Round robin and click OK.
9 Create a file system for the existing volume.
Highlight the volume, and select Actions>File System>New File System. In the New File System dialog box, specify a mount point for the volume, and click OK.


10 Add files to the new volume. Verify that the files were added to the new volume.
After adding files to the file system, you can verify that files were added by displaying file system information. Expand the File Systems node in the object tree, right-click the file system in the right pane, and select Properties.
Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk.
Highlight the disk group and select Actions>Disk/Volume Map.
11 Unmount and remove the volume with the file system.
Highlight the volume, and select Actions>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.

Creating Layered Volumes: VEA
Complete this exercise by using the VEA interface.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 First, remove any volumes that you created in the previous lab.
To remove a volume, highlight a volume in the main window, and select Actions>Delete Volume.
2 Create a 100-MB Striped Pro volume with no logging.
Select a disk group in the main window. Select Actions>New Volume. In the New Volume wizard, specify a volume name, specify a volume size of 100 MB, and select a Striped Pro layout. Clear the Enable logging check box, and complete the wizard by accepting all remaining defaults to create the volume.
What command was used to create this volume? Hint: View the task properties.
Click the Tasks tab at the bottom of the screen. In the Tasks tab, right-click the latest Create Volume task and select Properties. The command issued is displayed in the Commands Executed field.


3 Create a Concatenated Pro volume with no logging. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 8 GB, then create a 10-GB volume.
Select a disk group, and select Actions>New Volume. In the New Volume wizard, specify a volume name, an appropriate volume size, and select a Concatenated Pro layout. Clear the Enable logging check box, and complete the wizard by accepting all remaining defaults to create the volume.
What command was used to create this volume?
Click the Tasks tab at the bottom of the screen. In the Tasks tab, right-click the latest Create Volume task and select Properties. The command issued is displayed in the Commands Executed field.
4 View the volumes in VEA and compare the layouts.
Highlight the disk group and select Actions>Volume View. Click the Expand button in the Volumes window.
You can also highlight each volume in the object tree and view information in the tabs in the right pane. Notice the information on the Mirrors, Logs, and Subdisks tabs.
5 View the volumes from the command line.
# vxprint -rth volume_name
6 Remove all of the volumes.
To remove a volume, use the command:
# vxedit -g diskgroup -rf rm volume_name


Lab 8 Solutions: Volume Maintenance


Introduction
In this lab, you resize volumes, change volume layouts, and create volume snapshots.
Setup
To perform this lab, you should have at least four disks in the disk group that you are using. You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued.

Resizing a Volume
1 If you have not already done so, remove the volumes created in the previous lab.
VEA: For each volume in your disk group, highlight the volume, and select Actions>Delete Volume.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name
2 Create a 20-MB concatenated mirrored volume with a file system /myfs, and mount the volume.
VEA: Highlight the disk group, and select Actions>New Volume. Specify a volume name, the size, a concatenated layout, and select mirrored. Ensure that Enable logging is not checked. Add a UFS file system and set a mount point.
CLI:
# vxassist -g diskgroup make volume_name 20m layout=mirror
# newfs /dev/vx/rdsk/diskgroup/volume_name
# mkdir /myfs
# mount -F ufs /dev/vx/dsk/diskgroup/volume_name /myfs


3 View the layout of the volume.
VEA: Highlight the volume and click each of the tabs in the right pane to display information about Mirrors, Logs, and Subdisks. You can also select Actions>Volume View, click the Expand button, and compare the information to the main window.
CLI:
# vxprint -rth
4 Add data to the volume and verify that the file has been added.
# echo "hello myfs" > /myfs/hello
# cat /myfs/hello
5 Expand the file system and volume to 100 MB.
VEA: Highlight the volume and select Actions>Resize Volume. In the Resize Volume dialog box, specify 100 MB in the New volume size field, and click OK.
CLI:
# vxresize -g diskgroup volume_name 100m

Changing the Volume Layout
1 Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K). Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
VEA: Highlight the volume and select Actions>Change Layout. In the Change Volume Layout dialog box, select a Striped layout, specify two columns, and click OK.
To monitor the progress of the relayout, the Relayout status monitor window is automatically displayed when you start the relayout operation.
When you view the task properties of the relayout operation, notice that two commands are issued:
# vxassist -t taskid -g diskgroup relayout volume_name layout=mirror-stripe nmirror=2 ncol=2 stripeunit=128
# vxassist -g diskgroup convert volume_name layout=mirror-stripe


CLI:
To begin the relayout operation:
# vxassist -g diskgroup relayout volume_name layout=mirror-stripe ncol=2 stripeunit=128
To monitor the progress of the task, run:
# vxtask monitor
Run vxprint to display the volume layout. Notice that a layered layout is created:
# vxprint -rth
Recall that when you relayout a volume to a striped layout, a layered layout is created. You must then use vxassist convert to complete the conversion to a nonlayered mirror-stripe:
# vxassist -g diskgroup convert volume_name layout=mirror-stripe
Run vxprint to confirm the resulting layout. Notice that the volume is now a nonlayered volume:
# vxprint -rth
2 Verify that the file is still accessible.
# cat /myfs/hello
3 Unmount the file system on the volume and remove the volume.
VEA: Highlight the volume, and select Actions>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name

Performing Volume Snapshot Operations
1 Create a 500-MB volume named vol01 with a file system /myfs, and mount the file system on the volume.
VEA: Highlight the disk group, and select Actions>New Volume. Specify a volume name, the size, a concatenated layout, and no mirror. Add a file system and set a mount point as /myfs.


CLI:
# vxassist -g datadg make vol01 500m
# newfs /dev/vx/rdsk/datadg/vol01
# mkdir /myfs (if the directory does not already exist)
# mount -F ufs /dev/vx/dsk/datadg/vol01 /myfs
Or, if using VxFS:
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01
# mkdir /myfs (if the directory does not already exist)
# mount -F vxfs /dev/vx/dsk/datadg/vol01 /myfs
2 Add data to the volume and verify that the data has been added.
# echo "hello myfs" > /myfs/hello
# ls -l /myfs
3 Start the snapstart phase of creating a snapshot of the volume.
VEA: Highlight the volume and select Actions>Snap>Snap Start. In the Snap Start Volume dialog box, let VxVM determine which disks to use, and click OK. A snapshot mirror is added to the volume.
CLI:
# vxassist -g datadg -b snapstart vol01
4 Add another file to /myfs.
# echo "file added during snapstart" > /myfs/anotherfile

5 Complete the snapshot of the volume. Name the snapshot volume snapshot_vol01.
VEA: Highlight the volume that has the snapshot mirror, and select Actions>Snap>Snap Shot. In the Snap Shot Volume window, specify a snapshot name of snapshot_vol01, select the snapshot mirror to be used in creating the snapshot volume, and click OK.
CLI:
# vxassist -g datadg snapshot vol01 snapshot_vol01
6 Mount the snapshot volume to /snapmyfs.
VEA: Highlight the snapshot volume and select Actions>File System>Mount File System. In the Mount File System dialog box, specify the file system type and the mount point as /snapmyfs, and click OK.


CLI:
# mkdir /snapmyfs
# mount -F ufs /dev/vx/dsk/datadg/snapshot_vol01 /snapmyfs
Or, if using VxFS:
# mkdir /snapmyfs
# mount -F vxfs /dev/vx/dsk/datadg/snapshot_vol01 /snapmyfs
7 View the files in /myfs and /snapmyfs. They should be identical.
# ls -l /myfs /snapmyfs
8 Add more data to /myfs.
# echo "hello again myfs" > /myfs/helloagain
Are the two file systems the same now? Why?
# ls -l /myfs /snapmyfs
No, the two file systems are no longer the same. The snapshot is a separate point-in-time copy, so changes made to /myfs after the snapshot was taken are not reflected in /snapmyfs.
9 Add more data to the snapshot volume. You can add data from /usr/sbin by copying /usr/sbin/s* to /snapmyfs.
# cp /usr/sbin/s* /snapmyfs
Note: If you are unable to copy data to /snapmyfs, check to ensure that the file system has not been mounted read-only.
10 Unmount the snapshot volume.
VEA: Highlight the volume and select Actions>File System>Unmount File System. When prompted, click Yes to unmount the file system.
CLI:
# umount /snapmyfs
11 Unmount the original volume and reassociate the snapshot with the volume, resynchronizing the volumes by using the snapshot.
VEA: Highlight the original volume and select Actions>File System>Unmount File System. When prompted, click Yes to unmount the file system.
Highlight the snapshot volume and select Actions>Snap>Snap Back. In the Snap Back Volume window, select the Resynchronize using the snapshot option, and click OK.

CLI:
# umount /myfs
# vxassist -g datadg -o resyncfromreplica snapback snapshot_vol01
12 Create another snapshot volume. After you create the snapshot, permanently break the association between the snapshot and the original volume.
VEA: Highlight the volume and select Actions>Snap>Snap Start. In the Snap Start Volume dialog box, let VxVM determine which disks to use, and click OK. A snapshot mirror is added to the volume.
Highlight the volume that has the snapshot mirror, and select Actions>Snap>Snap Shot. In the Snap Shot Volume window, specify a snapshot name of snapshot_vol01, select the snapshot mirror to be used in creating the snapshot volume, and click OK.
After you create the snapshot, highlight the snapshot volume and select Actions>Snap>Snap Clear. When prompted, click Yes to clear the snapshot volume.
CLI:
# vxassist -g datadg -b snapstart vol01
Wait for the snapstart operation to complete.
# vxassist -g datadg snapshot vol01 snapshot_vol01
After you create the snapshot, use the snapclear option:
# vxassist -g datadg snapclear snapshot_vol01
13 Attempt to reassociate the snapshot with the volume. Does this work? If not, why not?
VEA: Highlight the snapshot volume and attempt to select Actions>Snap>Snap Back. The Snap Back option is not available.
CLI:
# vxassist -g datadg snapback snapshot_vol01
The operation fails, because the record of the association between the original volume and the snapshot plex was removed by the snapclear operation.
14 Unmount any file systems and remove any volumes created in this exercise.
VEA:


Highlight the volume, and select Actions>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name

Monitoring Tasks (Optional)
Objective: In this advanced section of the lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash.
Setup: You should have at least four disks in the disk group that you are using.
1 Create a mirror-stripe volume with a size of 1 GB using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
VEA: Highlight a disk group and select Actions>New Volume. Specify a volume name, the size, a striped layout, and select mirrored. Ensure that Enable logging is not checked. Add a VxFS file system and create a mount point.
Note: You cannot assign a task tag when using VEA.
CLI:
# vxassist -g diskgroup -b -t task_name make volume_name 1g layout=mirror-stripe
2 View the progress of the task.
VEA: Click the Tasks tab at the bottom of the main window to display the task and the percent complete.
CLI:
# vxtask list task_name
or
# vxtask monitor
3 Slow down the task progress rate to insert an I/O delay of 100 milliseconds.
VEA: Right-click the task in the Tasks tab, and select Throttle Task. Specify 100 as the Throttling value, and click OK.


CLI:
# vxtask set slow=100 task_name
View the layout of the volume in the VEA interface.
4 After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
VEA: Highlight the volume and select Actions>Change Layout. In the Change Volume Layout dialog box, select a Striped Pro layout. Change the stripe unit size value to 512.
CLI:
# vxassist -g diskgroup -t task_name relayout volume_name layout=stripe-mirror stripeunit=256k ncol=2
5 In another terminal window, abort the task to simulate a crash during relayout.
VEA: In the Relayout status monitor window, click Abort.
CLI:
# vxtask abort task_name
View the layout of the volume in the VEA interface.
6 Reverse the relayout operation.
VEA: In the Relayout status monitor window, click Reverse.
CLI:
# vxrelayout -g diskgroup reverse volume_name
View the layout of the volume in the VEA interface.
7 Remove all of the volumes.
VEA: Highlight the volume, select Actions>Delete Volume, and click Yes.
CLI:
# vxedit -g diskgroup -rf rm volume_name


Lab 9 Solutions: Setting Up a File System


Introduction
This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line.
Setup
Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Setting Up a File System
1 Create a 500-MB striped volume named datavol in the disk group datadg and use the default number of columns and stripe unit size.
# vxassist -g datadg make datavol 500m layout=stripe
2 Create a VERITAS file system on the datavol volume using the default options.
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
3 Create a mount point /datamnt on which to mount the file system.
# mkdir /datamnt
4 Mount the newly created file system on the mount point, and use all default options.
# mount -F vxfs /dev/vx/dsk/datadg/datavol /datamnt
5 Using the newly created file system, create, modify, and remove files.
# cd /datamnt
# cp /etc/r* .
# touch file1 file2
# mkfile 64b file3
# vi newfile (Enter some content into the new file and save the file.)
# rm reboot
6 Display the content of the mount point directory, showing hidden entries, inode numbers, and block sizes of the files.
# ls -alis
7 What is the purpose of the lost+found directory?
To hold orphaned files and directories that fsck recovers and reconnects when it repairs the file system.
8 How many disk blocks are defined within the file system and are used by the file system?
# df
# df -k
# du -s .
9 Unmount the file system.
# cd /

# umount /datamnt
10 Mount and, if necessary, check the file system at boot time.
# vi /etc/vfstab
In the /etc/vfstab file, add an entry with the following fields:
device to mount: /dev/vx/dsk/datadg/datavol
device to fsck: /dev/vx/rdsk/datadg/datavol
mount point: /datamnt
FS type: vxfs
fsck pass: 2
mount at boot: yes
mount options: - (none)
That is, the completed vfstab line reads:
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /datamnt vxfs 2 yes -
Save the file and exit vi ([Esc] :wq).

11 Verify that the mount information has been accepted.
# mount -a
12 Display details of the file system that were set when it was created.
# fstyp -v /dev/vx/dsk/datadg/datavol
13 Check the structural integrity of the file system using the default log policy. (For a full structural check, see the note after step 14.)
# umount /datamnt
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol
14 Remove the volume that you created for this lab.
# vxassist -g datadg remove volume datavol
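By default, VxFS fsck replays the intent log rather than examining every file system structure. If you want to experiment further, a full structural check can be requested with the full option (a sketch; this is not required by the lab):
# fsck -F vxfs -o full -y /dev/vx/rdsk/datadg/datavol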


Administering File Systems Through VEA (Optional)
If you have time, try to perform the file system administration tasks by using the VERITAS Enterprise Administrator (VEA) graphical user interface. If all of your external disks are currently in the datadg disk group, you must remove at least two disks from the disk group in order to perform this lab.
1 Start the graphical user interface.
# vea &
2 In VEA, what disks are available?
In the object tree of the main window, expand your system node, and select Disks. In the grid, examine the Status column to determine what disks are available.
3 Create a disk group named acctdg containing two disks.
Highlight two disks that are not set up, and select Actions>New Dynamic Disk Group. In the New Dynamic Disk Group wizard, type a name for the disk group, confirm the selected disks, and type disk names for the disks. Click Next, confirm your selections, and click Finish.
4 Create a 500-MB striped volume named acctvol using the default number of columns and stripe unit size in the disk group acctdg.
In the main window, highlight the disk group. Select Actions>New Volume. In the New Volume wizard, specify the volume name, acctvol, a size of 500m, and select the Striped layout option. Complete the wizard by accepting all remaining defaults to create the volume without a file system.
5 Create a VxFS file system in the acctvol volume using the default options. Mount the newly created file system on the acctmnt mount point.
In the main window, highlight the volume to contain the file system. Select Actions>File System>New File System. In the New File System dialog box, specify the mount point as /acctmnt, mark the Add to file system table and Mount at boot check boxes, and click OK to create and mount the file system.
6 Using the newly created file system, create, modify, and remove files.
Use the command line interface as specified in the first exercise.
7 Display the content, showing hidden entries, inodes, and block sizes.
Use the command line interface as specified in the first exercise.
8 How many disk blocks are defined within and are used by the file system?
Use the command line interface as specified in the first exercise.
9 In VEA, unmount the file system.
In the main window, highlight the volume containing the file system to be unmounted. Select Actions>File System>Unmount File System. Ensure that the name of the file system to be unmounted is displayed, and click Yes.
10 Check the structural integrity of the file system.
In the main window, under the File Systems node, select the file system to be checked. Select Actions>Check File System. In the Check File System dialog box, click Yes to begin the file system check.
11 Mount the file system.
In the main window, under the File Systems node, highlight the file system to be mounted. Select Actions>Mount File System. In the Mount File System dialog box, verify that the Mount using options in file system table check box is marked, and click OK.
12 Display details of the file system that were set when it was created.
In the main window, right-click the file system, and select Properties.
13 Unmount the file system, remove the acctvol volume, and destroy the acctdg disk group that you created in this exercise. Return all of your external disks to the datadg disk group.
Highlight the volume, and select Actions>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.
Select the host machine and select Actions>Destroy Dynamic Disk Group. In the Destroy Dynamic Disk Group dialog box, type the name of the disk group to be destroyed. Click OK and confirm your action.
Select an uninitialized disk and select Actions>Add Disk to Dynamic Disk Group. Complete the Add Disk to Dynamic Disk Group wizard by adding the disks to the datadg disk group.
Lab 10 Solutions: Online File System Administration


Introduction
In this lab, you investigate and practice online file system administration tasks. You resize a file system using fsadm, back up and restore a file system using vxdump and vxrestore, and create and use a snapshot file system.

Setup
Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Resizing a File System
1 Create a 50-MB volume named reszvol in the disk group datadg by using the VERITAS Volume Manager utility vxassist.
# vxassist -g datadg make reszvol 50m
2 Create a VERITAS file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
# mkfs -F vxfs /dev/vx/rdsk/datadg/reszvol 40m
3 Create a mount point /reszmnt on which to mount the file system.
# mkdir /reszmnt
4 Mount the newly created file system on the mount point /reszmnt.
# mount -F vxfs /dev/vx/dsk/datadg/reszvol /reszmnt
5 Verify disk space using the df command. Observe that the available space is smaller than the size of the volume.
# df -k
6 Expand the file system to the full size of the underlying volume using the fsadm -b newsize option.
# fsadm -b 50m -r /dev/vx/rdsk/datadg/reszvol /reszmnt
7 Verify disk space using the df command.
# df -k
8 Make a file on the file system mounted at /reszmnt (using mkfile), so that the free space is less than 50 percent of the total file system size.
# mkfile 25m /reszmnt/myfile
9 Shrink the file system to 50 percent of its current size. What happens?
# fsadm -b 25m -r /dev/vx/rdsk/datadg/reszvol /reszmnt
The command fails. You cannot shrink the file system because blocks are currently in use.
10 Experiment with the vxresize command. Expand the file system to 100 MB and then shrink the file system down to 60 MB. Verify that the volume and file system are resized at the same time after each command is issued.
# vxresize -F vxfs -g datadg reszvol 100m
# vxprint -ht
# df -k
# vxresize -F vxfs -g datadg reszvol 60m
# vxprint -ht
# df -k
11 Unmount the file system and remove the volume.
# umount /reszmnt
# vxassist -g datadg remove volume reszvol
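As an aside to step 10: vxresize also accepts relative sizes, so you can grow or shrink by an amount rather than to a target. A sketch using the same objects as above (the +/- prefix is the only change):

# vxresize -F vxfs -g datadg reszvol +40m
# vxresize -F vxfs -g datadg reszvol -40m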

Backing Up and Restoring a File System
1 Create a 100-MB volume named fsvol. Create a file system on the volume and mount the file system as /fsorig. Copy the contents of /usr/bin onto the file system.
# vxassist -g datadg make fsvol 100m
# mkfs -F vxfs /dev/vx/rdsk/datadg/fsvol
# mkdir /fsorig
# mount -F vxfs /dev/vx/dsk/datadg/fsvol /fsorig
# cp /usr/bin/* /fsorig
2 Create a 200-MB volume to use as a backup device. Name this volume backupvol, and use different disks from the original volume.
Note: Use the vxprint command to determine which disks are in use by the original volume.
# vxprint -g datadg
# vxassist -g datadg make backupvol 200m datadg03
3 Create a file system on the backup volume and mount it on /backup.
# mkfs -F vxfs /dev/vx/rdsk/datadg/backupvol
# mkdir /backup
# mount -F vxfs /dev/vx/dsk/datadg/backupvol /backup
4 To prepare for the first backup, run the sync command several times to ensure that asynchronous I/O operations are complete before continuing.
# sync; sync
5 Using vxdump, perform a level 0 backup to back up the contents of /fsorig to the file firstdump at the mount point /backup.
# vxdump -0 -u -f /backup/firstdump /fsorig
6 Create an additional file on /fsorig.
# mkfile 50m /fsorig/newfile
7 To prepare for the second backup, run the sync command several times to ensure that asynchronous I/O operations are complete before continuing.
# sync; sync
8 Using vxdump, perform a level 1 backup to back up the contents of /fsorig to the file seconddump at the mount point /backup.
# vxdump -1 -u -f /backup/seconddump /fsorig
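If you want to confirm what a dump contains before restoring it, vxrestore can list a dump's table of contents without writing any files; the t and f options parallel their ufsrestore counterparts. A sketch against the first dump:

# vxrestore -tf /backup/firstdump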
9 Destroy /fsorig by unmounting and remaking the file system with the same name. Mount the file system on the original volume fsvol and verify that /fsorig no longer contains the original files.
# umount /fsorig
# mkfs -F vxfs /dev/vx/rdsk/datadg/fsvol
# mount -F vxfs /dev/vx/dsk/datadg/fsvol /fsorig
# cd /fsorig
# ls -l
10 Using vxrestore, restore the contents of the level 0 backup.
Note: Ensure that you are in /fsorig before you run the vxrestore command.
# cd /fsorig
# vxrestore -vrf /backup/firstdump
11 Check the contents of /fsorig for the original files.
# ls -l /fsorig
# df -k
12 Using vxrestore, restore the contents of the level 1 backup. Wait for the restore operation to complete.
# vxrestore -vrf /backup/seconddump
13 Check the contents of /fsorig for the additional file that you created.
# ls -l /fsorig/newfile

Creating a Snapshot File System
Note: Ensure that a Console window is open during this lab.
1 Create a volume called snapvol to use for a snapshot of /fsorig. Make the volume at least five percent of the size of the /fsorig file system, and create it on a different disk than the original. Make a directory called /snap.
# vxassist -g datadg make snapvol 10m datadg04
# mkdir /snap
2 Mount a snapshot of /fsorig onto the newly created volume snapvol at /snap.
# mount -F vxfs -o snapof=/fsorig /dev/vx/dsk/datadg/snapvol /snap
3 Verify that the two file systems are the same at this point by using the commands ls -al and df -k.
4 Open another terminal window and modify the original file system by removing some files, creating some new files, and updating the time stamps on the original files. Review the snapshot /snap after each action to ensure that the snapshot has not changed.
# chmod 755 /fsorig/*
# cd /fsorig
# ls r*
# rm r*
# ls -l /fsorig/r* /snap/r*
Note that the files you removed from the original file system are still present in the snapshot.
# mkfile 20k myfile
# touch script
# ls -l /fsorig/script
# ls -l /snap/script
Note the difference in time stamps.
# ls -l /fsorig/myfile
Note that the file is present.
# ls -l /snap/myfile
Note that the file is not present in the snapshot.
5 Restore some deleted files by copying them from the snapshot backup /snap to the original file system /fsorig.
# cp /snap/r* /fsorig
# ls -l /fsorig/r*
6 Create a file in /fsorig that is larger in total size than the size of the snapshot. List the contents of the snapshot. Is the large file listed in /snap?
# mkfile 20m /fsorig/largefile
# ls -l /snap/largefile
The file is not there, because it was created after the snapshot was taken.
7 Unmount the snapshot file system.
# umount /snap
8 Re-create the snapshot. Is the large file listed in /snap?
# mount -F vxfs -o snapof=/fsorig /dev/vx/dsk/datadg/snapvol /snap
# ls -l /snap/largefile
The large file is there.
9 Remove the large file in /fsorig and then copy it back from /snap. What happens?
# rm /fsorig/largefile
# cp /snap/largefile /fsorig
The file cannot be completely copied, because the original version cannot be saved in the snapshot volume.
10 Unmount the snapshot file system and remove the snapshot volume.
# umount /snap
# vxassist -g datadg remove volume snapvol
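A note on step 2: if the snapshot device is larger than the space you want the snapshot to use, the snapsize mount option can cap it. A sketch of the same mount with an explicit size (snapsize is given in sectors and, by default, the snapshot uses the whole device):

# mount -F vxfs -o snapof=/fsorig,snapsize=20480 /dev/vx/dsk/datadg/snapvol /snap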
Online File System Administration in VEA (Optional)
If you have time, try to resize a file system and create a snapshot file system by using the VERITAS Enterprise Administrator (VEA) graphical user interface.
To resize a file system using VEA:
1 Select the file system to be resized.
2 Select Actions>Resize File System.
3 Complete the Resize File System dialog box by specifying the new size and disk assignment.
To increase the file system size by a specific amount of space, use the Add by field to specify how much space should be added.
To decrease the file system size by a specific amount of space, use the Subtract by field to specify how much space should be removed.
To specify a new file system size, type the size in the New size field.
To specify the largest size possible, click the Max Size button.
4 To use a specific disk for the additional space, select Manually assign destination disks, select the disks to use, and click Add.
5 When you have provided all necessary information in the dialog box, click OK.
Notice that VEA uses the vxresize command, not the fsadm command, to perform the resize operation. To display the underlying CLI command, right-click the resize task in the Task History window at the bottom of the main window, and select Properties.
To create a snapshot file system using VEA:
1 Under the File Systems node, select the file system to be backed up.
2 Select Actions>Snapshot>Create.
3 In the Snapshot File System dialog box, verify the file system block device and mount point, and specify the snapshot mount point and snapshot size.
4 To place the snapshot on a specific disk, select Manually select which disks to use for the volume, select the disks to use, and click Add.
5 When you have provided all necessary information in the dialog box, click OK.
Lab 11 Solutions: Defragmenting a File System


Introduction
In this lab, you practice converting a UFS file system to VxFS, and you monitor and defragment a file system by using the fsadm command.

Setup
Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Converting to a VERITAS File System
1 Create a 250-MB striped volume named convol that has three columns.
# vxassist -g diskgroup make convol 250m layout=stripe ncol=3
2 Create a UFS file system on the volume convol and mount it on /con.
# newfs /dev/vx/rdsk/diskgroup/convol
# mkdir /con
# mount /dev/vx/dsk/diskgroup/convol /con
3 Copy some files into the file system and stop when the file system is about 50 percent full.
# cp -pR /opt/* /con
# cp -pR /usr/sbin/* /con
4 Unmount the file system.
# umount /con
5 Convert the file system to VxFS type using the verbose option. Note the mapping output.
# /opt/VRTSvxfs/sbin/vxfsconvert -v /dev/vx/rdsk/diskgroup/convol
6 When prompted, do not commit to the conversion.
UX:vxfs vxfsconvert: INFO: Do you wish to commit to the conversion? (ynq) n
UX:vxfs vxfsconvert: INFO: CONVERSION WAS NOT COMMITTED
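Because the conversion was not committed, the on-disk structures are still UFS at this point. You can confirm this with the same fstyp command that step 12 uses later to verify the result; here it should still report ufs:

# fstyp /dev/vx/dsk/diskgroup/convol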
7 Try to mount the file system again. What happens?
# mount /dev/vx/dsk/diskgroup/convol /con
mount: the state of /dev/vx/dsk/diskgroup/convol is not okay and it was attempted to be mounted read/write
mount: please run fsck and try again
8 Run an fsck on the file system. You should not get an error until Phase 5 of fsck.
# fsck -Y /dev/vx/rdsk/diskgroup/convol
Phase 5 - Check Cyl groups
BLK(S) MISSING IN BIT MAPS
SALVAGE? yes
9 Run the conversion again using the option to check the space required to complete the conversion.
# /opt/VRTSvxfs/sbin/vxfsconvert -ev /dev/vx/rdsk/diskgroup/convol
UX:vxfs vxfsconvert: INFO: Total of 30734K bytes required to complete the conversion
10 Try to mount the file system again. What happens this time?
# mount /dev/vx/dsk/diskgroup/convol /con
This time, the file system mounts successfully, because the file system is now clean.
11 Unmount /con and run the conversion again. This time, commit to the conversion when prompted.
# umount /con
# /opt/VRTSvxfs/sbin/vxfsconvert -v /dev/vx/rdsk/diskgroup/convol
UX:vxfs vxfsconvert: INFO: Do you wish to commit to the conversion? (ynq) y
UX:vxfs vxfsconvert: INFO: CONVERSION WAS SUCCESSFUL
12 Determine whether you now have a VxFS file system.
# fstyp /dev/vx/dsk/diskgroup/convol
13 Run an fsck on the file system. You should not get an error until Phase 4 of fsck.
# fsck -F vxfs -o full -Y /dev/vx/rdsk/diskgroup/convol
super-block indicates that intent logging was disabled
cannot perform log replay
pass0 - checking structural files
pass1 - checking inode sanity and blocks
...
fileset 1 au 0 imap incorrect - fix (ynq) y
...
OK to clear log? (ynq) y
set state to CLEAN? (ynq) y
...

14 Mount the file system as type vxfs and note that the data files are the same.
# mount -F vxfs /dev/vx/dsk/diskgroup/convol /con
# df -kv /con
# ls -l /con
15 After completing this exercise, unmount the file system and remove the volume.
# umount /con
# vxassist -g diskgroup remove volume convol

Defragmenting a File System
1 Create a new 1-GB volume with a VxFS file system mounted on /fs_test.
# vxassist -g diskgroup make volume 1g layout=stripe
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume
# mkdir /fs_test
# mount -F vxfs /dev/vx/dsk/diskgroup/volume /fs_test
2 Repeatedly copy /opt to the file system, using a new target directory name each time, until the file system is approximately 85 percent full.
# for i in 1 2 3
> do
> cp -r /opt /fs_test/opt$i
> done
3 Delete all files over 100 MB in size.
# find /fs_test -size +100m -exec rm {} \;
4 Check the level of fragmentation in the file system.
# /opt/VRTSvxfs/sbin/fsadm -D -E /fs_test
5 Repeat steps 2 and 3 using the values 4 5 for i in the loop (see the loop sketch after step 7). Fragmentation of both free space and directories will result.
6 Repeat step 2 using the values 6 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space.
# find /fs_test -size -64k -exec rm {} \;
7 Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation, and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step.
# /opt/VRTSvxfs/sbin/fsadm -e -E -d -D -s /fs_test
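As referenced in step 5, the repetition reuses the loop from step 2 with the new index values, followed by the deletion pass from step 3; step 6 works the same way with its own values. For example, for step 5:

# for i in 4 5
> do
> cp -r /opt /fs_test/opt$i
> done
# find /fs_test -size +100m -exec rm {} \;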

After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name
Lab 12 Solutions: Intent Logging


Introduction
In this lab, you investigate the impact of different intent log mount options and the impact of intent log size on file system performance.

The PostMark Benchmarking Tool
You use a benchmarking tool called PostMark to perform this lab. PostMark is an excellent tool for generating metadata changes (for example, file creation and deletion) to create stress on the parts of a file system that are sensitive to metadata workloads. PostMark is available as shareware from:
http://www.netapp.com/tech_library/postmark.html

You use the PostMark utility postmark-1_5 and a text file called pmscript that contains tunable parameters for the PostMark utility. The pmscript file must be in the same directory as postmark-1_5. The output of PostMark displays the time to complete the requested number of transactions.

Testing the Impact of Logging Mount Options
In the first part of this lab, you test performance of your VxFS file system by using different logging mount options to examine the impact of logging options. You first test performance of your VxFS file system without setting logging options. Then, you run a script that iterates the same test for each of three intent log mount options: log, delaylog, and tmplog. The tests are performed a second time after creating a 750-MB filler file. The presence of the filler file creates a physical distance between the intent log and the files being written by PostMark, which should result in more physical disk access and lower performance. The script post_log_options.sh facilitates this part of the lab.

Testing the Impact of Log Size
In the second part of this lab, you test performance of your VxFS file system by using different log sizes to examine the impact of log size on performance. The script post_log_size.sh facilitates this part of the lab.

Setup
1 Ensure that the external disks on your system are in a disk group named datadg.
2 If you have not already done so, unmount any file systems and remove any volumes from previous labs.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name
3 Locate the PostMark utility, including the pmscript file, and the lab scripts post_log_options.sh and post_log_size.sh. Ask your instructor for the location of the scripts.
Performance Impact of mount Options for Logging
1 Create and mount a 1200-MB file system on the volume logvol at the mount point /logmnt. If you use object names other than the ones provided, substitute the names accordingly in the commands.
# vxassist -g datadg make logvol 1200m
# mkfs -F vxfs /dev/vx/rdsk/datadg/logvol
# mkdir /logmnt
# mount -F vxfs /dev/vx/dsk/datadg/logvol /logmnt
2 Change to the directory that contains the PostMark and lab scripts. Ask your instructor for the location of the scripts.
3 Set the location of PostMark's write I/O to the file system mounted at /logmnt by using the command:
# echo set location=/logmnt > .pmrc
4 Run the following command to start PostMark:
# pmscript | grep "seconds of transactions"
5 Observe the output and record the results in the table at the end of the lab.
6 Remount the file system and create a 750-MB file called filler on the file system. Then, change to the lab scripts directory and re-run the PostMark commands.
# mount -F vxfs -o remount /dev/vx/dsk/datadg/logvol /logmnt
# cd /logmnt
# mkfile 750m filler
# cd lab_scripts_location
# echo set location=/logmnt > .pmrc
# pmscript | grep "seconds of transactions"
7 Observe the output and record the results in the table at the end of the lab.
8 From the directory that contains the lab scripts, examine the script post_log_options.sh. This script remounts the file system with the different logging options (log, delaylog, and tmplog) and runs the PostMark test for each iteration, both with and without a filler file (equivalent mount commands are sketched after this step). Run this script, and answer the prompts accordingly.
# post_log_options.sh
Record the results in the table at the end of the lab.
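For reference, the remounts that post_log_options.sh performs are equivalent in form to the following (a sketch; the script also repeats the PostMark run, with and without the filler file, for each option):

# mount -F vxfs -o remount,log /dev/vx/dsk/datadg/logvol /logmnt
# mount -F vxfs -o remount,delaylog /dev/vx/dsk/datadg/logvol /logmnt
# mount -F vxfs -o remount,tmplog /dev/vx/dsk/datadg/logvol /logmnt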
Performance Impact of Intent Log Size
1 Unmount the file system /logmnt. Create and mount a new file system on the volume logvol at the mount point /logmnt, and specify an intent log size of 256K. If you use object names other than the ones provided, substitute the names accordingly in the commands.
# umount /logmnt
# mkfs -F vxfs -o logsize=256k /dev/vx/rdsk/datadg/logvol
# mount -F vxfs /dev/vx/dsk/datadg/logvol /logmnt
2 Change to the directory that contains the PostMark and lab scripts. Ask your instructor for the location of the scripts.
3 Set the location of PostMark's write I/O to the file system mounted at /logmnt by using the command:
# echo set location=/logmnt > .pmrc
4 Run the following command to start PostMark:
# pmscript | grep "seconds of transactions"
5 Observe the output and record the results in the table at the end of the lab.
6 Remount the file system and create a 750-MB file called filler on the file system. Then, change to the lab scripts directory and re-run the PostMark commands.
# mount -F vxfs -o remount /dev/vx/dsk/datadg/logvol /logmnt
# cd /logmnt
# mkfile 750m filler
# cd lab_scripts_location
# echo set location=/logmnt > .pmrc
# pmscript | grep "seconds of transactions"
7 Observe the output and record the results in the table at the end of the lab.
8 From the directory that contains the lab scripts, examine the post_log_size.sh script. This script remounts the file system with different log sizes (1024K, 2048K, 4096K, 8192K, and 16384K) and runs the PostMark test for each iteration, both with and without a filler file. Run this script, and answer the prompts accordingly.
# post_log_size.sh
Record the results in the table at the end of the lab.
Summary of Results: Impact of Logging Options and Log Size
Note: Results vary depending on the nature of the data and the model of array used. Results documented in the lab solutions may be different from what you achieve in your classroom environment. No performance guarantees are implied by this lab. This lab provides a framework that you can use in benchmarking file system performance.

Logging Options
Intent Log Option     Time (seconds)   Throughput (transactions/second)   Time with Filler   Throughput with Filler
No option (default)         20                    508                            37                    270
log                         38                    263                            53                    188
delaylog                    20                    508                            37                    270
tmplog                      19                    526                            40                    217

Log Size
Intent Log Size   Time (seconds)   Throughput (transactions/second)   Time with Filler   Throughput with Filler
256K                    72                124                               79                  138
1024K                   34                294                               52                  192
2048K                   24                416                               44                  227
4096K                   15                666                               41                  243
8192K                   11                909                               38                  260
16384K                   9               1111                              35                  263

More Exploration of Intent Log Performance Tuning (Optional)
With the file system mounted, change the layout of the volume by changing the resilience level of the volume, increasing or decreasing the number of columns in a striped volume, or changing stripe unit sizes. Then, rerun the post_log_options.sh or post_log_size.sh scripts with the PostMark tests and note any changes in performance.

After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name
Lab 13 Solutions: Architecture


Introduction
In this lab, you explore some of the components of the VxVM architecture by using commands to control the VxVM configuration daemon. Perform this exercise by using the command line interface.

Displaying Licensing and Supported Version Information
1 Display supported disk group version and daemon protocol information.
# vxdctl support
2 Display all licensed features available for your system.
# vxdctl license

Setup
Before you begin the next exercise, you are going to hide the license key files from your system:
1 Create a new directory called /lic and copy the *.vxlic files from /etc/vx/licenses/lic to /lic. These files represent the license keys for your machine.
# mkdir /lic
# cp /etc/vx/licenses/lic/*.vxlic /lic
2 Remove the *.vxlic files from /etc/vx/licenses/lic.
# rm /etc/vx/licenses/lic/*.vxlic
3 Verify your action by running the command to display licensing information for VERITAS products.
# vxlicrep
No VERITAS products should be licensed now.

Exploring VxVM Architectural Components
1 Stop the VxVM configuration daemon.
# vxdctl stop
2 Run the command to display the VxVM configuration daemon mode. What mode is the configuration daemon in?
# vxdctl mode
The configuration daemon should be in not-running mode.
3 Start the VxVM configuration daemon. Were you successful? Why or why not?
# vxconfigd
You are not able to start the configuration daemon, because VxVM cannot find the license information.
4 Install the VxVM licenses by using the license files that you saved (see the note after step 9).
# vxlicinst
5 Create a 100-MB mirrored volume. Are you successful? Why or why not?
# vxassist -g diskgroup make volume_name 100m layout=mirror
You are not able to create a volume, because the configuration daemon (vxconfigd) is not accessible.
6 Run the command to display the VxVM configuration daemon mode. What mode is the configuration daemon in?
# vxdctl mode
The configuration daemon should be in disabled mode.
7 Enable the VxVM configuration daemon.
# vxdctl enable
8 Try to create a 100-MB mirrored volume again. Are you successful?
# vxassist -g diskgroup make volume_name 100m layout=mirror
You should be able to create a volume, because the configuration daemon (vxconfigd) is now accessible.
9 Remove any volumes that you created. For each volume:
# vxedit -g diskgroup -rf rm volume_name
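Regarding step 4: because the setup copied the key files intact to /lic, an equivalent way to restore licensing in this classroom scenario is simply to copy them back (a sketch):

# cp /lic/*.vxlic /etc/vx/licenses/lic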
Lab 14 Solutions: Introduction to Recovery


Introduction
In this practice, you explore VxVM logging behavior and perform a variety of basic recovery operations. Perform this lab by using the command line interface. In some of the steps, the commands are provided for you.

Setup
For this lab, you should have at least four disks (datadg01 through datadg04) in a disk group called datadg. If your root disk is mirrored, you may need to unmirror the root disk and add the free disk to the datadg disk group. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Exploring Logging Behavior
1 Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog.
# vxassist -g datadg make vollog 500m layout=mirror
# vxassist -g datadg make volnolog 500m layout=mirror
2 Add a log to the volume vollog.
# vxassist -g datadg addlog vollog
3 Create a file system on both volumes.
# mkfs -F vxfs /dev/vx/rdsk/datadg/volnolog
# mkfs -F vxfs /dev/vx/rdsk/datadg/vollog
4 Create mount points for the volumes, /vollog and /volnolog.
# mkdir /vollog
# mkdir /volnolog
5 Copy /etc/vfstab to a file called origvfstab.
# cp /etc/vfstab /origvfstab
6 Edit /etc/vfstab so that vollog and volnolog are mounted automatically on reboot. (In the /etc/vfstab file, each field should be separated by a tab; example entries appear at the end of this exercise.) Type mountall to mount the vollog and volnolog volumes.
# mountall
7 As root, start an I/O process on each volume. For example:
# find /usr -print | cpio -pmud /vollog &
# find /usr -print | cpio -pmud /volnolog &
8 Press Stop-A. At the ok prompt, type boot.
ok boot
9 After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/needsync mode.
# vxprint -thf vollog volnolog
10 Run the vxstat command. This utility displays statistical information about volumes and other VxVM objects. For more information on this command, see the vxstat(1M) manual page.
# vxstat -g datadg -fab vollog volnolog
The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice?
You should notice that fewer I/O operations were required to resynchronize vollog. The log keeps track of data that needs to be resynchronized.
11 Stop the VxVM configuration daemon.
# vxdctl stop
12 Create a 100-MB mirrored volume. What happens?
# vxassist -g datadg make testvol 100m layout=mirror
The task fails, because the configuration daemon is not running.
13 As root, start I/O on vollog by using the following command. Are you successful? Why or why not?
# find /etc -print | cpio -pmud /vollog &
You can start I/O on the volume, because I/O to an existing volume does not rely on the configuration daemon running.
14 Start the VxVM configuration daemon.
# vxconfigd
15 Unmount both file systems and remove the volumes vollog and volnolog.
# umount /vollog
# umount /volnolog
# vxedit -rf rm vollog volnolog
16 Restore your original vfstab file.
# cp /origvfstab /etc/vfstab
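Returning to step 6: the two /etc/vfstab entries would look like the following, each as a single line with tab-separated fields (the final "-" means no extra mount options):

/dev/vx/dsk/datadg/vollog    /dev/vx/rdsk/datadg/vollog    /vollog    vxfs  2  yes  -
/dev/vx/dsk/datadg/volnolog  /dev/vx/rdsk/datadg/volnolog  /volnolog  vxfs  2  yes  -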
Removing a Disk from VxVM Control
1 Create a 100-MB, mirrored volume named recvol. Create and mount a file system on the volume.
# vxassist -g datadg make recvol 100m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/datadg/recvol
# mkdir /recvol
# mount -F vxfs /dev/vx/dsk/datadg/recvol /recvol
2 Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
# vxprint -thf
For example, the volume recvol uses datadg02 and datadg04:
         Device     Disk Media Name
Disk 1   c1t2d0s2   datadg02
Disk 2   c1t3d0s2   datadg04

3 Remove one of the disks that is being used by the volume.
# vxdg -g datadg -k rmdisk datadg02
4 Confirm that the disk was removed.
# vxdisk list
5 From the command line, check that the state of one of the plexes is DISABLED and REMOVED.
# vxprint -thf
In VEA, the disk is shown as disconnected, because one of the plexes is unavailable.
6 Replace the disk back into the disk group.
# vxdg -g datadg -k adddisk datadg02=c1t2d0
7 Check the status of the disks. What is the status of the disks?
# vxdisk list
The status of the disks is ONLINE.
8 Display volume information. What is the state of the plexes?
# vxprint -thf
The plex you removed is marked RECOVER.
9 In VEA, what is the status of the disks? What is the status of the volume?
The disk is reconnected and shows that it contains a volume that is recoverable. Select the volume in the left pane, and click the Mirrors tab in the right pane. The plex is marked recoverable.
10 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.
# vxrecover
In VEA, the status of the plex changes to Recovering, and eventually to Attached. With vxprint, the status of the plex changes to STALE and eventually to ACTIVE.

Replacing Physical Drives (Without Hot Relocation)
For this exercise, use the mirrored volume, recvol, that you created in the previous exercise. The volume is in the disk group datadg.
1 Stop vxrelocd using ps and kill, in order to stop hot relocation from taking place.
# ps -e | grep vx
# kill -9 pid1 pid2
Note: There are two vxrelocd processes. You must kill both of them at the same time.
2 Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. (The fmthard -d argument has the form part:tag:flag:start:size; writing zero-length entries over slices 3 and 4 wipes the private and public regions of a sliced VxVM disk.) In the commands, substitute the appropriate disk device name for one of the disks in use by recvol:
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
# fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
3 An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root. Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/datadg/recvol &
4 When the error occurs, view the status of the disks from the command line.
# vxdisk list
The physical device is no longer associated with the disk media name and the disk group.
5 View the status of the volume from the command line.
# vxprint -thf
The plex displays a status of DISABLED NODEVICE.
6 In VEA, what is the status of the disks and volume?
The disk is disconnected, and the volume has a disconnected plex.
7 Rescan for all attached disks:
# vxdctl enable
8 Recover the disk by replacing the private and public regions on the disk:
# vxdisksetup -i c1t2d0
Note: This method for recovering the disk is used only because of the way in which the disk was failed (by writing over the private and public regions). In most real-life situations, you do not need to perform this step.
9 Bring the disk back under VxVM control:
# vxdg -g datadg -k adddisk datadg02=c1t2d0
10 Check the status of the disks and the volume.
# vxdisk list
# vxprint -thf
11 From the command line, recover the volume.
# vxrecover
12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
# vxdisk list
# vxprint -thf
13 Unmount the file system and remove the volume.
# umount /recvol
# vxassist -g datadg remove volume recvol
Exploring Spare Disk Behavior
1 You should have four disks (datadg01 through datadg04) in the disk group datadg. Set all disks to have the spare flag on (a loop form of these commands is sketched at the end of this exercise).
# vxedit -g datadg set spare=on datadg01
# vxedit -g datadg set spare=on datadg02
# vxedit -g datadg set spare=on datadg03
# vxedit -g datadg set spare=on datadg04
2 Create a 100-MB mirrored volume called sparevol.
# vxassist -g datadg make sparevol 100m layout=mirror
Is the volume successfully created? Why or why not?
No, the volume is not created, and you receive the error:
cannot allocate space for size block volume
The volume is not created, because all disks are set as spares, and vxassist and VEA do not find enough free space to create the volume.
3 Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
# vxassist -g datadg make sparevol 100m layout=mirror datadg03 datadg04
Notice that VxVM overrides its default and applies the two spare disks to the volume, because the two disks were specified by the administrator.
4 Remove the volume.
# vxedit -g datadg -rf rm sparevol
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows:
# vxrelocd root &
6 Remove the spare flags from three of the four disks.
# vxedit -g datadg set spare=off datadg01
# vxedit -g datadg set spare=off datadg02
# vxedit -g datadg set spare=off datadg03
7 Create a 100-MB concatenated mirrored volume called spare2vol.
# vxassist -g datadg make spare2vol 100m layout=mirror
8 Save the output of vxprint -thf to a file.
# vxprint -thf > savedvxprint
9 Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail. Open a console screen.
For example, the volume spare2vol uses datadg02 and datadg04:
         Device Name   Disk Media Name
Disk 1   c1t2d0s2      datadg02
Disk 2   c1t3d0s2      datadg04

10 Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. In the commands, substitute the appropriate disk device name:
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2
# fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
11 An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root. Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/datadg/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you saved earlier. What has occurred?
Hot relocation has taken place. The failed disk has a status of NODEVICE. VxVM has relocated the mirror of the failed disk onto the designated spare disk.
13 In VEA, view the disks.
Notice that the disk is in the disconnected state.
14 Run vxdisk list. What do you notice?
The disk is displayed as a failed disk.
15 Rescan for all attached disks.
# vxdctl enable
16 In VEA, view the status of the disks and the volume.
Highlight the volume and click each of the tabs in the right pane. You can also select Actions>Volume View and Actions>Disk View to view status information.
17 View the status of the disks and the volume from the command line.
# vxdisk list
# vxprint -thf
18 Recover the disk by replacing the private and public regions on the disk.
# vxdisksetup -i c1t2d0
19 Bring the disk back under VxVM control and into the disk group.
# vxdg -g datadg -k adddisk datadg02=c1t2d0
20 In VEA, undo hot relocation for the disk.
Right-click the disk group and select Undo Hot Relocation. In the dialog box, select the disk for which you want to undo hot relocation and click OK. After the task has completed, the alert on the disk group should be removed.
Alternatively, from the command line, run:
# vxunreloc -g datadg datadg02
21 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
# vxdisk list
# vxprint -thf
22 Reboot and then remove the volume.
# vxedit -rf rm spare2vol
23 Turn off any spare flags on your disks that you set during this lab.
# vxedit -g datadg set spare=off datadg04
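As noted in step 1, the repeated vxedit spare-flag commands (steps 1, 6, and 23) can be collapsed into a shell loop. A sketch that turns the flag on for all four disks; use spare=off in the same way to clear it:

# for d in datadg01 datadg02 datadg03 datadg04
> do
> vxedit -g datadg set spare=on $d
> done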

Restoring a Lost Volume (Optional)
For this exercise, ensure that you have a disk group named datadg that contains at least three disks.
1 Create three simple volumes, each 50 MB in size, called lostvol1, lostvol2, and lostvol3 on any disks in datadg. Mirror lostvol3 on another disk.
# vxassist -g datadg make lostvol1 50m
# vxassist -g datadg make lostvol2 50m
# vxassist -g datadg make lostvol3 50m layout=mirror
2 Save the disk group configuration by using the vxprint command.
# vxprint -g datadg -hmvpsQqr > backup.datadg
3 Display what you saved for backup.
# more backup.datadg
# vxprint -D - -rht < backup.datadg
4 Remove the volume lostvol3.
# vxedit -rf rm lostvol3
5 Restore the volume, plex, and subdisk objects for lostvol3:
# vxprint -D - -rhtmqQ lostvol3 < backup.datadg > restorevol3
# vxmake -g datadg -d restorevol3
6 Run vxprint -rth. What do you notice?
You notice that the volume is restored, but not started.
7 Recover the volume.
# vxrecover -Es lostvol3 &
8 Run vxprint -rth to verify that the original volume is started and is resynchronizing its mirrors.
# vxprint -ht
Disk Group Backup and Restoration (Optional)
Setup: Use the disk group and volumes from the previous section. If you skipped that section, ensure that you have a disk group named datadg that contains at least three disks. Prepare the volumes as follows:
Create three simple volumes, each 50 MB in size, called lostvol1, lostvol2, and lostvol3 on any disks in datadg. Mirror lostvol3 on another disk.
# vxassist -g datadg make lostvol1 50m
# vxassist -g datadg make lostvol2 50m
# vxassist -g datadg make lostvol3 50m layout=mirror
Save the disk group configuration by using the vxprint command.
# vxprint -g datadg -hmvpsQqr > backup.datadg
Display what you saved for backup.
# more backup.datadg
# vxprint -D - -rht < backup.datadg

1 Destroy the entire disk group.
# vxdg destroy datadg
2 Re-create the disk group by initializing its former disks and adding them to the group.
Important: Use the same disk group name, disk names, and device names.
Initialize each disk:
# vxdisksetup -i device_tag
Re-create the group with the first disk, and then add each remaining disk to the group:
# vxdg init datadg disk_name=device_tag
# vxdg -g datadg adddisk disk_name=device_tag
3 Restore each volume one at a time. For each volume:
# vxprint -D - -rhtmqQ lostvol1 < backup.datadg > restorevol1
# vxmake -g datadg -d restorevol1
4 Run vxprint -rth. What do you notice?
You notice that the volumes are restored, but not started.
5 Recover the volumes. For each volume:
# vxrecover -Es lostvol1 &
6 Run vxprint -ht to verify that the volumes and disk group are restored successfully.
# vxprint -ht
Lab 15 Solutions: Disk Problems and Solutions


Overview
In this lab, you practice recovering from a variety of disk failure scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine which steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup
Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
Note: You may need to destroy disk groups created in other labs (for example, datadg) in order to create the testdg disk group.
2 Before running the automated lab scripts, set the DG environment variable in your /.profile to the name of the test disk group that you are using:
# DG=testdg; export DG
Rerun your profile by logging out and logging back on, or by manually running it.
3 Ask your instructor for the location of the lab scripts.
Recovering from Temporary Disk Failure
In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the redundant and nonredundant volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 1, Turned off drive (temporary failure):
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 1
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
Assume that the drive that was turned off and then back on was c1t2d0s2. To recover from the temporary failure:
a Ensure that the operating system recognizes the device:
# drvconfig
# disks
Note: Because you have not changed the SCSI location of the drive, running the first two commands (drvconfig and disks) may not be necessary. However, running these commands verifies the existence and validity of the disk label (VTOC). In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.
b Verify that the operating system recognizes the device:
# prtvtoc /dev/rdsk/c1t2d0s2
c Force the VxVM configuration daemon to reread all of the drives in the system:
# vxdctl enable
d Reattach the device to the disk media record:
# vxreattach
e Recover the volumes:
# vxrecover
f Start the nonredundant volume:
# vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
Recovering from Permanent Disk Failure
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 2, Power failed drive (permanent failure):
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 2
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. The disk is detached by VxVM.
3 In a second terminal window, replace the permanently failed drive, either with a new disk at the same SCSI location or with another disk at another SCSI location. Then, recover the volumes.
Assume that the failed disk is testdg02 (c1t2d0s2) and the new disk used to replace it is c1t3d0s2, which is originally uninitialized. To recover from the permanent failure:
a Initialize the new drive:
# vxdisksetup -i c1t3d0
b Attach the disk media name (testdg02) to the new drive:
# vxdg -g testdg -k adddisk testdg02=c1t3d0s2
c Recover the volumes:
# vxrecover
d Start the nonredundant volume:
# vxvol -g testdg -f start test2
Alternatively, you can use the vxdiskadm menu interface:
a Invoke vxdiskadm:
# vxdiskadm
b From the vxdiskadm main menu, select option 5, Replace a failed or removed disk. When prompted, select c1t3d0 to initialize and replace testdg02.
c Start the nonredundant volume:
# vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use a disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.
Recovering from Intermittent Disk Failure (1)
In this lab exercise, intermittent disk failures are simulated, but the system is still OK. Your goal is to move data from the failing drive and remove the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 3, Intermittent Failures (system still ok):
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 3
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
2 Read the instructions in the lab script window. You are informed that the disk drive used by both volumes is experiencing intermittent failures that must be addressed.
3 In a second terminal window, move the data on the failing disk to another disk, and remove the failing disk.
Assume that testdg02 (c1t2d0s2, with plex test1-01 from the mirrored volume test1) is the drive experiencing intermittent problems. To recover:
a Set the read policy to read from a preferred plex that is not on the failing drive before evacuating the disk. This technique prevents VxVM from accessing the failing drive during a read:
# vxvol -g testdg rdpol prefer test1 test1-02
b Evacuate data from the failing drive to one or more other drives by using the vxdiskadm menu interface. Invoke vxdiskadm:
# vxdiskadm
c From the vxdiskadm main menu, select option 7, Move volumes from a disk. Evacuate the volumes on testdg02 to another disk in the disk group, such as testdg03.
d Remove the failing disk by using the vxdiskadm menu interface. From the vxdiskadm main menu, select option 3, Remove a disk. Remove the disk testdg02.
e Set the volume read policy back to the original read policy:
# vxvol -g testdg rdpol select test1
Note: In this exercise, you still succeed even if you do not change the read policy.
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use a disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.
Recovering from Intermittent Disk Failure (2)
In this lab exercise, intermittent disk failures are simulated, and the system has slowed down significantly, so that it is not possible to evacuate data from the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 4, Intermittent Failures (system too slow):
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 4
This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. You are informed that:
One of the disk drives used by the volume is experiencing intermittent failures that need to be addressed immediately.
The system has slowed down significantly, so it is not possible to evacuate the disk before removing it.
3 In a second terminal window, perform the necessary actions to resolve the problem.
Assume that testdg02 (c1t2d0s2, with plex test-01 from the mirrored volume test) is the drive experiencing intermittent problems. To recover:
a Remove the failing disk for replacement by using the vxdiskadm menu interface. Invoke vxdiskadm:
# vxdiskadm
b From the vxdiskadm main menu, select option 4, Remove a disk for replacement. Remove the disk testdg02.
c To ensure that you have an uninitialized disk to use as the replacement, you may need to uninitialize the disk c1t2d0. This step is not part of the recovery; it simulates c1t2d0 being a new disk:
# vxdiskunsetup c1t2d0
d Replace the failed disk with a new disk by using the vxdiskadm menu interface. From the vxdiskadm main menu, select option 5, Replace a failed or removed disk. Select an uninitialized disk to replace testdg02.
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use a disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.
Recovering from Temporary Disk Failure: Layered Volume (Optional)
In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 5, Turned off drive with layered volume:
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 5
This script sets up two volumes:
test1 with a concat-mirror layout
test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes.
Assume that the drive that was turned off and then back on was c1t2d0s2. To recover from the temporary failure:
a Ensure that the operating system recognizes the device:
# drvconfig
# disks
Note: Because you have not changed the SCSI location of the drive, running the first two commands (drvconfig and disks) may not be necessary. However, running these commands verifies the existence and validity of the disk label (VTOC). In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.
b Verify that the operating system recognizes the device:
# prtvtoc /dev/rdsk/c1t2d0s2
c Force the VxVM configuration daemon to reread all of the drives in the system:
# vxdctl enable
d Reattach the device to the disk media record:
# vxreattach
e Recover the volumes:
# vxrecover
f Start the nonredundant volume:
# vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.


Recovering from Permanent Disk Failure: Layered Volume (Optional)
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_disks, and select option 6, Power failed drive with layered volume:
# run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Lab 4 - Intermittent Failures (system too slow)
5) Lab 5 - Turned off drive with layered volume
6) Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 6
This script sets up two volumes:
test1 with a concat-mirror layout
test2 with a concatenated layout
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by both volumes. The disk is detached by VxVM.


3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at a different SCSI location. Then, recover the volumes. Assume that the failed disk is testdg02 (c1t2d0s2) and the new disk used to replace it is c1t3d0s2, which is originally uninitialized. To recover from the permanent failure (a quick verification check follows step 5):
a Initialize the new drive:
# vxdisksetup -i c1t3d0
b Attach the disk media name (testdg02) to the new drive:
# vxdg -g testdg -k adddisk testdg02=c1t3d0s2
c Recover the volumes:
# vxrecover
d Start the nonredundant volume:
# vxvol -g testdg -f start test2
Alternatively, you can use the vxdiskadm menu interface:
a Invoke vxdiskadm:
# vxdiskadm
b From the vxdiskadm main menu, select Option 5, Replace a failed or removed disk. When prompted, select c1t3d0 to initialize and replace testdg02.
c Start the nonredundant volume:
# vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.
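A quick verification before typing e (the ENABLED and ACTIVE states are the same ones these labs check throughout):
# vxdisk list
# vxprint -g testdg -ht
The disk media name testdg02 should now map to the replacement device, and both test1 and test2 should show ENABLED and ACTIVE.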


Lab 16 Solutions: Plex Problems and Solutions


Overview
In this lab, you practice recovering from a variety of plex problem scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.
Setup
Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03. (See the sketch after this list.)
2 Before running the automated lab scripts, set the DG environment variable in your /.profile to the name of the test disk group that you are using:
# DG=testdg; export DG
Rerun your profile by logging out and logging back on, or by manually running it.
3 Ask your instructor for the location of the lab scripts.
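A CLI sketch of setup step 1 (the device names c1t1d0, c1t2d0, and c1t3d0 are placeholders; substitute three free disks on your system):
# vxdisksetup -i c1t1d0
# vxdisksetup -i c1t2d0
# vxdisksetup -i c1t3d0
# vxdg init testdg testdg01=c1t1d0
# vxdg -g testdg adddisk testdg02=c1t2d0
# vxdg -g testdg adddisk testdg03=c1t3d0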


Resolving Plex Problems: Temporary Failure
In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 1, Turned off drive (temporary failure):
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 1
This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.


3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex is already in the STALE state before the drive fails.
Assume that the drive that was turned off and then back on was c1t2d0s2 (with the plex test-01). The plex test-02 was STALE prior to the failure of the disk with the plex test-01. When the disk is powered back on and reattached, the plex test-01 continues to contain the most up-to-date data. To recover:
a Ensure that the operating system recognizes the device:
# drvconfig
# disks
Note: In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.
b Verify that the operating system recognizes the device:
# prtvtoc /dev/rdsk/c1t2d0s2
c Force the VxVM configuration daemon to reread all of the drives in the system:
# vxdctl enable
d Reattach the device to the disk media record:
# vxreattach
e Change the state of plex test-01 to STALE:
# vxmend -g testdg fix stale test-01
f Change the state of plex test-01 to CLEAN:
# vxmend -g testdg fix clean test-01
g Recover and start the volume:
# vxrecover -s
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
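Between the vxmend steps, you can watch the plex states change with the same display used elsewhere in these labs (a quick check, not part of the graded recovery):
# vxprint -g testdg -ht test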


Resolving Plex Problems: Permanent Failure
In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 2, Power failed drive (permanent failure):
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 2
This script sets up a mirrored volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.


3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at a different SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. However, it still has your data, but data from the last ten minutes is missing.
Assume that the failed disk is testdg02 (c1t2d0s2) with plex test-01, and the new disk used to replace it is c1t3d0s2, which is originally uninitialized. Because the newly replaced disk has no data on it, you can only use the stale plex test-02 to recover the volume. To recover from the permanent disk failure:
a Invoke vxdiskadm:
# vxdiskadm
b From the vxdiskadm main menu, select Option 5, Replace a failed or removed disk. When prompted, select c1t3d0 to initialize and replace testdg02.
c Change the state of plex test-02 to CLEAN:
# vxmend -g testdg fix clean test-02
d Recover and start the volume:
# vxrecover -s
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


Resolving Plex Problems: Unknown Failure
In this lab exercise, an unknown failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 3, Unknown failure:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 3
This script sets up a mirrored volume named test that has three plexes.
2 Read the instructions in the lab script window. The script simulates an unknown failure that causes all plexes to be set to the STALE state. You are not provided with information about the cause of the problem with the plexes.


3 In a second terminal window, check each plex individually to determine if it has the correct data. To test if the plex has correct data, start the volume using that plex, and then, in the lab script window, press Return. The script output displays a message stating whether or not the plex has the correct data. Continue this process for each plex, until you determine which plex has the correct data.
Because all three plexes of the volume test are STALE, and you do not know which plex contains the good data, you must offline all but one plex and check to determine if that plex has the good data. If it is the correct plex, you can recover the volume. If it is not the correct plex, repeat the offlining of all but one plex to check the other plexes.
a Start by checking the data on test-01:
# vxmend -g testdg off test-02
# vxmend -g testdg off test-03
# vxmend -g testdg fix clean test-01
# vxvol -g testdg start test
b Press Return on the output of the script. The script tests the data. If the plex does not have the good data, continue by checking the data on test-02:
# vxvol -g testdg stop test
# vxmend -g testdg -o force off test-01
# vxmend -g testdg on test-02
# vxmend -g testdg fix clean test-02
# vxvol -g testdg start test
c Press Return on the output of the script. The script tests the data. If this plex has the good data, you do not need to search any further.
d To recover the volume:
# vxmend -g testdg on test-01
# vxmend -g testdg on test-03
# vxrecover
4 After you determine which plex has the correct data, recover the volume.
5 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.


Resolving Plex Problems: Temporary Failure with a Layered Volume (Optional)
In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 4, Turned off drive with layered volume:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 4
This script sets up a concat-mirror volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when you are ready to power the disk back on, the script replaces the partitions as they were before the failure.


3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex is already in the STALE state before the drive fails.
Assume that the drive that was turned off and then back on was c1t2d0s2 (with the plex test-P01). The plex test-P02 was STALE prior to the failure of the disk with the plex test-P01. When the disk is powered back on and reattached, the plex test-P01 continues to contain the most up-to-date data. To recover:
a Ensure that the operating system recognizes the device:
# drvconfig
# disks
Note: In Solaris 7 and later, you can use devfsadm, a one-command replacement for drvconfig and disks.
b Verify that the operating system recognizes the device:
# prtvtoc /dev/rdsk/c1t2d0s2
c Force the VxVM configuration daemon to reread all of the drives in the system:
# vxdctl enable
d Reattach the device to the disk media record:
# vxreattach
e Change the state of plex test-P01 to STALE:
# vxmend -g testdg fix stale test-P01
f Change the state of plex test-P01 to CLEAN:
# vxmend -g testdg fix clean test-P01
g Recover and start the volume:
# vxrecover -s
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.


Resolving Plex Problems: Permanent Failure with a Layered Volume (Optional)
In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
# DG=testdg
# export DG

1 From the directory that contains the lab scripts, run the script run_states, and select option 5, Power failed drive with layered volume:
# run_states
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Unknown failure
4) Lab 4 - Turned off drive with layered volume
5) Lab 5 - Power failed drive with layered volume
x) Exit
Your Choice? 5
This script sets up a concat-mirror volume named test.
2 Read the instructions in the lab script window. The script simulates a disk power-off by removing the private and public regions from the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.


3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at a different SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. This plex still has your data, but data from the last ten minutes is missing.
Assume that the failed disk is testdg02 (c1t2d0s2) with plex test-P01, and the new disk used to replace it is c1t3d0s2, which is originally uninitialized. Because the newly replaced disk has no data on it, you can only use the stale plex test-P02 to recover the volume. To recover from the permanent disk failure:
a Invoke vxdiskadm:
# vxdiskadm
b From the vxdiskadm main menu, select Option 5, Replace a failed or removed disk. When prompted, select c1t3d0 to initialize and replace testdg02.
c Change the state of plex test-P02 to CLEAN:
# vxmend -g testdg fix clean test-P02
d Recover and start the volume:
# vxrecover -s
4 After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if you did not use the disk at the same SCSI location for the replacement disk, reinitialize the disk and add it to the testdg disk group so that you can use it in later labs.


On Your Own: Exploring Mirror Resynchronization (Optional)
This exercise provides an additional opportunity to explore mirror resynchronization processes. (A command sketch follows this list.)
1 Create a three-way concatenated mirrored volume of 200 MB, and run the process in the background.
2 Run vxprint -ht volume.
3 Note the states of the volumes and plexes during synchronization.
4 Run vxtask monitor and note the type of synchronization being performed.
5 When the synchronization is finished, vxprint -ht volume should display the volume and its plexes as ACTIVE.
6 Stop the volume and change all plexes to the STALE state.
7 Set the first two plexes to the ACTIVE state, and leave the third plex as STALE.
8 Run vxprint again and note the volume's new state.
9 Start the volume in the background and run vxtask monitor.
10 How many synchronizations are performed in the volume, and what types of synchronization are performed?
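One possible command sequence for this exercise (a sketch; the volume name resyncvol and the default plex names resyncvol-01 through resyncvol-03 are illustrative assumptions, not part of the exercise):
# vxassist -g testdg -b make resyncvol 200m layout=mirror nmirror=3
# vxprint -ht resyncvol
# vxtask monitor
# vxvol -g testdg stop resyncvol
# vxmend -g testdg fix stale resyncvol-01
# vxmend -g testdg fix stale resyncvol-02
# vxmend -g testdg fix stale resyncvol-03
# vxmend -g testdg fix active resyncvol-01
# vxmend -g testdg fix active resyncvol-02
# vxprint -ht resyncvol
# vxrecover -g testdg -sb resyncvol
# vxtask monitor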


Lab 17 Solutions: Encapsulation and Root Disk Mirroring


Introduction
In this practice, you create a root mirror, disable the root disk, and boot up from the mirror. Then you boot up again from the root disk, break the mirror, and remove the boot disk from rootdg. Finally, you reencapsulate the root disk and re-create the mirror. These tasks are performed using the VEA interface, the vxdiskadm tool, and CLI commands.
Encapsulation and Root Disk Mirroring
1 Use vxdiskadm to place another disk in rootdg. This disk should be the same size as (or larger than) the root disk. After completing this step, you should have two disks in rootdg: the boot disk and the new disk.
Select vxdiskadm option 1, Add or initialize one or more disks, and follow the steps to add a disk to the rootdg disk group.
2 From the command line, set the eeprom variable to enable VxVM to create a device alias in the openboot program. (A quick check of this setting follows step 5.)
# eeprom use-nvramrc?=true
3 Use vxdiskadm to mirror the root volumes. This process can take a few minutes depending on the size of the disk.
Select vxdiskadm option 6, Mirror volumes on a disk, and follow the steps to mirror the disk.
In what order are the volumes mirrored? Alphabetical order
Check to determine if rootvol is enabled and active. Hint: Use vxprint and examine the STATE fields.
# vxprint -thf
The rootvol should be in the ENABLED and ACTIVE state, and you should also see two plexes for each of the volumes in rootdg.
4 To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command is used to make changes to configuration records. Here, you are using the command to place the plex in an offline state. For more information about this command, see the vxmend (1m) manual page.
# vxmend off rootvol-01
5 Verify that rootvol-01 is now disabled and offline.
# vxprint -thf
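To confirm the eeprom setting from step 2 and look for the device alias that VxVM creates during mirroring, you can query the OBP variables from the running system (a quick check; the alias name vx-disk01 is only an example and depends on your mirror disk name):
# eeprom use-nvramrc?
use-nvramrc?=true
# eeprom nvramrc
The nvramrc contents should include a devalias entry such as vx-disk01 for the mirror disk.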

6 To change the plex to a STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state.
# vxmend on rootvol-01
# vxprint -thf
7 Reboot the system using init 6.
# init 6
8 At the OK prompt, check for available boot disk aliases.
OK> devalias
Use the available boot disk alias to boot up from the alternate boot disk. For example:
OK> boot vx-disk01
9 Verify that rootvol-01 is now in the ENABLED and ACTIVE state.
Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE.
# vxprint -thf
You have successfully booted up from the mirror.
10 To boot up from the original boot disk, reboot again using init 6.
# init 6
You have now booted up from the original boot disk.
11 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in rootdg).
For each volume in rootdg, remove all of the newly created mirrors. More specifically, for each volume, two plexes are displayed, and you should remove the newer (-02) plexes from each volume. To remove a mirror, highlight a volume and select Actions>Mirror>Remove.
12 Run the command to convert the root volumes back to disk partitions.
# vxunroot
13 Shut down the system when prompted.
14 Verify that the mount points are now slices rather than volumes.
# df -k


15 Use the vxdiskadm menu to reencapsulate the boot disk and restart.
Important: You must specify the device as c0t0d0 and the disk name as rootdisk, or else VxVM will use a default name, such as disk02.
In the vxdiskadm main menu, select option 2, Encapsulate one or more disks. Follow the prompts, specifying the disk group as rootdg, the device as c0t0d0, and the disk name as rootdisk. When prompted, encapsulate and reboot.
16 Using VEA, mirror rootdisk.
Highlight rootdisk and select Actions>Mirror Disk.
At the end of this lab, you should have rootdisk as the boot disk and another disk in rootdg that is a mirror of the boot disk.
Troubleshooting Tip
Problem
If you do not add a disk to rootdg prior to attempting unencapsulation of the boot disk, volumes are converted back to slices, and the disk is still in rootdg. At this point, you are not able to encapsulate, because the disk is in a disk group, and you cannot rerun vxunroot. This problem is caused by not having another disk in rootdg to hold a copy of the rootdg configuration.
Solution
1 Add a disk to rootdg.
2 Remove the boot disk from rootdg.
3 You can now encapsulate the boot disk.


Lab 18 Solutions: VxVM, Boot Disk, and rootdg Recovery


Overview
In this lab, you practice recovering from encapsulated boot disk failure scenarios. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script simulates a failure in the encapsulated boot disk (and its mirror, if required) and reboots the system.
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. You succeed when you solve the problem with the boot disk and boot to multiuser mode.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.
Setup
In this lab, the automated lab scripts prompt you to reboot the system. If the reboot fails, ask your instructor how to bring the system down.
1 These labs require the system disk to be encapsulated. If your system disk is not encapsulated, you must encapsulate it before proceeding with this lab.
2 You must have at least one additional disk that is the same size as (or larger than) your boot disk. You are instructed to create a mirror of the boot disk in the second exercise.
3 Ask your instructor for the location of the lab scripts.


Recovering from Encapsulated, Unmirrored Boot Disk Failure
In this lab exercise, you attempt to recover from encapsulated, unmirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 This lab requires that the system disk is encapsulated, but not mirrored. If your system disk is mirrored, then remove the mirror.
2 Save a copy of the /etc/system file to /etc/system.preencap, as shown in the sketch after this list. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
...
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
...
3 From the directory that contains the lab scripts, run the script run_root, and select option 1, Encapsulated, unmirrored boot disk failure:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 1
4 Follow the instructions in the lab script window. This script causes the only plex in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up due to the STALE plex.
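A minimal sketch of step 2 (vi is just one editor choice; the lines to comment out must match the VxVM entries actually present in your /etc/system):
# cp /etc/system /etc/system.preencap
# vi /etc/system.preencap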


5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode. To recover:
a When the system fails to boot up, use the command boot -a from the ok prompt:
ok> boot -a
b Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
Name of system file [etc/system]: etc/system.preencap

c When you are in maintenance mode, check the state of rootvol by using the vxprint command:
# vxprint -g rootdg -ht
You should notice that rootvol is not started (DISABLED mode), and the only plex it has (rootvol-01) is STALE.
d To recover:
# vxmend fix clean rootvol-01
# vxvol start rootvol
# reboot


Recovering from Encapsulated, Mirrored Boot Disk Failure (1)
In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
...
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
...
3 From the directory that contains the lab scripts, run the script run_root, and select option 2, Encapsulated, mirrored boot disk failure - 1:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 2
4 Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up due to the STALE plexes.


5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode. To recover:
a When the system fails to boot up, use the command boot -a from the ok prompt:
ok boot -a
b Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
Name of system file [etc/system]: etc/system.preencap

c When you are in maintenance mode, check the state of rootvol by using the vxprint command:
# vxprint -g rootdg -ht
You should notice that rootvol is not started (DISABLED mode), and both plexes (rootvol-01 and rootvol-02) are STALE.
d To recover:
# vxmend fix clean rootvol-01
If you do not want to wait for the mirrors to resynchronize before booting up to multiuser mode, you can offline the second plex and then continue:
# vxmend off rootvol-02
Otherwise, the stale plex is resynchronized from the clean plex when you start the volume.
e Start the volume rootvol and reboot:
# vxvol start rootvol
# reboot
f After you boot up to multiuser mode, online the second plex and recover:
# vxmend on rootvol-02
# vxrecover


Recovering from Encapsulated, Mirrored Boot Disk Failure (2) (Optional)
In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
...
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
...
3 From the directory that contains the lab scripts, run the script run_root, and select option 3, Encapsulated, mirrored boot disk failure - 2:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 3
4 Follow the instructions in the lab script window. This script causes one of the plexes in rootvol to change to the STALE state. The clean plex is missing the /kernel directory, so you cannot boot up the system without recovery. When you are ready, the script reboots the system.


5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
In this lab, the original system disk fails to boot because the /kernel directory is missing on the rootvol-01 plex, although the rootvol-01 plex is in a CLEAN state. (This directory has been renamed as /kernel.bak.) The mirror disk fails to boot because the second plex is in STALE mode. To recover, you must boot up on the mirror disk using the partition rather than the volume.
Assume that the mirror of the system disk is called disk01 and you have already changed the use-nvramrc? parameter to true while you were mirroring the system disk, so that VxVM created the devalias vx-disk01 for the mirror disk.
a Run the following command at the ok prompt:
ok boot vx-disk01 -a
b Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
Name of system file [etc/system]: etc/system.preencap

c When you are in maintenance mode, check the state of rootvol by using the vxprint command:
# vxprint -g rootdg -ht
You should notice that the rootvol is not started (DISABLED mode), the plex rootvol-01 is ACTIVE, and the plex rootvol-02 is STALE.
d To recover using rootvol-01:
# vxmend off rootvol-02
# vxvol start rootvol
# mount -F ufs /dev/vx/dsk/rootvol /mnt
Note: When you run this command, ignore any errors reported about not being able to write to /etc/mnttab.
# cd /mnt
# mv kernel.bak kernel
# cd /
# umount /mnt
# reboot
e Once you boot up to multiuser mode, online the second plex and recover:
# vxmend on rootvol-02
# vxrecover

Notes
In a real-world environment, because the kernel directory is totally missing, you must copy it from the partition that you booted up on (in step d):
# mkdir /mnt/kernel
# cd /kernel; tar cf - . | (cd /mnt/kernel; tar xfBp -)
You could also recover by using the second plex rootvol-02, by offlining the first plex rootvol-01 and setting the second plex rootvol-02 to CLEAN before rebooting. However, if the second plex became STALE before you lost the kernel directory on the first plex, this plex does not contain the most up-to-date data for recovery.


Recovering from Encapsulated, Mirrored Boot Disk Failure (3) (Optional)
In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing.
2 Create an emergency boot disk by following the procedures presented in the lesson.
3 From the directory that contains the lab scripts, run the script run_root, and select option 4, Encapsulated, mirrored boot disk failure - 3:
# run_root
1) Lab 1 - Encapsulated, unmirrored boot disk failure
2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
3) Lab 3 - Encapsulated, mirrored boot disk failure - 2
4) Lab 4 - Encapsulated, mirrored boot disk failure - 3
x) Exit
Your Choice? 4
4 Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. Both plexes are missing the /kernel directory, so you cannot bring up the system without recovery. When you are ready, the script reboots the system.


5 Recover the volume rootvol by using the emergency boot disk that you created before the failure. You succeed when the system boots up to multiuser mode.
In this lab, both the system disk and its mirror fail to boot up, because both plexes of rootvol are in STALE mode. You cannot boot on the partitions using the command boot -a, because both plexes are missing the /kernel directory. Therefore, you must have an emergency boot disk to boot the system.
a Boot the system using the emergency boot disk that you created.
b After booting on the emergency boot disk, run the following commands to recover:
# vxmend off rootvol-02
# vxmend fix clean rootvol-01
# vxvol start rootvol
# mount -F ufs /dev/vx/dsk/rootvol /mnt
# mkdir /mnt/kernel
# cd /kernel; tar cf - . | (cd /mnt/kernel; tar xfBp -)
# cd /
# umount /mnt
# reboot
Note: For this lab, you can also rename the kernel.bak directory to kernel after you mount rootvol to /mnt, instead of copying the kernel directory from the emergency boot disk.
c After you boot up to multiuser mode, online the second plex and recover:
# vxmend on rootvol-02
# vxrecover


Alternative Solution
If you have a net device to boot from, you can also use the boot net command with the following procedure (a command-level sketch follows this list):
a Run boot net.
b Run fsck /dev/rdsk/c0t0d0s0.
c Mount it on /mnt.
d Rename kernel.bak to kernel.
e Reboot the box.
f When you are in maintenance mode, set one plex of rootvol to CLEAN.
g Reboot again. The system should come up with all volumes recovered.
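A command-level sketch of this alternative (it assumes the root slice is c0t0d0s0, as in step b, and a ufs root file system):
ok boot net
# fsck /dev/rdsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /mnt
# cd /mnt; mv kernel.bak kernel
# cd /; umount /mnt
# reboot
From maintenance mode, set one plex of rootvol to CLEAN (for example, vxmend fix clean rootvol-01), and then reboot again.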


Lab 19 Solutions: Administering DMP (Optional)


Introduction
In this lab, you explore the performance and redundancy benefits of Volume Manager's dynamic multipathing (DMP) functionality. In this lab, you become familiar with the use of VxVM's device discovery layer (DDL) utility, vxddladm, the DMP management utility, vxdmpadm, and DMP-related options of vxdiskadm.
You demonstrate DMP's ability to automatically detect a failed path and manage its I/O accordingly by disabling and reenabling a DMP channel from the command line (to simulate a DMP controller failure) and by observing DMP's actions through benchmarking utility output.
In this lab, you also measure the performance benefits of VxVM's DMP by:
1 Setting up volumes with file systems and flooding them with various types of workloads and I/O
2 Recording the results of performance tests
3 Disabling one of the configured DMP paths
4 Running performance tests again, without using DMP, to note the differences
Setup
This lab requires that you use the two Sparc systems connected to the Winchester Systems FlashDisk RAID array. The instructor will configure NRAID (no hardware RAID) on all disks for this lab. The array is also capable of several forms of hardware RAID. If you are interested in learning more about the Winchester Systems FlashDisk RAID array, visit www.winsys.com. Ask your instructor if you have any questions related to setup.
To prepare for the lab:
Ensure all SCSI and power cables are securely connected to and from the array before starting.
Ensure that you have a minimum of four disks in the array, not including the root disk.


Verifying DMP Activation
1 Unarchive the VERITAS benchmarking utility, vxbench.
# zcat /vxbench.tar.Z | tar xvfp -
2 Run format and make sure all disks in the array are configured and displayed correctly.
3 Edit the /kernel/drv/vxdmp.conf file as follows:
name="vxdmp" parent="pseudo" instance=0
dmp_jbod="WINSYS";
4 When the system comes up, log on to CDE and verify whether JBODs are currently supported on the system by using VxVM's device discovery layer utility.
# vxddladm listjbod
vxvm:vxddladm: INFO: No JBODs are supported on the system
If JBODs are not supported, add support by using the DDL utility and specifying the vendor ID, WINSYS. Use the DDL utility again to verify that support is added.
# vxddladm addjbod vid=WINSYS
# vxddladm listjbod
VID     PID       Opcode  Page Code  Page Offset  SNO length
============================================================
WINSYS  ALL PIDs  18      -1         36           12
5 Run the following commands:
# devfsadm
# vxdctl enable
Notice that you do not have to reboot the system during the process of activating DMP for this array.
6 Add four disks to a disk group called flashdg. Verify your action using vxdisk list.
In VEA, select the Disk Groups node, and select Actions>New Dynamic Disk Group. In the New Dynamic Disk Group wizard, specify the disk group name as flashdg and select four available disks to add to the disk group. Initialize the disks and complete the wizard.
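Before continuing, you can confirm that DMP now sees both paths to the array (a quick check; vxdmpadm is the same DMP administration utility used later in this lab):
# vxdmpadm listctlr all
The output lists each controller and its current state; both c0 and c1 should appear as enabled.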


7 On one disk, verify that active/active DMP is enabled on the disk by using the command line.
# vxdisk list disk2
Device: c0t2d0s2
...
Multipathing information:
numpaths: 2
c0t2d0s2 state=enabled
c1t2d0s2 state=enabled
You are now ready to use active/active DMP with the array.
DMP Benchmark Testing: High Availability Benefits
1 Create two 8000-MB, simple (concatenated) volumes on the first and second disks in the disk group, respectively.
# vxassist -g diskgroup make volume1 8000m disk1
# vxassist -g diskgroup make volume2 8000m disk2
2 Create and mount VxFS file systems on each volume using the mount points /flash1 and /flash2.
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume1
# mkdir /flash1
# mount -F vxfs /dev/vx/dsk/diskgroup/volume1 /flash1
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume2
# mkdir /flash2
# mount -F vxfs /dev/vx/dsk/diskgroup/volume2 /flash2

3 Open two terminal windows on the system. In one window, run the following:
# iostat -nM -l 7 3
Note: Try various options to iostat in order to view the disk devices being used for DMP. See the iostat manual page for more information.
4 In the other terminal window, run the mount command to verify that the two file systems you created are still mounted. If so, run the following set of commands on the first file system, mounted at /flash1:
# for i in 1 2 3 4 5 6 7 8 9
> do
> mkfile 500m /flash1/testfile$i &
> done


5 In the output of iostat, observe the megabytes per second (Mps) and transactions per second (tps) columns for each controller path that is receiving I/O. What do you notice?
You should notice that while DMP is enabled, both paths are supporting the workload. Each path is supporting approximately 50 percent of the workload.
Sample result output of iostat (the multipage listing is condensed here to the representative steady-state figures for the two paths that carry the volume I/O):
# iostat -nM -l 7 3
        c0t1d0              c1t1d0
   Mps   tps  serv     Mps   tps  serv
...
    10   161     7      10   160     7
...

6 Next, simulate physical path failure by manually disabling one of the DMP paths.
# vxdmpadm disable ctlr=c1
You can also use vxdiskadm option 17, Suppress all devices on a controller from DMP, select the c1 controller, and answer y to all confirmation prompts.


7 Observe the change in the output of the running iostat command, and pinpoint where the change occurs.
(Condensed sample output; after the c1 controller is disabled, all of the I/O shifts to the surviving path.)
        c0t1d0              c1t1d0
   Mps   tps  serv     Mps   tps  serv
...
    15   249     7       0     0     0
...

8 Reenable the failed DMP path manually by using vxdmpadm, and observe the changes in iostat output. DMP should begin accessing the other path within a few seconds.
# vxdmpadm enable ctlr=c1


DMP Benchmark Testing: Workload Balancing/Performance Gains
Before you start: Review the sample output from the previous section. Compute the total transactions per second in one three-second interval from the first output (when both DMP paths were enabled) and compare it to any similar three-second interval from the second output (when one DMP path was disabled). What is the difference in total transactions per second when you disable DMP? What is the difference in total megabytes per second throughput when you disable DMP?
                            c0t1d0    c1t1d0
First output using DMP:     10 161    10 160    161 + 160 = 321 tps
Second output without DMP:  15 249     0   0    249 + 0 = 249 tps
249 / 321 = 0.7757
1.0000 - 0.7757 = 0.22, that is, disabling DMP costs about 22 percent of the total throughput (equivalently, using DMP yields roughly a 29 percent gain: 321 / 249 = 1.29).
Verify that active/active DMP is enabled by running:
# vxdisk list disk_name

See if you can achieve a 20 percent or greater increase in throughput in the lab below.
1 On the second mounted file system, use vxbench to sequentially write a test file, called benchfile1, and note the output:
# vxbench -w write -i iosize=8k,iocount=131072 /flash2/benchfile1
user 1: 0.032 sec 252.68 KB/s cpu: 0.00 sys 0.01 user
user 2: 0.064 sec 124.60 KB/s cpu: 0.00 sys 0.00 user
user 3: 0.034 sec 232.69 KB/s cpu: 0.01 sys 0.00 user
total: 0.067 sec 357.22 KB/s cpu: 0.01 sys 0.01 user

2 Now read the benchmark file back with vxbench and note the output:
# vxbench -w read -i iosize=8k,iocount=131072 /flash2/benchfile1
user 1: 0.001 sec 6606.11 KB/s cpu: 0.00 sys 0.00 user
user 2: 0.001 sec 6462.04 KB/s cpu: 0.00 sys 0.00 user
user 3: 0.011 sec 727.01 KB/s cpu: 0.01 sys 0.00 user
total: 0.020 sec 1198.14 KB/s cpu: 0.01 sys 0.00 user


3 Run the following command, which copies several small files between directories:
# time cp -r /opt /flash2/opt$i
4 Disable one of the DMP paths by using vxdiskadm.
Run vxdiskadm. You may have to press Return to display all of the options in the main menu. Select option 17, Prevent multipathing/suppress devices from VxVM's view. Answer y when prompted. Select option 1, Suppress all paths through a controller from VxVM's view. Type c1. Answer y when prompted and press Return. Exit from vxdiskadm. Do not reboot.
5 Repeat steps one through three above. What do you notice?
Performance is much better when active/active DMP is being used.
6 Use similar vxdiskadm options to reenable DMP.
More Practice (Optional)
1 Unmount one of the file systems and run several simultaneous block-level dumps on its raw volume. First perform this test with DMP disabled.
# umount /flash1
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &
# time dd if=/dev/zero of=/dev/vx/rdsk/flashdg/flashvol &

2 Reenable DMP and run the tests again.

After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name


Lab 20 Solutions: Controlling Users (Optional)


Introduction
This lab enables you to practice setting user quotas and creating ACLs.
Set Up
1 Begin with a clean file system. In a disk group named datadg, create a 1-GB volume called quotavol. Create and mount a VERITAS file system on the volume at the mount point /fs_quota.
# vxassist -g datadg make quotavol 1g
# mkfs -F vxfs /dev/vx/rdsk/datadg/quotavol
# mkdir /fs_quota
# mount -F vxfs /dev/vx/dsk/datadg/quotavol /fs_quota
2 Create the group training:
a Open the Admintool utility:
# admintool &
b In the Browse menu, select Groups to display a list of groups.
c In the Edit menu, select Add to open the Add Group dialog box.
d In the Add Group dialog box, create a new group by specifying:
Group Name: training
Group ID: 101
Member list: root
Note: The group ID should already be set to 101.
e Click OK.
3 Create four users for the group training:
a In the Admintool utility, from the Browse menu, select Users to display a list of users.
b In the Edit menu, select Add to open the Add User dialog box.
c In the Add User dialog box, create a new user by specifying:
User Name: user1
Primary Group: 101
Login Shell: Korn
Home Directory Path: /fs_quota/user1/home
d Repeat this process for three more users with the names user2, user3, and user4.
e Set the passwords of all four users to veritas by using the passwd command. For each user:
# passwd user1
Changing password for user1


user1's New password: veritas
Enter the new password again: veritas

Using Quotas
1 Create the files required for managing quotas for a file system.
# touch /fs_quota/quotas
# touch /fs_quota/quotas.grp
2 Turn on quotas for the file system.
# /opt/VRTSvxfs/sbin/vxquotaon /fs_quota
3 Invoke the quota editor for the user with the username user1.
# /opt/VRTSvxfs/sbin/vxedquota user1
4 Modify the quotas file to specify a hard limit of 200 blocks and 20 inodes and a soft limit of 100 blocks and 10 inodes.
fs /fs_quota blocks (soft=100, hard=200) inodes (soft=10, hard=20)
5 Modify the time limit to be one minute.
# /opt/VRTSvxfs/sbin/vxedquota -t
fs /fs_quota blocks time limit = 1 min, files time limit = 1 min
6 Verify the quotas for the user user1.
# /opt/VRTSvxfs/sbin/vxquota -v user1
The output displayed contains lines similar to:
Disk quotas for user1 (uid 1002):
Filesystem   usage  quota  limit  timeleft  files  quota  limit  timeleft
/fs_quota        1    100    200                2     10     20

7 In order to test the quota limits that you set, you must log on as user1:
a Set read, write, and execute permissions for user1 on /fs_quota.
b Log off and log back on as user1.
c When prompted, type a password for user1.
d Relog on as user1 using your new password.
e After you log on, go to the file system that has the quotas set.
8 Test the quota limits that you set by creating files that exceed the disk usage limits. Delete the files between each test. (A sketch of these tests follows this list.)
To test the soft block limit, create or copy a file of size greater than 100K and less than 200K.
To test the hard block limit, create or copy a file of size greater than 200K.
To test the soft inode limit, use touch to create 11 empty files.
To test the hard inode limit, use touch to create 21 files.
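One possible set of commands for these tests, run from the user1 shell (a sketch; the file names are arbitrary, and it assumes 1K quota blocks, so 150K crosses the 100-block soft limit and 250K exceeds the 200-block hard limit):
$ mkfile 150k /fs_quota/user1/softtest
$ mkfile 250k /fs_quota/user1/hardtest
$ for i in 1 2 3 4 5 6 7 8 9 10 11
> do
> touch /fs_quota/user1/inode$i
> done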


9 Log off and log back on as root and turn off quotas for the VERITAS file system mounted at /fs_quota.
# /opt/VRTSvxfs/sbin/vxquotaoff /fs_quota
10 Exit from the superuser account and log back on as user1. Test the quota limits again, using the same tests as in step 8. What happens?
You should now be allowed to create the files, because the quota limits are turned off.
11 Log out and log back on as root.
ACLs
1 Create a file called file01 on the file system /fs_quota.
# touch /fs_quota/file01
2 Add an ACL entry to file01 that gives user user1 read permission only.
# setfacl -m user:user1:r-- /fs_quota/file01
3 View the ACLs for file01 to verify that the ACL entry was created.
# getfacl /fs_quota/file01
4 Create a new file called file02 on the file system /fs_quota.
# touch /fs_quota/file02
5 View the ACLs for file02.
# getfacl /fs_quota/file02
6 Set the same ACL on file02 as the one on file01 using the standard input.
# getfacl /fs_quota/file01 | setfacl -f - /fs_quota/file02
7 Confirm that the same ACLs are set on file02 as on file01.
# getfacl /fs_quota/file02
After Completing This Lab
Unmount the file systems and remove the volumes used in this lab.
# umount mount_point
# vxassist -g diskgroup remove volume volume_name



VxVM Command Quick Reference

Locations of VERITAS Volume Manager Commands


This section lists command directory locations and descriptions for VERITAS Volume Manager commands. For more information on specific commands, see the VERITAS Volume Manager manual pages.
Most VERITAS-specific commands are installed in the directories:
/usr/sbin
/usr/lib/vxvm/bin
/etc/vx/bin (a link to /usr/lib/vxvm/bin)
Add these directories to your PATH environment variable to access the commands. The online manual pages are installed in the /opt/VRTS/man directory. This directory must be added to the MANPATH environment variable.
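For example, in the Bourne or Korn shell (a sketch; append to, rather than overwrite, any values already set in your profile):
# PATH=$PATH:/usr/sbin:/usr/lib/vxvm/bin:/etc/vx/bin; export PATH
# MANPATH=$MANPATH:/opt/VRTS/man; export MANPATH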
Command        Location             Description
vea            /opt/VRTSob/bin/     VEA startup script
vradmin        /usr/sbin/           Administers VERITAS Volume Replicator (VVR)
vrnotify       /usr/sbin/           Displays VVR events
vrport         /usr/sbin/           Manages VVR ports
vxapslice      /etc/vx/bin/         Manages an area of disk for use by an AP database
vxassist       /usr/sbin/           Creates and administers volumes
vxbootsetup    /usr/lib/vxvm/bin/   Sets up boot information on a VxVM disk
vxclustadm     /usr/lib/vxvm/bin/   Starts, stops, and reconfigures clusters
vxconfigd      /usr/sbin/           VxVM configuration daemon
vxdarestore    /usr/lib/vxvm/bin/   Restores simple or nopriv disk access records
vxdco          /usr/sbin/           Administers DCO objects and DCO volumes
vxdctl         /usr/sbin/           Controls the VxVM configuration daemon
vxddladm       /usr/sbin/           Administers device discovery layer
vxdg           /usr/sbin/           Manages VxVM disk groups
vxdisk         /usr/sbin/           Defines and manages VxVM disks
vxdiskadd      /usr/sbin/           Adds disks for use with VxVM
vxdiskadm      /usr/sbin/           Menu-driven VxVM disk administrator
vxdiskconfig   /usr/sbin/           Configures devices for VxVM control
vxdisksetup    /usr/lib/vxvm/bin/   Configures disks for use with VxVM
vxdiskunsetup  /usr/lib/vxvm/bin/   Deconfigures a VxVM disk
vxdmpadm       /usr/sbin/           Administers the dynamic multipathing subsystem
vxedit         /usr/sbin/           Creates, removes, and modifies VxVM records
vxencap        /usr/lib/vxvm/bin/   Encapsulates partitions on a new disk
vxevac         /usr/lib/vxvm/bin/   Evacuates all volumes from a disk
vxibc          /usr/sbin/           Performs VVR in-band control messaging operations
vxinfo         /usr/sbin/           Displays accessibility and usability of volumes
vxinstall      /usr/sbin/           Menu-driven VxVM initial configuration utility
vxiod          /usr/sbin/           Starts, stops, and reports on VxVM kernel daemons
vxlicense      /usr/sbin/           VERITAS license key utility (prior to VxVM 3.5)

vxlicinst      /opt/VRTS/bin/       VERITAS license key installer (VxVM 3.5 and later)
vxlicrep       /opt/VRTS/bin/       VERITAS license key reporter (VxVM 3.5 and later)
vxlictest      /opt/VRTS/bin/       VERITAS license key tester (VxVM 3.5 and later)
vxmake         /usr/sbin/           Creates VxVM configuration records
vxmemstat      /usr/sbin/           Displays VxVM memory statistics
vxmend         /usr/sbin/           Mends simple configuration record problems
vxmirror       /usr/lib/vxvm/bin/   Mirrors volumes on a disk
vxnotify       /usr/sbin/           Displays VxVM configuration events
vxplex         /usr/sbin/           Performs VxVM operations on plexes
vxprint        /usr/sbin/           Displays VxVM configuration records
vxr5check      /usr/lib/vxvm/bin/   Verifies RAID-5 volume parity
vxreattach     /usr/lib/vxvm/bin/   Reattaches drives that have become accessible
vxrecover      /usr/sbin/           Performs volume recovery operations
vxrelayout     /usr/sbin/           Converts online storage from one layout to another
vxrelocd       /usr/lib/vxvm/bin/   Monitors failure events and relocates subdisks
vxresize       /usr/lib/vxvm/bin/   Changes the length of a volume with a file system
vxrlink        /usr/sbin/           Performs VxVM operations on RLINKs
vxrootmir      /usr/lib/vxvm/bin/   Mirrors areas needed for booting to a new disk
vxrvg          /usr/sbin/           Performs VxVM operations on RVGs
vxsd           /usr/sbin/           Performs VxVM operations on subdisks
vxsparecheck   /usr/lib/vxvm/bin/   Monitors VxVM for failures and replaces failed disks
vxspcshow      /usr/sbin/           Adds SAN access layer (SAL) details
vxstat         /usr/sbin/           Manages VxVM statistics
vxsvc          /opt/VRTSob/bin/     Starts and stops the VEA server
vxtask         /usr/sbin/           Lists and administers VxVM tasks
vxtrace        /usr/sbin/           Performs trace operations on volumes
vxunreloc      /usr/lib/vxvm/bin/   Moves hot-relocated subdisks back to original disks
vxunroot       /usr/lib/vxvm/bin/   Removes VxVM hooks for rootable volumes
vxvol          /usr/sbin/           Performs VxVM operations on volumes


VxVM Command Quick Reference


This section contains some frequently used commands and options described in the VERITAS Volume Manager for Solaris training. For more information on specific commands, see the VERITAS Volume Manager manual pages.
Disk Operations

- Initialize disk:
    vxdisksetup -i device
    vxdiskadd device
    or vxdiskadm, option 1 (Add or initialize one or more disks)
- Uninitialize disk: vxdiskunsetup device
- List disks: vxdisk list
- List disk header: vxdisk list diskname|device
- Evacuate a disk: vxevac -g diskgroup from_disk to_disk
- Rename a disk: vxedit -g diskgroup rename oldname newname
- Set a disk as a spare: vxedit -g diskgroup set spare=on|off diskname
- Unrelocate a disk: vxunreloc -g diskgroup original_diskname

Disk Group Operations

- Create disk group: vxdg init diskgroup diskname=device
- Add disk to disk group: vxdg -g diskgroup adddisk diskname=device
- Deport disk group: vxdg deport diskgroup
- Import disk group: vxdg import diskgroup
- Destroy disk group: vxdg destroy diskgroup
- List disk groups: vxdg list
- List specific disk group details: vxdg list diskgroup
- Remove disk from disk group: vxdg -g diskgroup rmdisk diskname
- Upgrade disk group version: vxdg [-T version] upgrade diskgroup
- Move an object between disk groups: vxdg move sourcedg targetdg object...
- Split objects between disk groups: vxdg split sourcedg targetdg object...
- Join disk groups: vxdg join sourcedg targetdg
- List objects affected by a disk group move operation: vxdg listmove sourcedg targetdg object...


Subdisk Operations

- Create a subdisk: vxmake -g diskgroup sd subdisk_name diskname,offset,length
- Remove a subdisk: vxedit -g diskgroup rm subdisk_name
- Display subdisk info: vxprint -st; vxprint -l subdisk_name
- Associate a subdisk to a plex: vxsd assoc plex_name subdisk_name
- Dissociate a subdisk: vxsd dis subdisk_name

Plex Operations

- Create a plex: vxmake -g diskgroup plex plex_name sd=subdisk_name,...
- Associate a plex (to a volume): vxplex -g diskgroup att vol_name plex_name
- Dissociate a plex: vxplex dis plex_name
- Remove a plex: vxedit -g diskgroup rm plex_name
- List all plexes: vxprint -lp
- Detach a plex: vxplex -g diskgroup det plex_name
- Attach a plex: vxplex -g diskgroup att vol_name plex_name

Volume Operations

- Create a volume:
    vxassist -g diskgroup make vol_name size layout=format diskname
    or vxmake -g diskgroup vol vol_name len=size plex=plex_name
- Remove a volume:
    vxedit -g diskgroup -rf rm vol_name
    or vxassist -g diskgroup remove volume vol_name
- Display a volume:
    vxprint -g diskgroup -vt vol_name
    vxprint -g diskgroup -l vol_name
- Change volume attributes:
    vxedit -g diskgroup set attribute=value vol_name
    vxvol -g diskgroup set attribute=value vol_name


- Resize a volume:
    vxassist -g diskgroup growto vol_name new_length
    vxassist -g diskgroup growby vol_name length_change
    vxassist -g diskgroup shrinkto vol_name new_length
    vxassist -g diskgroup shrinkby vol_name length_change
    vxresize -g diskgroup vol_name [+|-]length
- Change volume read policy:
    vxvol -g diskgroup rdpol round vol_name
    vxvol -g diskgroup rdpol prefer vol_name preferred_plex_name
    vxvol -g diskgroup rdpol select vol_name
- Start a volume: vxvol start vol_name
- Start all volumes: vxvol startall
- Start all volumes in a disk group: vxvol -g diskgroup startall
- Stop a volume: vxvol stop vol_name
- Stop all volumes: vxvol stopall
- Recover a volume: vxrecover -sn vol_name
- List unstartable volumes: vxinfo [vol_name]
- Mirror an existing volume:
    vxassist -g diskgroup mirror vol_name
    or vxmake -g diskgroup plex plex_name sd=subdisk_name
       vxplex -g diskgroup att vol_name plex_name
- Create a snapshot volume:
    vxassist -g diskgroup -b snapstart vol_name
    vxassist -g diskgroup snapshot vol_name new_volume
- Abort a snapshot: vxassist -g diskgroup snapabort orig_vol_name
- Reassociate a snapshot: vxassist -g diskgroup snapback snapshot_vol
- Dissociate a snapshot: vxassist -g diskgroup snapclear snapshot_vol
- Print snapshot information: vxassist -g diskgroup snapprint vol_name
- Relayout a volume: vxassist -g diskgroup relayout vol_name layout=new_layout [attributes...]
- Convert to or from a layered layout: vxassist -g diskgroup convert vol_name layout=new_layout [attributes...]
- Add a log to a volume: vxassist -g diskgroup addlog vol_name
- Create and mount a VxFS file system on a volume:
    mkfs -F vxfs /dev/vx/rdsk/diskgroup/vol_name
    mkdir /mount_point
    mount -F vxfs /dev/vx/dsk/diskgroup/vol_name /mount_point
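For example, a typical snapshot backup cycle built from the commands above (a sketch; the disk group, volume, and snapshot names are illustrative):

# vxassist -g datadg -b snapstart datavol
# vxassist -g datadg snapshot datavol snapvol
(back up snapvol with your backup tool)
# vxassist -g datadg snapback snapvol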


DMP, DDL, and Task Management

Manage tasks:
- List tasks: vxtask list
- Monitor tasks: vxtask monitor

Manage the device discovery layer (DDL):
- List supported disk arrays: vxddladm listsupport
- Exclude support for an array:
    vxddladm excludearray libname=library
    vxddladm excludearray vid=vid pid=pid
- Reinclude support:
    vxddladm includearray libname=library
    vxddladm includearray vid=vid pid=pid
- List excluded arrays: vxddladm listexclude
- List supported JBODs: vxddladm listjbod
- Add/remove JBOD support:
    vxddladm addjbod vid=vid pid=pid
    vxddladm rmjbod vid=vid pid=pid

Manage dynamic multipathing (DMP):
- List controllers on the system: vxdmpadm listctlr all
- Display subpaths: vxdmpadm getsubpaths ctlr=ctlr
- Display DMP nodes: vxdmpadm getdmpnode nodename=nodename
- Enable/disable I/O to a controller:
    vxdmpadm enable ctlr=ctlr
    vxdmpadm disable ctlr=ctlr
- Display enclosure attributes: vxdmpadm listenclosure all
- Rename an enclosure: vxdmpadm setattr enclosure orig_name name=new_name
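For example, a short DMP session built from the commands above (a sketch; c1 stands for whatever controller name vxdmpadm listctlr reports on your system):

# vxdmpadm listctlr all
# vxdmpadm getsubpaths ctlr=c1
# vxdmpadm disable ctlr=c1
# vxdmpadm enable ctlr=c1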


Using VxVM Commands: Examples


Initialize disks c1t0d0, c1t1d0, c1t2d0, c2t0d0, c2t1d0, and c2t2d0:

# vxdisksetup -i c1t0d0
# vxdisksetup -i c1t1d0
# vxdisksetup -i c1t2d0
# vxdisksetup -i c2t0d0
# vxdisksetup -i c2t1d0
# vxdisksetup -i c2t2d0
Create a disk group named datadg and add the six disks:

# vxdg init datadg datadg01=c1t0d0 datadg02=c1t1d0 datadg03=c1t2d0
# vxdg -g datadg adddisk datadg04=c2t0d0 datadg05=c2t1d0 datadg06=c2t2d0
Using a top-down technique, create a RAID-5 volume named datavol of size 2 GB on the six disks (5 + log). Also, create and mount a UFS file system on the volume:

# vxassist -g datadg make datavol 2g layout=raid5 datadg01 datadg02 datadg03 datadg04 datadg05 datadg06
# newfs /dev/vx/rdsk/datadg/datavol
# mkdir /datamnt
# mount /dev/vx/dsk/datadg/datavol /datamnt
Remove the volume datavol:

# umount /datamnt
# vxvol stop datavol
# vxedit -g datadg -r rm datavol


Using a bottom-up technique, create a two-way mirrored volume named datavol02 of size 1 GB using disks datadg01 and datadg04. 1 GB = 2097152 sectors. Subdisks should be cylinder aligned: if the disk uses 1520 sectors per cylinder, 2097152 sectors rounds up to 1380 whole cylinders, so the subdisk size is 1380 x 1520 = 2097600 sectors.

# vxmake -g datadg sd sd01 datadg01,0,2097600
# vxmake -g datadg sd sd02 datadg04,0,2097600
# vxmake -g datadg plex plex01 sd=sd01:0/0
# vxmake -g datadg plex plex02 sd=sd02:0/0
# vxmake -g datadg -U fsgen vol datavol02 plex=plex01,plex02
# vxvol start datavol02
Change the permissions of the volume so that dba is the owner and dbgroup is the group:

# vxedit -g datadg set user=dba group=dbgroup mode=0744 datavol02


Destroy the volume and remove the disks from the disk group datadg. Also, remove disks from Volume Manager control:

# vxedit -g datadg -rf rm datavol02
# vxdg -g datadg rmdisk datadg01 datadg02 datadg03 datadg04 datadg05
# vxdg deport datadg
# vxdiskunsetup c1t1d0
# vxdiskunsetup c1t2d0
# vxdiskunsetup c1t3d0...
Advanced vxmake Operation: Create a three-way striped volume:

# vxmake -g acctdg sd sd01 acctdg01,0,1520000
# vxmake -g acctdg sd sd02 acctdg02,0,1520000
# vxmake -g acctdg sd sd03 acctdg03,0,1520000
# vxmake -g acctdg plex plex1 layout=stripe ncolumn=3 stwidth=64k sd=sd01:0/0,sd02:1/0,sd03:2/0
# vxmake -g acctdg -U fsgen vol datavol05 plex=plex1
# vxvol -g acctdg start datavol05


Advanced vxmake Operation: Create a RAID 0+1 volume with a DRL Log:

# vxmake -g acctdg sd sd01 acctdg01,0,194560
# vxmake -g acctdg sd sd02 acctdg02,0,194560
# vxmake -g acctdg sd sd03 acctdg03,0,194560
# vxmake -g acctdg sd sd04 acctdg04,0,194560
# vxmake -g acctdg sd logsd acctdg01,194560,2
# vxmake -g acctdg plex plex1 layout=stripe ncolumn=2 stwidth=64k sd=sd01:0/0,sd02:1/0
# vxmake -g acctdg plex plex2 layout=stripe ncolumn=2 stwidth=64k sd=sd03:0/0,sd04:1/0
# vxmake -g acctdg plex logplex log_sd=logsd
# vxmake -g acctdg -U fsgen vol datavol06 plex=plex1,plex2,logplex
# vxvol -g acctdg start datavol06


Appendix D: VxFS Command Quick Reference

Locations of VERITAS File System Commands


This section lists command directory locations and descriptions for VERITAS File System commands. For more information on specific commands, see the VERITAS File System manual pages.

Most VERITAS-specific commands are installed in three directories:
    /opt/VRTSvxfs/sbin
    /usr/lib/fs/vxfs
    /etc/fs/vxfs

Add these directories to your PATH environment variable to access the commands. The online manual pages are installed in the /opt/VRTS/man directory; add this directory to your MANPATH environment variable.
Command      Location            Description
cfscluster   /opt/VRTSvxfs/sbin  CFS cluster configuration command
cfsdgadm     /opt/VRTSvxfs/sbin  Adds or deletes shared disk groups in cluster configurations
cfsmntadm    /opt/VRTSvxfs/sbin  Adds, deletes, or modifies CFS policies
cfsmount     /opt/VRTSvxfs/sbin  Mounts a shared volume on CFS nodes
cfsumount    /opt/VRTSvxfs/sbin  Unmounts a shared volume on CFS nodes
cp           /opt/VRTSvxfs/sbin  VxFS-specific copy command
cpio         /opt/VRTSvxfs/sbin  VxFS-specific cpio command
df           /usr/lib/fs/vxfs    Reports the number of free disk blocks and inodes for a VxFS file system
ff           /usr/lib/fs/vxfs    Lists file names and inode information for a VxFS file system
fsadm        /opt/VRTSvxfs/sbin  Resizes or reorganizes a VxFS file system
fscat        /opt/VRTSvxfs/sbin  Cats a VxFS file system
fsck         /usr/lib/fs/vxfs    Checks and repairs a VxFS file system
fsckptadm    /opt/VRTSvxfs/sbin  VxFS Storage Checkpoint administration utility
fsclustadm   /opt/VRTSvxfs/sbin  Manages cluster-mounted VxFS file systems
fsdb         /usr/lib/fs/vxfs    VxFS file system debugger
fstyp        /usr/lib/fs/vxfs    Returns the type of file system on a specified disk partition
getext       /opt/VRTSvxfs/sbin  Gets extent attributes for a VxFS file system
glmconfig    /sbin               Group Lock Manager (GLM) configuration utility
ls           /opt/VRTSvxfs/sbin  VxFS-specific list command
mkfs         /usr/lib/fs/vxfs    Constructs a VxFS file system
mount        /etc/fs/vxfs        Mounts a VxFS file system
mv           /opt/VRTSvxfs/sbin  VxFS-specific move command
ncheck       /usr/lib/fs/vxfs    Generates path names from inode numbers for a VxFS file system
qioadmin     /opt/VRTSvxfs/sbin  VxFS Quick I/O for Databases cache administration utility
qiomkfile    /opt/VRTSvxfs/sbin  Creates a VxFS Quick I/O device file
qiostat      /opt/VRTSvxfs/sbin  VxFS Quick I/O for Databases statistics utility
qlogadm      /opt/VRTSvxfs/sbin  Low-level IOCTL utility for the QuickLog driver


Command       Location            Description
qlogattach    /etc/fs/vxfs        Attaches a previously formatted QuickLog volume to a QuickLog device
qlogck        /etc/fs/vxfs        Recovers QuickLog devices during the boot process
qlogclustadm  /opt/VRTSvxfs/sbin  Administers cluster QuickLog devices
qlogdb        /opt/VRTSvxfs/sbin  QuickLog debugging tool
qlogdetach    /opt/VRTSvxfs/sbin  Detaches a QuickLog volume from a QuickLog device
qlogdisable   /opt/VRTSvxfs/sbin  Remounts a VxFS file system with QuickLog logging disabled
qlogenable    /opt/VRTSvxfs/sbin  Remounts a VxFS file system with QuickLog logging enabled
qlogmk        /opt/VRTSvxfs/sbin  Creates and attaches a QuickLog volume to a QuickLog device
qlogprint     /opt/VRTSvxfs/sbin  Displays records from the QuickLog configuration
qlogrec       /etc/fs/vxfs        Recovers the QuickLog configuration file during a system failover
qlogrm        /opt/VRTSvxfs/sbin  Removes a QuickLog volume from the configuration file
qlogstat      /opt/VRTSvxfs/sbin  Prints statistics for running QuickLog devices, QuickLog volumes, and VxFS file systems
qlogtrace     /opt/VRTSvxfs/sbin  Prints QuickLog tracing
setext        /opt/VRTSvxfs/sbin  Sets extent attributes on a file in a VxFS file system
umount_vxfs   /usr/lib/fs/vxfs    Unmounts a VxFS file system
vxdump        /opt/VRTSvxfs/sbin  Incremental file system dump
vxedquota     /opt/VRTSvxfs/sbin  Edits quotas for a VxFS file system
vxfsconvert   /opt/VRTSvxfs/sbin  Converts an unmounted file system to VxFS
vxfsstat      /opt/VRTSvxfs/sbin  Displays VxFS file system statistics
vxlicense     /usr/sbin/          VERITAS license key utility (prior to VxFS 3.5)
vxlicinst     /opt/VRTS/bin/      VERITAS license key installer (VxFS 3.5 and later)
vxlicrep      /opt/VRTS/bin/      VERITAS license key reporter (VxFS 3.5 and later)
vxlictest     /opt/VRTS/bin/      VERITAS license key tester (VxFS 3.5 and later)
vxquot        /opt/VRTSvxfs/sbin  Displays file system ownership summaries for a VxFS file system
vxquota       /opt/VRTSvxfs/sbin  Displays disk quotas and usage on a VxFS file system
vxquotaoff    /opt/VRTSvxfs/sbin  Turns quotas off for a VxFS file system
vxquotaon     /opt/VRTSvxfs/sbin  Turns quotas on for a VxFS file system
vxrepquota    /opt/VRTSvxfs/sbin  Summarizes quotas for a VxFS file system
vxrestore     /opt/VRTSvxfs/sbin  Restores a file system incrementally
vxtunefs      /opt/VRTSvxfs/sbin  Tunes a VxFS file system
vxupgrade     /opt/VRTSvxfs/sbin  Upgrades the disk layout of a VxFS file system

Notes:
- The qio commands have functionality that is only available with the VERITAS Quick I/O for Databases feature.
- The qlog commands have functionality that is only available with the VERITAS QuickLog feature.
- The cfs commands, fsclustadm, glmconfig, and qlogclustadm have functionality that is only available with the VERITAS Cluster File System feature.


Command Quick Reference


This section contains some VxFS commands and examples. For more information on specific commands, see the VERITAS File System manual pages.
Installing VERITAS File System

- View installed license keys: vxlicrep
- Add a license key: vxlicinst
- Install VxFS packages:
    pkgadd -d path_name product_packages...
    # pkgadd -d /cdrom/CD_name/pkgs VRTSvlic VRTSvxfs VRTSfsdoc
- Remove VxFS packages:
    pkgrm product_packages
    # pkgrm VRTSfsdoc VRTSvxfs
- List all installed packages: pkginfo
- List installed VERITAS packages: pkginfo | grep VRTS
- Display package details:
    pkginfo -l product_package
    # pkginfo -l VRTSvxfs

Setting Up a File System

- Create a VERITAS file system:
    mkfs [-F vxfs] [generic_options] [-o specific_options] special [size]
    # mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
    Options:
      -o N             Check the VxFS structure without writing to the device.
      -o largefiles    Create a VxFS that supports large files.
      -o version=n     Create a VxFS with a different layout version. n can be 1, 2, or 4.
      -o bsize=size    Create a VxFS with a specific block size. size is the block size in bytes.
      -o logsize=size  Create a VxFS with a specific logging area size. size is the number of file system blocks to be used for the intent log.
- Mount a VERITAS file system:
    mount [-F vxfs] [generic_options] [-r] [-o specific_options] special mount_point
    # mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata
- List mounted file systems: mount -v
- List mounted file systems in the /etc/vfstab format: mount -p
- Unmount a mounted file system:
    umount special|mount_point
    # umount /mnt
- Unmount all mounted file systems: umount -a
- Determine the file system type:
    fstyp [-v] special
    # fstyp /dev/dsk/c0t6d0s0
- Report free disk blocks and inodes:
    df [-F vxfs] [generic_options] [-o s] [special|mount]
    # df -F vxfs /mnt
- Check the consistency of and repair a file system:
    fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] special
    # fsck -F vxfs /dev/vx/rdsk/datadg/datavol
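For example, a minimal sketch of creating a large-file-capable VxFS file system, mounting it, and making the mount persistent. The /etc/vfstab line follows the standard Solaris field order (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options); the fsck pass number shown is an assumption, so adjust it to your conventions:

# mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/datavol
# mkdir /mydata
# mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata

/etc/vfstab entry:
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /mydata vxfs 2 yes -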


Online Administration

- Resize a VERITAS file system:
    fsadm [-b newsize] [-r rawdev] mount_point
    # /usr/lib/fs/vxfs/fsadm -b 1024000 -r /dev/vx/rdsk/datadg/datavol /mnt
    vxresize [-bsx] [-F vxfs] [-g diskgroup] [-t tasktag] volume new_length [medianame]
    # vxresize -F vxfs -g datadg datavol 5g
    Note: The vxresize command automatically resizes the underlying volume; the fsadm command does not.
- Dump a file system: vxdump [options] mount_point
- Restore a file system: vxrestore [options] mount_point
- Create a snapshot file system:
    mount [-F vxfs] -o snapof=source[,snapsize=size] destination snap_mount_point
    # mount -F vxfs -o snapof=/dev/dsk/c0t6d0s2,snapsize=32768 /dev/dsk/c0t5d0s2 /snapmount
    # mount -F vxfs -o snapof=/dev/vx/dsk/datadg/datavol /dev/vx/dsk/datadg/snapvol /snapmount
- Back up a snapshot file system:
    vxdump [options] snap_mount_point
    # vxdump -cf /dev/rmt/0 /snapmount
- Create a storage checkpoint: fsckptadm [-nruv] create ckpt_name mount_point
- List storage checkpoints: fsckptadm [-clv] list mount_point
- Display a checkpoint name: fsckptadm [-cv] pathinfo path_name
- Remove a storage checkpoint: fsckptadm [-sv] remove ckpt_name mount_point
- Mount a storage checkpoint: mount -F vxfs -o ckpt=ckpt_name pseudo_device mount_point
- Unmount a storage checkpoint: umount mount_point
- Change checkpoint attributes: fsckptadm [-sv] set [nodata|nomount|remove] ckpt_name
- Upgrade the VxFS layout:
    vxupgrade [-n new_version] [-r rawdev] mount_point
    # vxupgrade -n 5 /mnt
- Display the layout version number: vxupgrade mount_point
- Convert a file system to VxFS:
    vxfsconvert [-s size] [-efnNvyY] special
    # vxfsconvert /dev/vx/rdsk/datadg/datavol
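For example, a storage checkpoint life cycle built from the commands above (a sketch; the checkpoint name ckpt1 and the mount points are illustrative, and the pseudo device shown assumes the common special:ckpt_name form, so verify the exact naming on your system):

# fsckptadm create ckpt1 /mydata
# fsckptadm list /mydata
# mkdir /ckptmnt
# mount -F vxfs -o ckpt=ckpt1 /dev/vx/dsk/datadg/datavol:ckpt1 /ckptmnt
# umount /ckptmnt
# fsckptadm remove ckpt1 /mydata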


Benchmarking

- Create different combinations of I/O workloads:
    vxbench -w workload [options] filename ...
    # vxbench -w write -i iosize=8,iocount=131072 /mnt/testfile01
    # vxbench -w rand_write -i iosize=8,iocount=131072,maxfilesize=1048576 /mnt/testfile01
- List vxbench command options: vxbench -h

Workloads: read, write, rand_read, rand_write, rand_mixed, mmap_read, mmap_write

Options:
  -h   Display help
  -P   Use processes and threads (default)
  -p   Use processes
  -t   Use threads
  -m   Lock I/O buffers in memory
  -s   Print summary results
  -v   Print per-thread results
  -k   Print throughput in kbytes/sec
  -M   Print throughput in mbytes/sec
  -i   Specify suboptions

Suboptions to -i:
  nrep=n         Repeat the I/O loop n times
  nthreads=n     Specify the number of threads
  iosize=n       Specify the I/O size (in kilobytes)
  fsync          Perform an fsync on the file
  remove         Remove each file after the test
  iocount=n      Specify the number of I/Os
  reserveonly    Only reserve space for the file
  maxfilesize=n  Maximum offset for random I/O
  randseed=n     Seed value for the random number generator
  rdpct=n        Read percentage of the job mix

Managing Extents

- List file names and inode information:
    ff [-F vxfs] [generic_options] [-o s] special
- Generate path names from inode numbers for a VxFS file system:
    ncheck [-F vxfs] [generic_options] [-o options] special
- Set extent attributes:
    setext [-e extent_size] [-f flags] [-r reservation] file
    Options:
      -e   Specify a fixed extent size.
      -r   Preallocate, or reserve, space for a file.
      -f   Set allocation flags.
    Flags:
      align      Align extents to the start of allocation units.
      chgsize    Add the reservation into the file.
      contig     Allocate the reservation contiguously.
      noextend   The file may not be extended after the reservation is used.
      noreserve  Space reserved is allocated only until the close of the file, and then is freed.
      trim       The reservation is reduced to the current file size after the last close.
- Display extent attributes:
    getext [-f] [-s] file ...
    Options:
      -f   Do not print the filename.
      -s   Do not print output for files without fixed extent sizes or reservations.
- Use extent-aware versions of standard commands, such as mv, cp, and cpio:
    -e warn    Warns if extent information cannot be preserved (default).
    -e force   Causes a copy to fail if extent information cannot be preserved.
    -e ignore  Does not try to preserve extent information.
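For example, to reserve contiguous space for a file and then verify the attributes (a sketch; the file name is illustrative, and the reservation is interpreted in file system blocks, so the space reserved depends on the block size):

# setext -r 16384 -f contig /mnt/dbfile
# getext /mnt/dbfile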


Defragmenting a File System

- Report on directory fragmentation: fsadm -D mount_point
- Report on extent fragmentation: fsadm -E [-l largesize] mount_point
- Defragment, or reorganize, a file system:
    fsadm [-d] [-D] [-e] [-E] [-s] [-v] [-l largesize] [-a days] [-t time] [-p passes] [-r rawdev] mount_point
    # fsadm -edED /mnt1
    Options:
      -d   Reorganize directories.
      -a   Move aged files to the end of the directory. Default is 14 days.
      -e   Reorganize extents.
      -D   Report on directory fragmentation.
      -E   Report on extent fragmentation.
      -v   Report reorganization activity in verbose mode.
      -l   Size of a file that is considered large. Default is 64 blocks.
      -t   Maximum length of time to run, in seconds.
      -p   Maximum number of passes to run. Default is five passes.
      -s   Print a summary of activity at the end of each pass.
      -r   Pathname of the raw device to read to determine file layout and fragmentation; used when fsadm cannot determine the raw device.
- Reorganize a file system to support files larger than 2 GB:
    # fsadm -o largefiles /mnt1
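For example, to report fragmentation and then run a bounded reorganization (a sketch; the one-hour time limit and the mount point are illustrative):

# fsadm -D -E /mnt1
# fsadm -d -e -s -t 3600 /mnt1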

Intent Logging

- Check the consistency of and repair a VERITAS file system. By default, the fsck utility replays the intent log instead of doing a full structural file system check:
    fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] [-o full,nolog] [-o p] special
    # fsck -F vxfs /dev/vx/rdsk/datadg/datavol
    Options:
      -m         Checks, but does not repair, a file system before mounting.
      -n|N       Assumes a response of no to all prompts by fsck.
      -V         Echoes the command line, but does not execute.
      -y|Y       Assumes a response of yes to all prompts by fsck.
      -o full    Perform a full file system check after log replay.
      -o nolog   Do not perform log replay.
      -o p       Check two file systems in parallel.
- Perform a full file system check without the intent log:
    # fsck -F vxfs -o full,nolog special
    # fsck -F vxfs -o full,nolog /dev/vx/rdsk/datadg/datavol
- Alter default logging behavior:
    mount [-F vxfs] [generic_options] [-o specific_options] special mount_point
    # mount -F vxfs -o tmplog /dev/vx/dsk/datadg/datavol /mnt
    Options:
      -o log             All structural changes are logged.
      -o delaylog        Default option in which some logging is delayed.
      -o tmplog          Intent logging is almost always delayed.
      -o nodatainlog     Used on systems without bad block revectoring.
      -o blkclear        Guarantees that storage is initialized before allocation.
      -o logiosize=size  Sets a specific I/O size to be used for logging.


I/O Types and Cache Advisories

- Alter the way in which VxFS handles buffered I/O operations:
    mount -F vxfs [generic_options] -o mincache=suboption special mount_point
    # mount -F vxfs -o mincache=closesync /dev/vx/dsk/datadg/datavol /mnt
    Suboptions: mincache=closesync, mincache=direct, mincache=dsync, mincache=unbuffered, mincache=tmpcache
- Alter the way in which VxFS handles I/O requests for files opened with the O_SYNC and O_DSYNC flags:
    mount -F vxfs [generic_options] -o convosync=suboption special mount_point
    # mount -F vxfs -o convosync=closesync /dev/vx/dsk/datadg/datavol /mnt
    Suboptions: convosync=closesync, convosync=direct, convosync=dsync, convosync=unbuffered, convosync=delay
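For example, to bypass the page cache for I/O on a data volume (a sketch; whether direct I/O helps depends on the workload, as it mainly benefits large sequential transfers that would otherwise pollute the cache):

# mount -F vxfs -o mincache=direct /dev/vx/dsk/datadg/datavol /mnt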


File System Tuning

- Set tuning parameters for mounted file systems:
    vxtunefs [-ps] [-f filename] [-o parameter=value] [{mount_point | block_special}]...
    # vxtunefs -o write_pref_io=32768 /mnt
    Options:
      -f filename  Specifies a parameters file other than the default /etc/vx/tunefstab.
      -p           Prints tuning parameters.
      -s           Sets new tuning parameters.
    Tuning parameters:
      read_ahead              Enables enhanced read ahead to detect patterns.
      read_pref_io            Preferred read request size. Default is 64K.
      read_nstream            Desired number of parallel read requests to have outstanding at one time. Default is 1.
      write_pref_io           Preferred write request size. Default is 64K.
      write_nstream           Desired number of parallel write requests to have outstanding at one time. Default is 1.
      discovered_direct_iosz  I/O requests larger than this value are handled as discovered direct I/O. Default is 256K.
      hsm_write_prealloc      Improves performance when using HSM applications with VxFS.
      initial_extent_size     Default initial extent size, in file system blocks.
      max_direct_iosz         Maximum size of a direct I/O request.
      max_diskq               Maximum disk queue generated by a single file. Default is 1M.
      max_seqio_extent_size   Maximum size of an extent. Default is 2048 file system blocks.
      qio_cache_enable        Enables or disables caching on Quick I/O for Databases files. Default is disabled; to enable caching, set qio_cache_enable=1.
      write_throttle          Limits dirty pages per file that a file system generates before flushing pages to disk.
- Display current tuning parameters:
    vxtunefs mount_point
    # vxtunefs /mnt
- Set read-ahead size:
    Use vxtunefs to set the tuning parameters read_pref_io and read_nstream.
    Read-ahead size = (read_pref_io x read_nstream)
- Set write-behind size:
    Use vxtunefs to set the tuning parameters write_pref_io and write_nstream.
    Write-behind size = (write_pref_io x write_nstream)
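For example, to set a 256K read-ahead window using the formula above (a sketch; 65536 x 4 = 262144 bytes = 256K, and the mount point is illustrative):

# vxtunefs -s -o read_pref_io=65536 /mnt
# vxtunefs -s -o read_nstream=4 /mnt
# vxtunefs /mnt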


Controlling Users

- Create quota files:
    touch /mount_point/quotas
    touch /mount_point/quotas.grp
- Turn on quotas for a mounted file system:
    vxquotaon [-u|-g] mount_point
    # vxquotaon -u /mnt
- Mount a file system and turn on quotas at the same time:
    mount -F vxfs -o quota|usrquota|grpquota special mount_point
    # mount -F vxfs -o quota /dev/dsk/c0t5d0s2 /mnt
- Invoke the quota editor:
    vxedquota username|UID|groupname|GID
    # vxedquota rsmith
- Modify the quota time limit: vxedquota -t
- View quotas for a user:
    vxquota -v username|groupname
    # vxquota -v rsmith
- Display a summary of quotas and disk usage:
    vxrepquota mount_point
    # vxrepquota /mnt
- Display a summary of ownership and usage:
    vxquot mount_point
    # vxquot /mnt
- Turn off quotas for a mounted file system:
    vxquotaoff [-u|-g] mount_point
    # vxquotaoff /mnt
- Set or modify an ACL for a file:
    setfacl [-r] -s acl_entries file
    setfacl [-r] -md acl_entries file
    setfacl [-r] -f acl_file file
    # setfacl -m user:bob:r-- myfile
    # setfacl -d user:scott myfile
    # setfacl -s user::rwx,group::r--,user:maria:r--,mask:rw-,other:--- myfile
    Options:
      -s   Set an ACL for a file.
      -m   Add new or modify ACL entries for a file.
      -d   Remove an ACL entry for a user.
    Elements in an ACL entry: entry_type:[uid|gid]:permissions
      entry_type   Entry type: user, group, other, or mask.
      uid|gid      User or group name or identification number.
      permissions  Read, write, and/or execute, indicated by rwx.
- Display ACL entries for a file:
    getfacl filename
    # getfacl myfile
- Copy existing ACL entries from one file to another file:
    getfacl file1 | setfacl -f - file2
    # getfacl myfile | setfacl -f - newfile
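For example, an end-to-end user quota setup built from the commands above (a sketch; the mount point and user name are illustrative):

# touch /mnt/quotas
# touch /mnt/quotas.grp
# vxquotaon -u /mnt
# vxedquota rsmith
# vxquota -v rsmith
# vxrepquota /mnt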


QuickLog

- Create a volume (or format a disk partition) to contain the QuickLog device:
    vxassist -g diskgroup make qlog_volume size [vxvm_disk]
    # vxassist -g datadg make qvol01 32m
- Build the QuickLog volume layout:
    qlogmk -g diskgroup vxlog[x] qlog_volume
    # qlogmk -g datadg vxlog1 qvol01
- Enable a QuickLog device:
    mount -F vxfs -o qlog= special mount_point
    # mount -F vxfs -o qlog= /dev/vx/dsk/datadg/datavol /mnt
    or qlogenable [qlog_device] mount_point
    # qlogenable /mnt
- Disable logging by QuickLog without unmounting a VERITAS file system:
    qlogdisable mount_point
- Detach a QuickLog volume from its QuickLog device:
    qlogrm qlog_volume
    # qlogrm qvol01
- Remove the QuickLog volume from the underlying VxVM volume:
    vxedit -g diskgroup -rf rm qlog_volume
    # vxedit -g datadg -rf rm qvol01
- Display the status of QuickLog devices, QuickLog volumes, and VxFS file systems:
    qlogprint
- Print statistical data for QuickLog devices, QuickLog volumes, and VxFS file systems:
    qlogstat [-dvf] [-l qlogdev] [-i interval] [-c count]
    Options:
      -d           Report statistics for all QuickLog devices only.
      -v           Report statistics for all QuickLog volumes only.
      -f           Report statistics for all logged VxFS file systems only.
      -l qlogdev   Report statistics for a specified QuickLog device only.
      -i interval  Print the change in statistics after every interval seconds. Default is 10 seconds.
      -c count     Stop after printing interval statistics count times. Default is 1.


Quick I/O

- Enable Quick I/O at mount time: mount -F vxfs -o qio mount_point
- Disable Quick I/O: mount -F vxfs -o noqio mount_point
- Treat a file as a raw character device:
    filename::cdev:vxfs:
    Example: mydbfile::cdev:vxfs:
- Create a Quick I/O file through a symbolic link:
    qiomkfile [-h [headersize]] [-a] [-s size] [-e|-r size] file
    # qiomkfile -s 100m /database/dbfile
    Options:
      -h   For Oracle database files. Creates a file with additional space allocated for the Oracle header.
      -s   Preallocates space for a file.
      -e   For Oracle database files. Extends the file by a specified amount to allow Oracle tablespace resizing.
      -r   For Oracle database files. Increases the file to a specified size to allow Oracle tablespace resizing.
      -a   Creates a symbolic link with an absolute pathname. Default behavior creates relative pathnames.
- Obtain Quick I/O statistics:
    qiostat [-i interval] [-c count] [-l] [-r] file...
    # qiostat -i 5 /database/dbfile
    Options:
      -c count     Stop after printing statistics count times.
      -i interval  Print updated I/O statistics after every interval seconds.
      -l           Print the statistics in long format, including caching statistics when Cached Quick I/O is enabled.
      -r           Reset statistics instead of printing them.
- Enable Cached Quick I/O for all files in a file system:
    vxtunefs -s -o qio_cache_enable=1 mount_point
    # vxtunefs -s -o qio_cache_enable=1 /oradata
- Disable Cached Quick I/O for a file:
    qioadmin -S filename=OFF mount_point
    # qioadmin -S /oradata/sal/hist.dat=OFF /oradata
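For example, after creating a Quick I/O file (a sketch; qiomkfile normally creates a hidden regular file plus a symbolic link using the ::cdev:vxfs: extension, but verify the exact names produced on your system):

# qiomkfile -s 100m /database/dbfile
# ls -l /database
... dbfile -> .dbfile::cdev:vxfs: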


Appendix E: VERITAS Enterprise Administrator Quick Reference

This section contains the VERITAS Enterprise Administrator (VEA) navigation paths to some frequently used VERITAS Volume Manager and VERITAS File System commands and options. For detailed information on using the VEA graphical user interface, see the VERITAS Volume Manager User's Guide: VERITAS Enterprise Administrator.

Note: You can perform tasks in VEA in a variety of ways. This appendix provides one method for performing tasks.
General VEA Administration and Use

- Set VEA preferences: Select Tools>Preferences.
- Disable Wizard mode: Select Tools>Preferences. On the Volume Manager General tab, remove the check mark from the Enable wizard mode check box.
- Select more than one object: Use Control-Click to select multiple individual objects. Use Shift-Click to select a range of objects.
- Search for an object: Select Tools>Search.

Disk Operations

- Scan for new devices: Select Actions>Rescan.
- Add a disk to a disk group: Select a disk, and select Actions>Add Disk to Dynamic Disk Group.
- Set a disk as a spare disk or reserved disk, or exclude it from hot relocation: Select a disk, and select Actions>Set Disk Usage.
- Rename a disk: Select a disk, and select Actions>Rename Disk.
- Bring a disk online: Select a disk, and select Actions>Online Disk.
- Mirror all volumes on a disk: Select a disk, and select Actions>Mirror Disk.
- Move the contents of one disk to another disk: Select a disk, and select Actions>Evacuate Disk.
- Disconnect a disk in preparation for replacement: Select a disk, and select Actions>Disconnect Disk.
- Replace a disk: Select a disk, and select Actions>Replace Disk.
- Recover all volumes on a disk: Select a disk, and select Actions>Recover Disk.
- Remove a disk from a disk group: Select a disk, and select Actions>Remove Disk from Dynamic Disk Group.
- Disable I/O to a controller: Select a controller, and select Actions>Disable.
- Enable I/O to a controller: Select a controller, and select Actions>Enable.
- Rename an enclosure: Select an enclosure, and select Actions>Rename Enclosure.

Disk Group Operations

- Create a new disk group: Select a disk, and select Actions>New Dynamic Disk Group.
- Upgrade a disk group to the current disk group version: Select a disk group, and select Actions>Upgrade Dynamic Disk Group Version.
- Rename a disk group: Select a disk group, and select Actions>Rename Dynamic Disk Group.
- Deport a disk group: Select a disk group, and select Actions>Deport Dynamic Disk Group.
- Import a disk group: Select a host, and select Actions>Import Dynamic Disk Group.
- Recover all volumes in a disk group: Select a disk group, and select Actions>Recover Dynamic Disk Group.
- Undo hot relocation: Select a disk group, and select Actions>Undo Hot Relocation.
- Clear information about relocated subdisks: Select a disk group, and select Actions>Clear Hot Relocation Info.
- Destroy a disk group: Select a host, and select Actions>Destroy Dynamic Disk Group.
- Move a disk group: Select a disk group, and select Actions>Move Dynamic Disk Group.
- Split a disk group: Select a disk group, and select Actions>Split Dynamic Disk Group.
- Join a disk group: Select a disk group, and select Actions>Join Dynamic Disk Group.

Volume Operations

- Create a volume: Select a disk group, and select Actions>New Volume.
- Resize a volume: Select a volume, and select Actions>Resize Volume.
- Rename a volume: Select a volume, and select Actions>Rename Volume.
- Add a mirror to a volume: Select a volume, and select Actions>Mirror>Add.
- Remove a mirror: Select a volume, and select Actions>Mirror>Remove.
- Add a log to a volume: Select a volume, and select Actions>Log>Add.
- Remove a log: Select a volume, and select Actions>Log>Remove.
- Prevent access to a volume: Select a volume, and select Actions>Stop Volume.
- Change the volume layout: Select a volume, and select Actions>Change Layout.
- Create a volume snapshot: Select a volume, and select Actions>Snap>Snap Start. Then select the volume, and select Actions>Snap>Snap Shot.
- Reattach a snapshot plex to the original volume: Select the original volume, and select Actions>Snap>Snap Back.


- Break the association between a snapshot plex and the original volume: Select the original volume, and select Actions>Snap>Snap Clear.
- Abort a snapstart procedure: Select the volume, and select Actions>Snap>Snap Abort.
- Recover a volume: Select a volume, and select Actions>Recover Volume.
- Remove a volume: Select a volume, and select Actions>Delete Volume.

Viewing Objects and Properties

- Display a graphical view of a disk: Select a disk, and select Actions>Disk View.
- Display the volumes on a disk: Select a volume, and select Actions>Volume View.
- Display a graphical view of a selected volume's layout, components, and properties: Select a volume, and select Actions>Layout View.
- Display a tabular view of volumes and underlying disks: Select a disk group, and select Actions>Disk/Volume Map.
- Display properties of a selected object: Right-click an object, and select Properties.

Managing Tasks

- Display task history: Click the Tasks tab at the bottom of the main window.
- Abort a task: Right-click a task in the Task History pane, and select Abort Task.
- Pause a task: Right-click a task in the Task History pane, and select Pause Task.
- Resume a paused task: Right-click a task in the Task History pane, and select Resume Task.
- Change a task's priority: Right-click a task in the Task History pane, and select Throttle Task.
- Display the underlying CLI command for a task: Right-click a task in the Task History pane, and select Properties.
- Display the command log file: View the file /var/vx/isis/command.log.


Subdisk Operations

- Move a subdisk: In the Volume to Disk Mapping window, select a subdisk, and select Actions>Move Subdisk.
- Split a subdisk: In the Volume to Disk Mapping window, select a subdisk, and select Actions>Split Subdisk.
- Join subdisks: In the Volume to Disk Mapping window, select subdisks, and select Actions>Join Subdisk.

File System Operations

- Add a new file system to a volume: Select a volume, and select Actions>File System>New File System.
- Mount a file system: Select a volume, and select Actions>File System>Mount File System.
- Unmount a file system: Select a volume, and select Actions>File System>Unmount File System.
- Remove a file system from the File System Table: Select a file system, and select Actions>Remove from File System Table.
- Defragment a file system: Select a volume, and select Actions>File System>Defrag File System.
- Create a file system snapshot: Select a file system, and select Actions>Snapshot>Create.
- Perform a file system consistency check: Select a volume, and select Actions>File System>Check File System.
- Monitor file system capacity: Select a file system, and select Actions>File System Usage.
- Display file system properties: Right-click a file system, and select Properties.
- Designate a QuickLog volume: Select a volume, and select Actions>QuickLog>Make Log.
- Enable a QuickLog device: Select a file system, and select Actions>QuickLog>Enable.
- Disable a QuickLog device: Select a file system, and select Actions>QuickLog>Disable.
- Remove a QuickLog volume: Select a QuickLog volume, and select Actions>QuickLog>Remove Log.


Creating a File System: Options in VEA

The New File System dialog box provides options that:
- Set the file system type as vxfs or ufs
- Set the block size as 1024, 2048, 4096, or 8192 bytes
- Set the largefiles flag
- Set the size of the intent log


Mounting a File System: Options in VEA

The Mount File System dialog box provides options that:
- Set a mount point
- Create the mount point
- Mount the file system as read only
- Add the file system to /etc/vfstab
- Mount at boot
- Set the fsck pass number
- Enable or disable Quick I/O
- Enable or disable QuickLog
- Set blkclear
- Set noatime
- Set mincache=closesync, direct, dsync, unbuffered, or tmpcache


Appendix F: VMSA Reference

Using the VMSA Interface


Volume Manager Storage Administrator

The VERITAS Volume Manager Storage Administrator (VMSA) is the graphical user interface for Volume Manager versions prior to 3.5. You can use VMSA to administer disks, volumes, and file systems on local or remote machines. VMSA is a Java-based interface that consists of a server and a client. The VMSA server runs on a UNIX machine that is running VERITAS Volume Manager. The VMSA client runs on any machine that supports the Java Runtime Environment, such as Solaris, HP-UX, or Windows.

Note: You can run VMSA through a web browser, but this is not recommended due to performance issues related to browsers.

The VMSA utility has the following benefits for system administrators:
- Ease of use: VMSA provides quick and easy access to tasks through GUI menus and a task list.
- Remote administration: You can perform Volume Manager administration remotely or locally. The client runs on UNIX or Windows.
- Java-based interface: The VMSA client is a pure Java-based interface that you can run as a Java application.
- Scalability: VMSA can handle systems containing a large number of disks.
- Security: VMSA can only be run by users with appropriate privileges, and access can be restricted to a specific set of users.
- Read-only mode: You can run VMSA in read-only mode for monitoring, training, or browsing purposes.
- Multiple host support: The VMSA client can provide simultaneous access to multiple host machines. You can use a single VMSA client session to connect to multiple hosts, view the objects on each host, and perform administrative tasks on each host. Each host machine must be running the VMSA server.
- Multiple views of objects: VMSA provides multiple views of Volume Manager objects. You can view objects in a hierarchical tree layout, in a list format, and in a variety of graphical views.


Using the VMSA Main Window

VMSA provides a variety of ways to view and manipulate Volume Manager objects. When you launch VMSA, the VMSA main window is displayed. The main window contains these components:
- A hierarchical object tree
- A grid that lists objects and their properties
- A menu bar
- A toolbar
- A status area
- A Command Launcher (hidden by default)


Object Tree

The object tree is located in the left pane of the main window. The object tree is a dynamic, hierarchical display of Volume Manager objects and other objects on the system. Each node in the tree represents a group of objects of the same type. Nodes in the object tree typically include:
- Hosts: Any host machines connected to the current VMSA client session
- Controllers: All controllers on the system
- Disk Groups: All disk groups on the system
- Disks: All disks on the system
- Enclosures: All enclosures (disk arrays) on the system
- File Systems: All mounted file systems on the system
- Free Disk Pool: Any disks that are under Volume Manager control but do not belong to a disk group
- Uninitialized Disks: Any disks that are not under Volume Manager control
- Volumes: All volumes on the system
- Clusters: Sets of hosts that share sets of disks (clusters are only visible in a cluster environment with the optional VxVM cluster functionality)
- Replicated Configurations: Used with VERITAS Volume Replicator

Using the Object Tree
To reveal the hierarchy under each node, expand the node by clicking the plus sign to the left of the node icon. When you select a node in the object tree, objects of that type appear in the grid in the right pane.

Object Tree


Grid

The right pane contains a grid: a tabular display of objects and their properties. The grid displays objects that belong to the group that is currently selected in the object tree. The grid is dynamic and constantly updates its contents to reflect changes to objects.

Using the Grid
- Sorting the grid: You can sort the contents of a property column by clicking the column heading. Click the column heading again to reverse the sort order.
- Resizing the grid: The splitter is the vertical bar that separates the object tree from the grid. You can resize the left and right panes by pressing and holding the mouse button over the splitter and dragging it to the left or right.
- Replicating the grid: You can replicate the grid in a separate window by selecting Window>Copy Main Grid. A copy of the grid appears in a separate window, which lets you display different sets of objects at the same time.
- Printing grid contents: To print the contents of the grid, select File>Print Grid and complete the Print dialog box.
- Selecting objects: To select multiple objects, press the Control key while clicking objects. To select multiple adjacent objects, click the first object in the range and press the Shift key while clicking the last object in the range.
- Displaying objects: When you click an object group in the object tree, all objects in the selected group are displayed in the grid.

Grid


Menu Bar

The menu bar at the top of the main window contains the following menus:
- File: Provides access to the New menu, which creates volumes, disk groups, and file systems. The File menu also establishes new host connections, prints the contents of the main window, closes the main window, provides access to an object Properties window, and exits VMSA.
- Options: Provides access to the Customize window, which displays and sets user preferences for the components of VMSA. The Options menu also saves or loads user preferences, removes any alert icons from the status area, and sets VMSA to read-only mode.
- Window: Opens another VMSA main window, the Task Request Monitor window, the Alert Monitor window, the Object Search window, a copy of the main grid, or the Command Launcher.
- Selected: A context-sensitive menu that launches tasks on a selected object. The Selected menu is dynamic and changes its options based on the type of object that is selected. By default, the Selected menu is grayed out. When an object is selected, the Selected menu is renamed and provides access to tasks appropriate for the selected object. For example, Selected becomes Volumes when a volume is selected, and the Volumes menu provides access to volume tasks.
- Help: Provides access to online help for VMSA.

Menu Bar


Toolbar

The toolbar contains the following buttons:
- VMSA: Launches an additional VMSA main window.
- Task: Launches the Task Request Monitor window, which displays a list of tasks performed in the current session.
- Alert: Launches the Alert Monitor window, which identifies any objects that have experienced failures or errors and describes the problems.
- Search: Launches the Object Search window, which is used to search for objects on the system.
- Grid: Launches a copy of the main grid in a new window.
- New: Launches the New Volume dialog box, which is used to create a volume.
- Host: Launches the Connect to Host dialog box.
- Props: The Properties (Props) button launches the Object Properties window for a selected object.
- Print: Launches the Print dialog box for a selected object. This dialog box is used to print details about a specific object.
- Custm: The Customize (Custm) button launches the Customize window, which you use to set preferences for the appearance of VMSA components.

Moving the Toolbar: You can separate the toolbar from the main window or move the toolbar to the bottom, side, or top of the main window. To reposition the toolbar, press and hold the mouse button over the toolbar handle to the left of the toolbar and drag the toolbar to its new location.


Status Area The status area is located at the bottom right corner of the main window. When an object fails or experiences an error, an alert (error) icon appears in the status area. The Alert Monitor window provides details about the error. You can access the Alert Monitor window by clicking the alert icon in the status area.

Status Area


Command Launcher

The Command Launcher displays a list of tasks that can be performed on objects. Each task is listed with the object type, the command, and a description of the task. When you click a task in the Command Launcher list, the task is started, and the dialog box for the task appears. The Command Launcher is hidden by default.

Using the Command Launcher
- Displaying the Command Launcher: You can display or hide the Command Launcher by selecting Window>Command Launcher.
- Docking the Command Launcher: You can separate or attach the Command Launcher and the main window by selecting Options>Customize and clicking Dock Command Launcher in the Customize window's Main Window tab.
- Resizing the Command Launcher: The splitter is the horizontal bar that separates the Command Launcher from the object tree and grid. When the Command Launcher is attached to the main window, you can adjust its height by placing the pointer over the horizontal splitter and dragging the splitter to the desired position.
- Sorting the Command Launcher: You can sort the items listed in the Command Launcher by object type, command, or task description by clicking the appropriate column heading. You can reverse the sort order by clicking the column heading again.


Other Views in VMSA

The main window in VMSA provides the object tree and grid views of VxVM objects. You can also view objects and their details in other ways:
- Object View: The Object View window displays a graphical view of all VxVM objects. (The Object View is similar to the Classic View from the Volume Manager Visual Administrator (VxVA) motif graphical user interface. The VxVA software was available with older versions of VxVM but is no longer available with VERITAS Volume Manager.)
- Volume Layout Details: The Volume Layout Details window displays a close-up graphical view of the layout, components, and properties of a single volume.
- Volume to Disk Mapping: The Volume to Disk Mapping window displays a tabular view of volumes and their relationships to underlying disks.
- Object Properties: The Properties window displays properties on a set of tabbed pages.


Displaying Objects Graphically The Object View window displays a graphical view of volumes, disks, and other objects in a disk group. The object view window is dynamic, so the objects displayed are automatically updated when object properties change. You can select objects or perform tasks on objects in the Object View window. To display the Object View window, select a disk group in the main window, then select Disk Groups>Object View from the Selected menu.


Object View Window Components

The Object View window consists of four main components: the volume pane in the upper part of the window, the disk pane in the lower part of the window, a menu bar, and a toolbar.

Volume Pane
The volume pane is a graphical display of volumes in a particular disk group. This pane can display various levels of detail for volumes:
- Basic mode shows minimal information about a volume and displays a compressed view of volumes.
- Layout mode shows a volume's components and layout: its subdisks and mirrors, as well as any columns or logs.
- Detailed mode shows detailed information about a volume and its components, displaying properties of the volume and its components.

Disk Pane
The disk pane is a graphical display of disks in a particular disk group. This pane can display various levels of detail for disks:
- Basic mode shows minimal information about a disk and displays a compressed view of disks.
- Layout mode shows a disk's regions and layout: the subdisks and free space on the disk.
- Detailed mode shows detailed information about a disk and its subdisks and free space, displaying properties of the disk and its regions.

Menu Bar
The menu bar at the top of the Object View window contains the following menus:
- File: Provides access to the New menu, which creates volumes and file systems. The File menu also displays another disk group, prints the properties of a selected object, closes the Object View window, or provides access to an object Properties window.
- Options: Provides access to the Volumes and Disks menus, which set the display mode for all volumes or disks in the Object View and expand or collapse all volumes or disks. The Options menu also clears projection settings.
- Window: Launches a window that displays any dissociated objects, or launches the Projection window.
- Selected: The context-sensitive Selected menu accesses tasks or properties for a selected object. The Selected menu's name and options depend on the type of object that is selected.


Toolbar
The toolbar at the top of the Object View window has the following set of buttons:
- Expand Volume: Shows more detailed information about all volumes in the Object View.
- Collapse Volume: Hides details for all volumes in the Object View.
- Expand Disk: Shows more detailed information about all disks in the Object View.
- Collapse Disk: Hides details for all disks in the Object View.
- Project: Launches the Projection window.
- Print: Prints the properties of a selected object.

Projection
Projection shows the relationships between objects by highlighting objects that are related to or part of a specific object. When both disks and volumes are in layout mode or detailed mode, clicking a subdisk in a volume (or disk) highlights the location of the subdisk on the corresponding disk (or volume).

Moving Subdisks
You can move subdisks by dragging the subdisk icons to their new locations. You can drag a subdisk to another disk or to a gap on the same disk.
Note: Moving subdisks reorganizes a volume's disk space and should be done with caution.

Refreshing the Object View
To refresh the Object View window, right-click the background area of the window and select Reset Server from the popup menu. Click OK in the Reset Server dialog box to update the contents of the Object View window and fix any display problems. Under normal circumstances, the Object View window automatically updates its contents to reflect object changes and does not require a manual refresh.

Printing Object Properties
To print the properties of a particular object, select the object and then select File>Print.


Displaying Volume Layout Details
The Volume Layout Details window displays a close-up graphical view of the layout, components, and properties of a single volume. This window is not dynamic, so the objects displayed are not automatically updated when the volume properties change. To display the Volume Layout Details window, select a volume in the main window, then select Volumes>Show Layout from the Selected menu.

Using the Volume Layout Details Window
- To update or refresh this view, select File>Update.
- To print the properties of a particular object, select the object, select File>Print, and complete the Print dialog box.
- To view a different volume, select File>Open and then specify another volume in the Open Volume dialog box.
- To hide the detailed information within each object, select View>Compress Display. To show details for a particular object in the compressed display, click on that object.
- To highlight objects that are related to or are part of a specific object, select View>Project On Selection and then click on an object.
- To highlight any subdisks on the same disk as a specific subdisk, select View>Subdisk Projection and then click on a subdisk.


Displaying Volume to Disk Mapping
The Volume to Disk Mapping window displays a tabular view of volumes and their relationships to underlying disks. Volumes are listed in the top row of the table, and disks are listed in the left column of the table. Each circle icon in the table indicates that part of the corresponding volume is located on the corresponding disk. This window is dynamic, so the contents are automatically updated when objects are added, removed, or changed. The Volume to Disk Mapping window also has a performance monitoring feature that ranks volume response time. To display the Volume to Disk Mapping window, select a disk group in the main window grid, then select Disk Groups>Disk/Volume Map from the Selected menu.


Displaying Object Properties
The Properties window contains detailed information about a selected object. The Properties window consists of a set of tab pages that contain information about objects and related objects. The tab labels and page contents vary with the type of object selected. To display the Properties window, select an object in the main window grid, then select Properties from the Selected menu. You can also access the Properties window by double-clicking on an object in the main window grid.


VMSA Properties File
VMSA properties can be found and changed in the properties file located in /opt/VRTSvmsa/vmsa/properties. The contents of the file are as follows:
#These are the resources for vmsa on Solaris
#
# gui properties
#
vrts.server.host=unknown
vrts.codebase=./
vrts.iconbase=../../
vrts.client.localdir=.vmsa
#
# gui and server properties
#
vrts.osname=Solaris
vrts.security=true
vrts.debug.startup=false
#
# server configuration properties
#
vrts.allowRemoteConnections=true
vrts.server.adminGroup=vrtsadm
vrts.server.readonly=false
vrts.server.readonlyGroup=vrtsro
vrts.server.waitForAllTasks=false
# The following property is used only if the server is automatically started
# on a client connection. Specify timeout in minutes. It takes about half a
# minute to bring up the server. A value of 0 means no timeout.
vrts.server.noClientsTimeout=15
vrts.userPreferenceDir=preferences
vrts.userBaseDir=/var/opt/vmsa
vrts.taskLogFile=/var/opt/vmsa/logs/command
vrts.accessLogFile=/var/opt/vmsa/logs/access
#
# os dependent properties
#
vrts.vfstab=/etc/vfstab
vrts.vxbin=/usr/sbin/
vrts.vxldbin=/sbin/
vrts.vxetcbin=/etc/vx/bin/
vrts.vxfsbin=/usr/lib/fs/vxfs/
vrts.fsdir=/usr/lib/fs/
vrts.defaultfs=ufs
#
vrts.vxbdevdir=/dev/vx/dsk/
vrts.vxrdevdir=/dev/vx/rdsk/
#
vrts.devdsk=/dev/dsk/
vrts.devrdsk=/dev/rdsk/
#
vrts.devdmp=/dev/vx/dmp/
vrts.devrdmp=/dev/vx/rdmp/
#
vrts.mountCmd=/sbin/mount
vrts.umountCmd=/sbin/umount
vrts.mkfsCmd=/usr/sbin/mkfs
vrts.fsckCmd=/usr/sbin/fsck
vrts.killCmd=/bin/kill
vrts.shutdownCmd=/usr/sbin/shutdown -g60 -y -i6
vrts.scanDisksCmd=/usr/sbin/drvconfig;/usr/sbin/disks;/usr/sbin/vxdctl enable
vrts.startClusterCmd=/bin/ksh -c 'echo y|/opt/SUNWcluster/bin/pdbadmin startcluster $(hostname) $(cat /etc/opt/SUNWcluster/conf/default_clustername)'
vrts.stopClusterCmd=/bin/ksh -c '/opt/SUNWcluster/bin/clustm stopall $(cat /etc/opt/SUNWcluster/conf/default_clustername)'
vrts.startNodeCmd=/opt/SUNWcluster/bin/pdbadmin startnode
vrts.stopNodeCmd=/bin/ksh -c '/opt/SUNWcluster/bin/pdbadmin stopnode $(cat /etc/opt/SUNWcluster/conf/default_clustername)'
#
vrts.vfstabDevCol=0
vrts.vfstabDevFsckCol=1
vrts.vfstabMountPointCol=2
vrts.vfstabFsTypeCol=3
vrts.vfstabFsckPassCol=4
vrts.vfstabMountAtBootCol=5
vrts.vfstabOptionsCol=6
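As an illustration, to run the VMSA server in read-only mode, you might set vrts.server.readonly=true in this file and then restart the server. This is only a sketch: it assumes the vmsa_server script lives in /opt/VRTSvmsa/bin and accepts a -k (kill) option, so verify both against your installation before relying on them:

# /opt/VRTSvmsa/bin/vmsa_server -k
# /opt/VRTSvmsa/bin/vmsa_server &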


Volume Manager Tunable Parameters

VxVM Tunable Parameters


Tunables for the VxVM System I/O Driver
The kernel parameters described in this section define the behavior of Volume Manager's system I/O driver (vxio). The vxio driver sets all of these parameters to internal default values.

Viewing Tunable Parameters
You can display the default value of an individual tunable parameter by using the command:
# echo parameter/D|adb -k

For example:
# echo vol_maxio/D|adb -k

You can view the internal default values of all tunable parameters by using the command:
# prtconf -vP

Setting Tunable Parameters
You can change the value of a parameter, and override the internal default, by adding the parameter to /kernel/drv/vxio.conf and rebooting. For example, to change the tunable parameter vol_max_vol, add the parameter and the new value to the /kernel/drv/vxio.conf file:
1 Open the /kernel/drv/vxio.conf file in a text editor:
# vi /kernel/drv/vxio.conf
2 Add the parameter and new value to the end of the file:
name=vxio parent=pseudo instance=0 vol_max_vol=5000;
3 Save the file and quit.
4 Reboot the system:
# /usr/sbin/shutdown -g0 -y -i6
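After the reboot, you can confirm that the kernel picked up the new value by querying the vxio driver with the adb command shown earlier. A brief sketch; the exact adb output format varies by Solaris release (output trimmed here):

# echo vol_max_vol/D | adb -k
vol_max_vol:    5000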
Volume Manager Tunable Parameters

vol_checkpt_default
Description: The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. A system failure during such operations does not require a full recovery, but can continue from the last reached checkpoint.
Internal Default: 20480 sectors (10 MB)
Implications and Limitations: Increasing this size reduces the overhead of checkpointing on recovery operations at the expense of additional recovery following a system failure. Limited by RAM in the system.


vol_default_iodelay
Description: Wait time for I/Os issued from processes directed to slow down, but not given any wait time. Utilities that are resynchronizing mirrors or rebuilding RAID-5 columns use this value.
Internal Default: 50 clock ticks
Implications and Limitations: Increasing this value results in slower recovery operations and lower system impact while recoveries are being performed. Value is unlimited.

vol_fmr_logsz
Description: The maximum size in kilobytes of the bitmap that FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed. For example, if the volume size is 1 GB and the system block size is 512 bytes, a vol_fmr_logsz value of 4 yields a map containing 32,768 bits, each bit representing one region of 64 blocks. The larger the bitmap size, the fewer the number of blocks that are mapped to each bit. This can reduce the amount of reading and writing required on resynchronization, at the expense of requiring more nonpagable kernel memory for the bitmap.
Internal Default: 4096 bytes (4K)
Implications and Limitations: The total memory overhead is one accumulator bitmap plus one bitmap for each mirror or snapshot that is tracked by FastResync. In configurations with thousands of mirrors and attached snapshot plexes, this can represent a significantly higher overhead in memory consumption than is usual for VxVM. Minimum: 1K. Maximum: 32K. Note: The value of this tunable does not have any effect on persistent FastResync.

vol_max_vol
Description: Maximum number of volumes that can be created on the system.
Internal Default: 131071 volumes (128K)
Implications and Limitations: Caution: Increasing this parameter uses up configuration database register space. Minimum: 1. Maximum: Maximum number of minor numbers on the system.

vol_maxio
Description: Maximum size of logical I/O operations that can be performed without breaking up the request into separate operations.
Internal Default: 2048 sectors (1 MB)
Implications and Limitations: Increase this value when doing Direct I/O. Do not increase beyond 20% of RAM, or else the system could deadlock. Minimum: Size of the largest full stripe in a striped or RAID-5 volume. Maximum: 20% of RAM. Interdependency: The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.
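For example, the interdependency noted above means that if you raise vol_maxio to 4096 sectors (2 MB), voliomem_maxpool_sz must cover at least ten times that amount of memory, that is, at least 20971520 bytes (20 MB). A hypothetical vxio.conf sketch, with values chosen only for illustration:

name=vxio parent=pseudo instance=0 vol_maxio=4096 voliomem_maxpool_sz=20971520;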



vol_maxioctl
Description: Maximum data size that can be passed to VxVM with an ioctl call.
Internal Default: 32768 bytes (32K) (Do not reduce!)
Implications and Limitations: Some utilities issue a minimum 32K size ioctl call and may fail. Minimum: 32768 bytes. Maximum: RAM in the system.

vol_maxkiocount
Description: Maximum number of parallel I/Os that VxVM can perform at once.
Internal Default: 4096 I/Os
Implications and Limitations: Increasing this value is unlikely to provide much benefit, because most process threads only issue one I/O at a time anyway. Limited by the type of system and amount of RAM.

vol_maxparallelio
Description: Number of I/Os which vxconfigd can request from the kernel in a single read I/O per one write I/O.
Internal Default: 256 I/Os
Implications and Limitations: Changing this tunable does not have much effect. Minimum: 256 I/Os.

vol_maxspecialio
Description: Maximum size of an I/O that can be issued by an ioctl call.
Internal Default: 2 MB
Implications and Limitations: If an ioctl exceeds this value, the request may fail or be broken up into separate operations. Minimum: Size of your largest stripe. Below 32K is not recommended.

vol_mvr_maxround
Description: Controls the granularity of the round-robin read policy for mirrored volumes. A read is serviced by the same mirror as the last read if its offset is within this much of the last read.
Internal Default: 512 sectors (256K)
Implications and Limitations: Increasing this value can help sequential I/O, because it causes less switching to alternate mirrors for reading. Large numbers of randomly distributed volume reads are generally best served by reading from alternate mirrors.

vol_subdisk_num
Description: Maximum number of subdisks which can be associated to a single plex.
Internal Default: 4096 subdisks
Implications and Limitations: Caution: Increasing this parameter uses up configuration database register space. Value is unlimited.

volcvm_smartsync
Description: Enables or disables the SmartSync Recovery Accelerator. If set to 0, this parameter disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups.
Implications and Limitations: Do not use this parameter (that is, ensure that it is set to 0) if Cluster Volume Manager is not used or enabled on the system.



voldrl_max_drtregs
Description: Maximum number of dirty regions which can exist on the system at any time.
Internal Default: 2048 regions
Implications and Limitations: Increasing improves system performance, at the expense of recovery time. Value is unlimited.

voldrl_max_seq_dirty
Description: Number of dirty regions allowed for sequential DRL. Some volumes, such as those used for Oracle replay logs, are written sequentially and do not benefit from lazy cleaning of the DRL bits. For these volumes, sequential DRL can be used to further restrict the number of dirty bits and speed up recovery.
Internal Default: 3 regions
Implications and Limitations: Increasing and using sequential DRL on volumes that are written sequentially may severely impact I/O throughput.

voldrl_min_regionsz
Description: Amount of volume data that is represented by each Dirty Region Log bit.
Internal Default: 1024 sectors (512K)
Implications and Limitations: Increasing causes the cache hit ratio for regions to improve, but prolongs recovery time. Minimum: 512K. No maximum.

voliomem_chunk_size
Description: Granularity value that defines how VxVM releases and acquires system memory.
Internal Default: 65536 bytes (64K)
Implications and Limitations: A larger value reduces memory allocation overhead by allowing Volume Manager to maintain a larger amount of memory. Limited by system memory, and other drivers and applications on the system.

voliomem_maxpool_sz
Description: Prevents one I/O from using all the memory in the system. When a write for a RAID-5 volume is greater than voliomem_maxpool_sz/10, it is broken up into chunks of voliomem_maxpool_sz/10. When a write for a mirrored volume is greater than voliomem_maxpool_sz/2, it is broken up into chunks of voliomem_maxpool_sz/2.
Internal Default: 5% of memory, up to a maximum of 128 MB
Implications and Limitations: Increasing this size can allow additional tracing to be performed at the expense of system memory usage. Limited by RAM in the system. Interdependency: The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.



voliot_errbuf_default
Description: Sets the default size of the error trace buffer.
Internal Default: 16384 bytes (16K)
Implications and Limitations: Increasing this buffer provides storage for more error events at the expense of system memory. Decreasing the size of the buffer could lead to a situation where an error cannot be detected using the tracing device. Applications that depend on error tracing to perform some responsive action are dependent on this buffer.

voliot_iobuf_dflt
Description: Sets the size of the trace buffer if nothing else is defined.
Internal Default: 8192 bytes (8K)
Implications and Limitations: If trace data is lost because this buffer size is too small, then this value can be increased.

voliot_iobuf_limit
Description: Controls the maximum amount of memory used by trace buffering.
Internal Default: 4194304 bytes (4 MB)
Implications and Limitations: Increasing this size can allow additional tracing to be performed at the expense of system memory usage. It is not advisable to set this value to a size greater than that which can readily be accommodated on the system.

voliot_iobuf_max
Description: Controls the maximum amount of memory used by a single trace buffer.
Internal Default: 1048576 bytes (1 MB)
Implications and Limitations: Increasing this buffer provides for larger traces to be taken without loss for very heavily used volumes. Never set to more than voliot_iobuf_limit.

voliot_max_open
Description: Sets the limit of simultaneous vxtrace channels (threads).
Internal Default: 32 channels
Implications and Limitations: The allocation of each channel takes up approximately 20 bytes even when not in use. Limited by RAM in the system.

volraid_minpool_sz
Description: Sets the initial amount of memory requested from the system for RAID-5 operations. Note: This parameter is used internally by VxVM and cannot be modified manually.
Internal Default: 4194304 bytes (4 MB)
Implications and Limitations: The maximum size of this memory is limited by the value of voliomem_maxpool_sz.

volraid_rsrtransmax
Description: Maximum number of transient reconstructs on a RAID-5 volume at one time.
Internal Default: 1 reconstruct operation
Implications and Limitations: Increasing this size may improve the initial performance on the system when a failure first occurs and before a detach of a failing object is performed, but can lead to possible memory starvation conditions. Limited by RAM in the system.


Tunables for the VxVM DMP Driver
The parameters described in this section define the behavior of Volume Manager's dynamic multipathing (DMP) driver (vxdmp). The vxdmp driver sets these parameters to internal default values. You can modify these parameters by using the same method used for editing the /kernel/drv/vxio.conf file.

dmp_pathswitch_blks_shift
Description: The number of contiguous I/O blocks (expressed as an integer power of 2) that are sent along a DMP path to an Active/Active array before switching to the next available path.
Internal Default: 11, so that 2048 blocks (1 MB) of contiguous I/O are sent over a DMP path before switching.
Implications and Limitations: For intelligent disk arrays with internal data caches, better throughput may be obtained by increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 15 and 17 for an I/O activity pattern that consists mostly of sequential reads or writes.


Troubleshooting Quick Reference


Disk Failures and Solutions


This table summarizes possible disk failures and the steps to solve the problem. In each example, the disk that fails is c1t1d0.
Problem: The drive was turned off, and turned back on.
Safe Solution:
# prtvtoc /dev/rdsk/c1t1d0s2
# vxdctl enable
# vxreattach
# vxrecover -s
For nonredundant volumes:
# vxvol -g diskgroup -f start volume
(Check data consistency.)
Quick Solution:
# vxdctl enable
# vxreattach -r
For nonredundant volumes:
# vxvol -g diskgroup -f start volume
(Check data consistency.)

Problem: The drive failed, and was replaced with a new drive in the same slot.
Safe Solution:
# prtvtoc /dev/rdsk/c1t1d0s2
# vxdctl enable
# vxdiskadm (option 5)
For nonredundant volumes:
# vxvol -g diskgroup -f start volume
(Restore data from backup.)

Problem: The drive failed, and was replaced with a new drive in a new SCSI location.
Safe Solution:
# drvconfig
# disks
# prtvtoc /dev/rdsk/c1t1d0s2
# vxdctl enable
# vxdiskadm (option 5)
For nonredundant volumes:
# vxvol -g diskgroup -f start volume
(Restore data from backup.)

Problem: The drive is experiencing intermittent failures.
Safe Solution:
# vxdiskadm (option 7)
# vxdiskadm (option 3)
Quick Solution:
# vxdiskadm (option 4)
# vxdiskadm (option 5)
For nonredundant volumes:
# vxvol -g diskgroup -f start volume
(Restore data from backup.)

Problem: The drive is experiencing intermittent failures and the system has slowed down significantly.
Safe Solution:
# vxdiskadm (option 3)
Quick Solution:
# vxdiskadm (option 4)
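Before and after applying any of these solutions, it can help to confirm how VxVM currently sees the failed disk and the affected volumes. A quick sketch using standard status commands; diskgroup here is a placeholder name:

# vxdisk list | grep c1t1d0
# vxprint -g diskgroup -ht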


Volume and Plex State Problems and Solutions


This table summarizes some of the common problems you may experience related to volume and plex states. This table is intended as a quick reference and is not comprehensive.
Object State Problems and Solutions

Object states:
volume: DISABLED/EMPTY
plex1: DISABLED/EMPTY
plex2: DISABLED/EMPTY
Known Problem: Volume has just been created.
Solution:
# vxvol init clean volume plex1
# vxrecover -s volume

Object states:
volume: DISABLED/CLEAN
plex1: DISABLED/CLEAN
plex2: DISABLED/STALE
Known Problem: Normal
Solution:
# vxrecover -s volume
Quick Solution:
# vxvol start volume

Object states:
volume: ENABLED/ACTIVE
plex1: ENABLED/ACTIVE
plex2: DISABLED/STALE
Known Problem: Normal
Solution:
# vxrecover volume

Object states:
volume: DISABLED/CLEAN
plex1: DISABLED/CLEAN
plex2: DISABLED/CLEAN
Known Problem: Normal
Solution:
# vxrecover -s volume
Quick Solution:
# vxvol start volume

Object states:
volume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
plex2: DISABLED/STALE
Known Problem: plex1 disk turned off, then back on; data okay
Solution:
# vxmend fix stale plex1
# vxmend fix clean plex1
# vxrecover -s volume

Object states:
volume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
plex2: DISABLED/STALE
Known Problem: plex1 disk failed and replaced; data nonexistent
Solution:
# vxmend fix stale plex1
# vxmend fix clean plex2
# vxrecover -s volume
Note: You have to verify that the volume has your data; if not, you will have to restore from backup.

Object states:
volume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
Known Problem: plex1 disk turned off, then back on; data okay
Solution:
# vxvol -f start volume
(Check data consistency.)
Note: Not redundant, so -f will not hurt.

Object states:
volume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
Known Problem: plex1 disk failed and replaced; data nonexistent
Solution:
# vxvol -f start volume
(Restore from backup.)
Note: Not redundant, so -f will not hurt; you must restore from backup.

Object states:
volume: ENABLED/ACTIVE
plex0: ENABLED/ACTIVE
subvolume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
plex2: DISABLED/STALE
Known Problem: plex1 disk turned off, then back on; data okay
Solution:
# vxmend fix stale plex1
# vxmend fix clean plex1
# vxrecover -s volume


Object states:
volume: ENABLED/ACTIVE
plex0: ENABLED/ACTIVE
subvolume: DISABLED/ACTIVE
plex1: DISABLED/RECOVER
plex2: DISABLED/STALE
Known Problem: plex1 disk failed and replaced; data nonexistent
Solution:
# vxmend fix stale plex1
# vxmend fix clean plex2
# vxrecover -s volume
Note: You have to verify that the volume has your data; if not, you will have to restore from backup.

Object states:
volume: ENABLED/ACTIVE
plex0: ENABLED/ACTIVE
subvolume: ENABLED/ACTIVE
plex1: ENABLED/ACTIVE
plex2: DISABLED/STALE
Known Problem: Normal
Solution:
# vxrecover -s volume
Quick Solution:
# vxrecover

Object states:
volume: ENABLED/ACTIVE
plex0: ENABLED/ACTIVE
subvolume: DISABLED/CLEAN
plex1: DISABLED/CLEAN
plex2: DISABLED/CLEAN
Known Problem: Normal
Solution:
# vxrecover -s volume
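For example, to repair a mirrored volume whose plex is in the DISABLED/RECOVER state after its disk was turned off and back on (and the data is known to be good), you would apply the vxmend sequence shown above with real object names. A sketch using hypothetical names (disk group datadg, volume datavol, plex datavol-01):

# vxmend -g datadg fix stale datavol-01
# vxmend -g datadg fix clean datavol-01
# vxrecover -g datadg -s datavol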


Volume Manager Start-Up Scripts



The /etc/rcS.d Directory
The following scripts are from the /etc/rcS.d directory:

S25vxvm-sysboot
- The restore daemon checks the health of disabled device node paths at a polling interval of 300 seconds.
- vxconfigd is started in boot mode.
- The rootdg disk group is found and imported.
- Volumes that are needed for booting are started, but no resynchronization is performed.
- If you booted on a stale root plex, the boot process is stopped, and the user is given instructions regarding the locations of the other known mirrors.

S35vxvm-startup1
- This script is run after / and /usr have been mounted read-only by S25vxvm-sysboot.
- Any special volumes (such as /var, /var/adm, and /usr/kvm) that are separately mountable are started, but no resynchronization is performed.
- The dump device is set up.

S85vxvm-startup2
- This script runs ten vxio kernel daemons.
- vxconfigd is transitioned from boot to enabled mode.
- DMP is initialized, and DMP device nodes are created or refreshed.
- All disks that can be autoconfigured (sliced-type disks) are defined.
- All disk groups marked for autoimport are imported.
- All failed disk media (DM) records are attached to their disk access (DA) records.
- All volumes are started, but no resynchronization is performed.

S86vxvm-reconfig
- Any new disks are added.
- Any upgrades are finished.
- Any reconfigurations or encapsulations are performed.
- The system checks for valid ELAN licenses.
- The system is rebooted if needed.
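You can list the VxVM scripts installed in this directory to confirm which ones run at boot; the four scripts described above should appear, although the listing on your system may differ:

# ls /etc/rcS.d | grep vxvm
S25vxvm-sysboot
S35vxvm-startup1
S85vxvm-startup2
S86vxvm-reconfig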


The /etc/rc2.d Directory
The system transitions from single-user to multi-user mode by running the scripts in the /etc/rc2.d directory. The following scripts are from the /etc/rc2.d directory:

S94vxnm-host_infod (for VERITAS Volume Replicator (VVR) only)
- The RPC server is started.

S94vxnm-vxnetd (for VERITAS Volume Replicator (VVR) only)
- The vxnetd daemon is started.

S95vxvm-recover
- All recovery and resynchronizations are started.
- The relocation daemons are started.
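After the system reaches multi-user mode, you can verify that S95vxvm-recover started the relocation daemon. A quick check, assuming the default hot-relocation daemon name vxrelocd:

# ps -ef | grep vxrelocd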


Operating VxVM and VxFS in a Linux Environment

Objectives
After reviewing this appendix, you will be able to:
- List the VERITAS solutions currently available for enterprise Linux environments.
- Describe key differences between the Linux and Solaris operating systems.
- List supported features of VxVM and VxFS on Linux.
- Identify VxVM and VxFS for Linux installation prerequisites.
- Install VxVM and VxFS on Linux.
- Identify VxVM operational differences on Linux.
- Identify VxFS operational differences on Linux.

Introduction
Overview
For companies who are integrating the Linux platform into their enterprise, this appendix describes the differences that exist between VERITAS Foundation Suite components (VERITAS Volume Manager and VERITAS File System) on Linux and on Solaris. This appendix highlights key differences in installation, operation, and configuration.

Importance
As Linux becomes more appropriate for the enterprise, many organizations are integrating the Linux operating system into their IT infrastructures. Enterprise users are attracted by the ability of Linux to run on a variety of hardware platforms, its ability to reliably support network and server loads, and its expanding role in storage networks. VERITAS brings storage management solutions for UNIX environments to the Linux platform, enabling Linux servers to better handle the most demanding, data-intensive applications.

Outline of Topics
- VERITAS Solutions for Enterprise Linux
- Comparing Linux and Solaris
- VxVM and VxFS for Linux: Supported Features
- VxVM and VxFS for Linux: Installation Prerequisites
- Installing VxVM and VxFS on Linux
- Operating VxVM on Linux
- Operating VxFS on Linux





VERITAS Solutions for Enterprise Linux


Current VERITAS products for Linux include:
- VERITAS Foundation Suite for Linux: VERITAS Foundation Suite enables companies to manage more storage with fewer resources, and to increase application availability and productivity.
- VERITAS Cluster Server for Linux: VERITAS Cluster Server delivers high availability clustering solutions for any application running on open systems platforms. Through proactive, policy-based provisioning and load balancing, VERITAS Cluster Server also optimizes resource and server usage.
- VERITAS NetBackup DataCenter for Linux: VERITAS NetBackup provides enterprise-class backup and recovery capabilities and now provides data protection for Linux environments. VERITAS NetBackup DataCenter is a mainframe-strength solution designed for the most data-intensive environments.
- VERITAS NetBackup BusinesServer for Linux: VERITAS NetBackup BusinesServer is an easy-to-use solution designed for small- to medium-size enterprise installations, combining the performance necessary for high-speed backups with an intuitive, centralized administrative interface.
- VERITAS ServPoint NAS for Linux: VERITAS ServPoint storage appliance software transforms industry-standard hardware components into open, enterprise-class appliances that simplify storage administration and lower the total cost of storage ownership.
For more information on VERITAS solutions for Linux, visit http://www.veritas.com.


Comparing Linux and Solaris


Linux: Open Source Operating System
Linux is an open source operating system that has evolved over the past decade through the contributions of many individuals, schools, and companies. While the Linux source code is freely downloadable, companies such as RedHat and SuSE offer packaged distributions of the Linux operating system.
The initial design of the Linux operating system combined concepts and features of the standard SVR2 UNIX and Minix operating systems. However, the Linux operating system today has diverged significantly from both of those platforms.
The differences that you notice between the Linux and Solaris operating systems vary depending on your experience level:
- User-level Linux programmers are likely to notice no significant differences in the Solaris and Linux interfaces. At this level, Linux and Solaris are, for all practical purposes, the same UNIX platform.
- Experienced Linux kernel programmers are likely to find similarities between Solaris and Linux at the kernel level, given that you can trace the original roots of Linux to SVR2.
- New Linux kernel developers, familiar with SVR4, are likely to find very little in common with a standard SVR4 UNIX system.



Linux Device Naming
On Linux, device names are displayed in the format:
sdx[N]
hdx[N]
In the syntax:
- sd refers to a SCSI disk, and hd refers to an EIDE disk.
- x is a letter that indicates the order of disks detected by the operating system. For example, sda refers to the first SCSI disk, sdb references the second SCSI disk, and so on.
- N is an optional parameter that represents a partition number in the range 1 through 15. For example, sda7 references partition 7 on the first SCSI disk. If the partition number is omitted, the device name indicates the entire disk.
On Linux, VxVM commands display device names in this format. For example, the vxdg free command displays the following output:

GROUP    DISK      DEVICE   TAG   OFFSET   LENGTH    FLAGS
rootdg   disk01    sda      sda   0        4444228   -
rootdg   disk02    sdb      sdb   0        4443310   -
newdg    newdg01   sdc      sdc   0        4443310   -
newdg    newdg02   sdd      sdd   0        4443310   -
oradg    oradg01   sde      sde   0        4443310   -

Note: On Solaris, device names are displayed in the c#t#d#s# format (with the partition) and device tags are displayed in the c#t#d# format (without the partition).



VxVM and VxFS for Linux: Supported Features


The major differences between Linux and Solaris are in the way that you install and configure VxVM and VxFS, rather than in functional and operational aspects.

Supported Features: VxVM
This release of VxVM for Linux supports almost all of the features that are available on Solaris, including:
- Concatenation and spanning
- Striped, mirrored, striped mirror, and RAID-5 layouts
- Disk groups
- Command line interface and VERITAS Enterprise Administrator (Java-based graphical user interface)
- Online relayout and task monitoring
- Hot relocation
- Dynamic multipathing
- Device discovery layer (DDL)
- Enclosure-based naming
- FlashSnap, including FastResync and disk group split/join functionality, as a separately licensed feature set
- Cluster functionality (support for two nodes only), as a separately licensed feature set


One key feature that is not supported in the current release of VxVM for Linux is boot disk encapsulation and mirroring. A rootability patch will be available before the next release.



Supported Features: VxFS
This release of VxFS for Linux supports almost all of the features that are available on Solaris, including:
- 1 TB maximum file system size
- 1 TB maximum file size (up to 2 TB for sparse files)
- Journaling and fast file system recovery
- Dynamic metadata allocation and extent-based allocation
- Online file system resizing
- Parallel file system checking and repair
- Quotas
- File system snapshots
- Storage checkpoints
- QuickLog
Features that are not supported in this release of VxFS for Linux include:
- Quick I/O
- Access control lists (ACLs)



VxVM and VxFS for Linux: Installation Prerequisites


All of the recommended preinstallation practices for VxVM and VxFS on Solaris also apply to installing these products on Linux. This section highlights differences in installation prerequisites for the Linux operating system.

Product Versions and Supported Kernels
VxVM and VxFS for Linux are available through VERITAS Foundation Suite and VERITAS Foundation Suite HA for Linux. VxVM and VxFS are not available as separately licensed products for the Linux operating system. VxVM and VxFS for Linux currently support the UP, SMP, and Enterprise versions of the following RedHat Linux kernels:

VERITAS Product Version: VERITAS Foundation Suite 2.0 for Linux, which includes VxVM 3.2 Update 1 and VxFS 3.4 Update 1
Supported Linux Kernels: RedHat Linux 7.2 with the 2.4.7-10 kernel; RedHat Advanced Server with the 2.4.9-e.3 kernel

Note: VERITAS Foundation Suite HA includes the same versions of VxVM and VxFS and adds VERITAS Cluster Server (VCS).


Obtaining RedHat Kernels
You can obtain the appropriate RedHat kernel for VERITAS Foundation Suite for Linux from the following site:
ftp://ftp.redhat.com/pub/redhat/support/enterprise/isv/ kernel-archive/

A password is not needed to access this site.

Installing Linux Patches
Installing the patch Command
If not already present on the system, install the patch RPM, which contains the patch command, before installing VxVM or VxFS on a RedHat Advanced Server system. The patch RPM, patch-2.5.4-10.i386.rpm, is available in the RedHat/RPMS directory on the RedHat Advanced Server CD-ROM, redhatlinux_i3862.1as#1.

Installing Required Patches
There is an NFS kernel bug in the Red Hat 7.2 kernel that affects VxFS. If you are using your system as a server and you want to export VxFS file systems using NFS, you must install a Linux kernel patch to ensure that VxFS operates correctly. Install the corrective binary patch package to replace the problematic nfsd module. The patch restores the original nfsd module when it is deinstalled. The nfsd packages, including the source code patches for the Red Hat 7.2 kernel 2.4.7-10, are:
- 2.4.7-10: nfsdfix-2.4.7-10.i686.rpm
- 2.4.7-10smp: nfsdfix-smp-2.4.7-10.i686.rpm
- 2.4.7-10enterprise: nfsdfix-enterprise-2.4.7-10.i686.rpm
The nfsd patches are available in compressed tar file format from the VERITAS anonymous FTP site:
ftp://ftp.veritas.com/pub/support/fst.1.1.linux.nfsdfix.tar.gz
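Once you have downloaded the archive, you would unpack it and install the package that matches your kernel. A sketch assuming the uniprocessor 2.4.7-10 kernel; substitute the smp or enterprise package listed above as appropriate:

# tar -zxvf fst.1.1.linux.nfsdfix.tar.gz
# rpm -ihv nfsdfix-2.4.7-10.i686.rpm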

See the VERITAS File System Release Notes for information on obtaining, installing, and deinstalling Linux kernel patches.

Staying Informed
Before installing VxVM or VxFS for Linux, read the product release notes and installation guides that are available on the VERITAS CD-ROM. These documents contain important information about specific systems and configurations. For the latest information about patches and other issues, visit the VERITAS Support Web site at http://support.veritas.com.



Confirming Sufficient Memory and Space
Before installing VxVM and VxFS for Linux, you should confirm that your system has enough memory and free space to accommodate the installation. A minimum of 512 MB of RAM is recommended. Free space requirements are detailed below.

VxVM for Linux Space Requirements
The following table shows the approximate minimum space requirements for each package and for each file system:

Package     Contents                            Size    File System
VRTSvxvm    Driver and utilities                18 MB   12 MB in /; 6 MB in /usr
VRTSvlic    Licensing utilities                 3 MB    2.5 MB in /; 500K in /opt
VRTSvmman   Manual pages                        300K    /opt
VRTSvmdoc   Documentation                       6 MB    /opt
VRTSob      VERITAS Enterprise Administrator    21 MB   /opt
VRTSobgui   GUI                                 81 MB   /opt
VRTSvmpro   VxVM provider for VEA               8 MB    /opt
VRTSfspro   VxFS provider for VEA               4 MB    /opt


Total minimum space requirements:
- 14.5 MB in /
- 6 MB in /usr
- 120.8 MB in /opt

VxFS for Linux Space Requirements
The VxFS packages have the following minimum space requirements:
- 1 MB in /etc
- 5 MB in /lib
- 1 MB in /sbin
- 7 MB in /usr

Checking the Disks on Your System
Before installing VxVM, you should check the disks installed on your system to ensure that all disks are detected by the operating system and are functioning normally. On Linux, you can check your disks by running:
# fdisk -l

Obtaining a License Key
The process to obtain a license key on Linux is the same process as on Solaris. Ensure that you have the appropriate license keys before you install VxVM and VxFS for Linux. As on Solaris, you use the vxlicrep command to display installed licenses and the vxlicinst command to add a new license.
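For example, to review the keys already on the system and then add a new one (vxlicinst typically prompts you for the key string):

# vxlicrep
# vxlicinst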



Installing VxVM and VxFS on Linux


The rpm Command
Adding the VxVM and VxFS packages on Linux is very similar to the process on Solaris. However, on Linux, you use the RedHat package manager, rpm, to add the required packages (Solaris uses pkgadd -d):
rpm -ihv package_name

The -i option signifies installation mode. You use the -h and -v options to format the installation output.

VxVM and VxFS Packages
The names of the VxVM and VxFS packages are the same on Linux as on Solaris. However, on Linux, the package names are appended with package version numbers and .rpm; for example, the VRTSvlic package name is:
VRTSvlic-3.00-007.i386.rpm

Adding VxVM and VxFS for Linux Packages
To add the VxVM and VxFS software packages:
1 Log on as superuser.
2 Mount the VERITAS CD-ROM and change to the mount point directory:
# mount -o ro /dev/cdrom /mnt
# cd /mnt
In this example, /dev/cdrom is the default device file for the CD-ROM, and /mnt is the mount point.


3 Add the VxVM and VxFS packages by using the rpm command.
a Add the VERITAS licensing package:
# rpm -ihv VRTSvlic-3.00-007.i386.rpm
b Add the VxVM package, followed by the documentation and manual page packages:
# rpm -ihv VRTSvxvm-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSvmdoc-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSvmman-3.2-update1_GA.i686.rpm
c Add the VERITAS Enterprise Administrator packages if you plan to run the VEA GUI:
# rpm -ihv VRTSob-3.0.2-261.i686.rpm
# rpm -ihv VRTSobgui-3.0.2-261.i686.rpm
# rpm -ihv VRTSvmpro-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSfspro-3.4.2-R7_GA.i686.rpm
d Add the VxFS packages. Add the VxFS software package before the optional documentation package:
# rpm -ihv VRTSvxfs-3.4.2-R7_GA_2.4.7-10.i686.rpm
# rpm -ihv VRTSfsdoc-3.4.2-R7_GA.i686.rpm


Verifying Package Installation
To verify package installation on the system, you can use the rpm command to display information about installed packages:
rpm -q[al] package_name

For example, to verify that the VRTSvxvm package is installed:


# rpm -q VRTSvxvm
VRTSvxvm-3.2-update1_GA.i686

The -al option lists detailed information about the package.

Running vxinstall
To set up and configure VxVM for the first time, you run the vxinstall utility. The vxinstall process on Linux is the same process as on Solaris. However, this release of VxVM on Linux does not support rootability. Therefore, you cannot encapsulate your boot disk when you run vxinstall.



Operating VxVM on Linux


Most functional and operational features of VxVM on Linux are the same as on Solaris. This section provides general operational information and highlights some key differences.

Location of VxVM Commands
Most VxVM commands are installed in the directories:
- /usr/sbin
- /etc/vx/bin
- /usr/lib/vxvm/bin
Add these directories to your PATH environment variable to access the commands.

Location of VxVM Manual Pages
Online manual pages for all VxVM commands are installed in the directory:
/opt/VRTS/man

Add this directory to your MANPATH environment variable to access the VxVM manual pages.

Administering the Device Discovery Layer
When adding a new disk array, Linux requires a reboot to ensure that the operating system detects any newly added disks.
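One possible sequence after cabling a new array, assuming the standard Linux reboot command; after the system comes back up, have VxVM rescan and list the devices:

# shutdown -r now
(after the reboot)
# vxdctl enable
# vxdisk list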


Changing Tunable Parameters
On Linux, to modify the VxVM tunable parameters, you can use the /proc file system or the sysctl command. To change the value of a tunable parameter by using the sysctl command, you type:
# sysctl -w vxvm.vxio.tunable=value
Alternatively, if the /proc file system is enabled, add the new value into the appropriate entry under /proc:
# echo value > /proc/sys/vxvm/vxio/tunable
For example, either of the following commands sets the value of vol_maxio to 4:
# sysctl -w vxvm.vxio.vol_maxio=4

or
# echo 4 > /proc/sys/vxvm/vxio/vol_maxio
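You can read the current value back through either interface; with no -w option, sysctl simply reports the value:

# sysctl vxvm.vxio.vol_maxio
# cat /proc/sys/vxvm/vxio/vol_maxio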

Differences in Tunable Parameters Between Linux and Solaris
Some tunable parameters on Linux are set to different default values than on Solaris. The following table lists differences in Linux and Solaris tunable parameter defaults:

Parameter             Linux Default            Solaris Default
vol_max_vol           255 volumes              131071 volumes
vol_maxio             8 sectors (4K)           2048 sectors (1 MB)
vol_maxspecialio      8 sectors (4K)           2 MB (1024K)
voliomem_chunk_size   4K                       64K
voliomem_maxpool_sz   4 MB                     128 MB
voliot_iobuf_limit    131072 bytes (128K)      4194304 bytes (4 MB)
voliot_iobuf_max      65536 bytes (64K)        1048576 bytes (1 MB)



Operating VxFS on Linux


Most functional and operational features of VxFS on Linux are the same as on Solaris. This section provides general operational information and highlights some key differences.

Location of VxFS Commands
VxFS commands are installed in the directories:
- /sbin
- /usr/lib/fs/vxfs
- /opt/VRTS/bin
Add these directories to your PATH environment variable to access the commands.

Location of VxFS Manual Pages
Online manual pages for all VxFS commands are installed in the directory:
/opt/VRTS/man

Add this directory to your MANPATH environment variable to access the VxFS manual pages.

Running VxFS-Specific Commands
On Linux, the file system selection switch is -t. Use the -t switch to specify a VxFS file system in administrative commands. For example, to create and mount a VxFS file system:
# mkfs -t vxfs /dev/vx/rdsk/diskgroup/volume 500m

# mount -t vxfs /dev/vx/dsk/diskgroup/volume /mount_point

Note: On Solaris, the file system selection switch is -F.

Mounting a File System Automatically
To mount a file system automatically at boot time, you add an entry for the file system in the /etc/fstab file (see the sample entry at the end of this section).

Unsupported Command Options
This release of VxFS for Linux does not support the following:
- The ioerror option of the VxFS mount command
- The nodev and noexec options of the mount command
- Forced unmount functionality

Other Administrative Notes
When operating VxFS in a Linux environment, additional notes about this release include:
- Some directories cannot be VxFS file systems: The following directories cannot be VxFS file systems: / (root), /boot, /etc, /lib, /var, and /usr.
- Swap files are not supported: Linux allows swap areas to be created in files, but VxFS does not support this functionality. You must use physical devices instead.
- You must mount snapshots as read-only: When mounting snapshots, you must explicitly specify the ro (read-only) option. The generic Linux mount command by default mounts file systems as rw (read-write) if neither ro nor rw is specified. VxFS has no way to determine whether the rw option was specified or selected by default.
- VxFS cannot coexist with FreeVxFS: FreeVxFS is a Linux file system that has read-only support for VxFS file systems created on UnixWare. You cannot mount VxFS file systems if the freevxfs module is loaded. To determine whether you have FreeVxFS on your system, type:
# lsmod | grep freevxfs
If any file systems are mounted using FreeVxFS, unmount them and remove the freevxfs module by using the rmmod command:
# rmmod freevxfs
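As noted under Mounting a File System Automatically above, boot-time mounts go in /etc/fstab. An illustrative entry; the disk group datadg, volume datavol, and mount point /data are hypothetical names:

/dev/vx/dsk/datadg/datavol  /data  vxfs  defaults  1 2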



Summary
This appendix described the differences that exist between VERITAS Foundation Suite components (VERITAS Volume Manager and VERITAS File System) on Linux and on Solaris. This appendix highlighted key differences in installation, operation, and configuration.

Additional Resources
VxVM for Linux
- VERITAS Volume Manager Administrator's Guide: This guide provides information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
- VERITAS Volume Manager Installation Guide: This guide provides detailed procedures for installing and initializing VERITAS Volume Manager and VERITAS Enterprise Administrator.
- VERITAS Volume Manager User's Guide (VERITAS Enterprise Administrator): This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager and VERITAS File System.
- VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager.


- VERITAS Volume Manager Troubleshooting Guide: This guide provides information about common troubleshooting procedures for VERITAS Volume Manager.
- VERITAS FlashSnap Point-In-Time Copy Solutions Administrator's Guide: This guide provides information on using FastResync, volume snapshots, and disk group split/join functionality to perform offline and off-host processing.
- VERITAS Volume Manager Hardware Notes: This document provides specific information about VERITAS Volume Manager supported hardware configurations.
- Online manual pages for VxVM commands in /opt/VRTS/man

VxFS for Linux
- VERITAS File System Administrator's Guide: This guide provides information on procedures and concepts involving file system management and system administration using VERITAS File System.
- VERITAS File System Installation Guide: This guide provides detailed procedures for installing and initializing VERITAS File System and VERITAS Enterprise Administrator.
- VERITAS File System Release Notes: This document provides software version release information for VERITAS File System.
- Online manual pages for VxFS commands in /opt/VRTS/man

Other Resources
- VERITAS technical support Web site at http://support.veritas.com: The VERITAS Support Web site provides more information about VERITAS Volume Manager and VERITAS File System and includes support services, technical notes, patches, alerts, and e-mail notification services.


Glossary

A
access control list (ACL)
A list of users or groups who have access privileges to a specified file. A file may have its own ACL or may share an ACL with other files. ACLs allow detailed access permissions for multiple users and groups.

Active/Active disk arrays
This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation.

Active/Passive disk arrays
This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.

agent
A process that manages predefined VERITAS Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.

alert
An indication that an error or failure has occurred on an object on the system. When an object fails or experiences an error, an alert icon appears.

alert icon
An icon that indicates that an error or failure has occurred on an object on the system. Alert icons usually appear in the status area of the main window and on the affected object's group icon.

Alert Monitor
A window that provides information about objects that have failed or experienced errors.

allocation unit
A basic structural component of VxFS. The VxFS Version 4 file system layout divides the entire file system space into fixed size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks.

associate
The process of establishing a relationship between Volume Manager objects; for example, a subdisk that has been created and defined as having a starting point within a plex is referred to as being associated with that plex.

associated plex
A plex associated with a volume.

associated subdisk
A subdisk associated with a plex.

asynchronous writes
A delayed write in which the data is written to a page in the system's page cache, but is not written to disk before the write returns to the caller. This improves performance, but carries the risk of data loss if the system crashes before the data is flushed to disk.

atomic operation
An operation that either succeeds completely or fails and leaves everything as it was before the operation was started. If the operation succeeds, all aspects of the operation take effect at once and the intermediate states of change are invisible. If any aspect of the operation fails, then the operation aborts without leaving partial changes.

attached
A state in which a VxVM object is both associated with another object and enabled for use.


B
block The minimum unit of data transfer to or from a disk or array.
Block-Level Incremental Backup (BLI Backup) A VERITAS backup capability that does not store and retrieve entire files. Instead, only the data blocks that have changed since the previous backup are backed up.
boot disk A disk used for booting purposes. This disk may be under VxVM control.
browse dialog box A dialog box that is used to view and/or select existing objects on the system. Most browse dialog boxes consist of a tree and grid.
buffered I/O During a read or write operation, data usually goes through an intermediate file system buffer before being copied between the user buffer and disk. If the same data is repeatedly read or written, this file system buffer acts as a cache, which can improve performance. See unbuffered I/O and direct I/O.
button A window control that the user clicks to initiate a task or display another object (such as a window or menu).

C
CFS VERITAS Cluster File System.
check box A control button used to select optional settings. A check mark usually indicates that a check box is selected.
children Objects that belong to an object group.
clean node shutdown The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased.
cluster A set of host machines (nodes) that shares a set of disks.
cluster file system A VxFS file system mounted on a selected volume in cluster (shared) mode.
cluster mounted file system A shared file system that enables multiple hosts to mount and perform file operations on the same file. A cluster mount requires a shared storage device that can be accessed by other cluster mounts of the same file system. Writes to the shared device can be done concurrently from any host on which the cluster file system is mounted. To be a cluster mount, a file system must be mounted using the mount -o cluster option. See local mounted file system.
cluster manager An externally provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership.
cluster-shareable disk group A disk group in which the disks are shared by multiple hosts (also referred to as a shared disk group).
column A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.
command log A log file that contains a history of VEA tasks performed in the current session and previous sessions. Each task is listed with the task originator, the start/finish times, the task status, and the low-level commands used to perform the task.
concatenation A layout style characterized by subdisks that are arranged sequentially and contiguously.


configuration copy A single copy of a configuration database.
configuration database A set of records containing detailed information on existing Volume Manager objects (such as disk and volume attributes). A single copy of a configuration database is called a configuration copy.
contiguous file A file in which data blocks are physically adjacent on the underlying media.
CVM The cluster functionality of VERITAS Volume Manager.
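As the cluster mounted file system entry notes, a cluster mount is requested with the -o cluster mount option. A minimal sketch, in which the disk group, volume, and mount point names are illustrative:

    # Mount a VxFS file system in shared (cluster) mode on this node.
    # datadg, datavol, and /data are example names.
    mount -F vxfs -o cluster /dev/vx/dsk/datadg/datavol /data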

D
data blocks Blocks that contain the actual data belonging to files and directories.
data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO log volume. Both a DCO object and a DCO log volume must be associated with a volume to implement Persistent FastResync on that volume.
data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.
data synchronous writes A form of synchronous I/O that writes the file data to disk before the write returns, but only marks the inode for later update. If the file size changes, the inode will be written before the write returns. In this mode, the file data is guaranteed to be on the disk before the write returns, but the inode modification times may be lost if the system crashes.
DCO log volume A special volume that is used to hold Persistent FastResync change maps.
defragmentation Reorganizing data on disk to keep file data blocks physically adjacent so as to reduce access times.
detached A state in which a VxVM object is associated with another object, but not enabled for use.
device name The device name or address used to access a physical disk, such as c0t0d0s2. The c#t#d#s# syntax identifies the controller, target address, disk, and slice (or partition). In a SAN environment, it is more convenient to use enclosure-based naming, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2). The term disk access name can also be used to refer to a device name.
dialog box A window in which the user submits information to VxVM. Dialog boxes can contain selectable buttons and/or fields that accept information.
direct extent An extent that is referenced directly by an inode.
direct I/O An unbuffered form of I/O that bypasses the file system's buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer. See buffered I/O and unbuffered I/O.
dirty region logging The procedure by which the Volume Manager monitors and logs modifications to a plex. A bitmap of changed regions is kept in an associated subdisk called a log subdisk.
disabled path A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller.


discovered direct I/O Discovered Direct I/O behavior is similar to direct I/O and has the same alignment constraints, except writes that allocate storage or extend the file size do not require writing the inode changes before returning to the application.
disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier.
disk access name The name used to access a physical disk, such as c0t0d0. The c#t#d#s# syntax identifies the controller, target address, disk, and partition. The term device name can also be used to refer to the disk access name.
disk access records Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by the Volume Manager in deciding how to access and manipulate the disk that is defined by the disk access record.
disk array A collection of disks logically arranged into an object. Arrays tend to provide benefits such as redundancy or improved performance.
disk array serial number This is the serial number of the disk array. It is usually printed on the disk array cabinet or can be obtained by issuing a vendor-specific SCSI command to the disks on the disk array. This number is used by the DMP subsystem to uniquely identify a disk array.
disk controller The controller (HBA) connected to the host or the disk array that is represented as the parent node of the disk by the operating system is called the disk controller by the multipathing subsystem of Volume Manager. For example, if a disk is represented by the device name /devices/sbus@1f,0/QLGC,isp@2,10000/sd@8,0:c, then the disk controller for the disk sd@8,0:c is QLGC,isp@2,10000. This controller (HBA) is connected to the host.
disk enclosure An intelligent disk array that usually has a backplane with a built-in Fibre Channel loop, and which permits hot-swapping of disks.
disk group A collection of disks that are under VxVM control and share a common configuration. A disk group configuration is a set of records containing detailed information on existing Volume Manager objects (such as disk and volume attributes) and their relationships. Each disk group has an administrator-assigned name and an internally defined unique ID. The root disk group (rootdg) is a special private disk group that always exists.
disk group ID A unique identifier used to identify a disk group.
disk ID A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved.
disk media name A logical or administrative name chosen for the disk, such as disk03. The term disk name is also used to refer to the disk media name.
disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.


disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name.
dissociate The process by which any link that exists between two Volume Manager objects is removed. For example, dissociating a subdisk from a plex removes the subdisk from the plex and adds the subdisk to the free space pool.
dissociated plex A plex dissociated from a volume.
dissociated subdisk A subdisk dissociated from a plex.
distributed lock manager A lock manager that runs on different systems and ensures consistent access to distributed resources.
dock To separate or attach the main window and a subwindow.
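The direct I/O entries above can also be related to administration: direct I/O semantics can be requested for a whole VxFS file system with the mincache and convosync mount options. A minimal sketch; the device and mount point are examples only:

    # Mount a VxFS file system so that reads and writes bypass the
    # page cache (direct I/O semantics); names are illustrative.
    mount -F vxfs -o mincache=direct,convosync=direct \
        /dev/vx/dsk/datadg/datavol /data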

E
enabled path A path to a disk that is available for I/O.
encapsulation A process that converts existing partitions on a specified disk to volumes. If any partitions contain file systems, /etc/vfstab entries are modified so that the file systems are mounted on volumes instead. Encapsulation is not applicable on some systems.
enclosure A disk array.
enclosure-based naming An alternative disk naming method, beneficial in a SAN environment, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2).
extent A group of contiguous file system data blocks that are treated as a unit. An extent is defined by a starting block and a length.
extent attributes The extent allocation policies associated with a file.
external quotas file A quotas file (named quotas) must exist in the root directory of a file system for quota-related commands to work. See quotas file and internal quotas file.

F
fabric mode disk A disk device that is accessible on a Storage Area Network (SAN) through a Fibre Channel switch.
FastResync A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism.
Fibre Channel A collective name for the fiber optic technology that is commonly used to set up a Storage Area Network (SAN).
file system A collection of files organized together into a structure. The UNIX file system is a hierarchical structure consisting of directories and files.
file system block The fundamental minimum size of allocation in a file system. This is equivalent to the ufs fragment size.
fileset A collection of files within a file system.


fixed extent size An extent attribute associated with overriding the default allocation policy of the file system.
free disk pool Disks that are under Volume Manager control, but do not belong to a disk group.
free space An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other Volume Manager object.
free subdisk A subdisk that is not associated with any plex and has an empty putil[0] field.

G
gap A disk region that does not contain Volume Manager objects (subdisks).
GB Gigabyte (2^30 bytes, or 1024 megabytes).
graphical view A window that displays a graphical view of objects. In VEA, the graphical views include the Object View window and the Volume Layout Details window.
grid A tabular display of objects and their properties. The grid lists Volume Manager objects, disks, controllers, or file systems. The grid displays objects that belong to the group icon that is currently selected in the object tree. The grid is dynamic and constantly updates its contents to reflect changes to objects.
group icon The icon that represents a specific object group.
GUI Graphical User Interface.

H
hard limit The hard limit is an absolute limit on system resources for individual users for file and data block usage on a file system. See quota.
host A machine or system.
hostid A string that identifies a host to the Volume Manager. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups.
hot relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group.
hot swap Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system.
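Hot relocation needs designated spare disks (or free space) to relocate to. As a sketch, a disk can be marked as a spare from the command line with vxedit; the disk group and disk media names below are examples:

    # Designate disk03 in datadg as a hot-relocation spare, then
    # confirm the spare flag in the disk listing.
    # (datadg and disk03 are example names.)
    vxedit -g datadg set spare=on disk03
    vxdisk list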

I
I/O clustering The grouping of multiple I/O operations to achieve better performance.
indirect address extent An extent that contains references to other extents, as opposed to file data itself. A single indirect address extent references indirect data extents. A double indirect address extent references single indirect address extents.
indirect data extent An extent that contains file data and is referenced via an indirect address extent.
initiating node The node on which the system administrator is running a utility that requests a change to Volume Manager objects. This node initiates a volume reconfiguration.


inode A unique identifier for each file within a file system, which also contains metadata associated with that file.
inode allocation unit A group of consecutive blocks that contain inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map.
intent logging A logging scheme that records pending changes to the file system structure. These changes are recorded in a circular intent log file.
internal quotas file VxFS maintains an internal quotas file for its internal usage. The internal quotas file maintains counts of blocks and inodes used by each user. See quotas and external quotas file.
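Because pending structural changes are recorded in the intent log, a VxFS file system is normally recovered by replaying that log rather than by a full structural check. A minimal sketch, where the volume path is an example:

    # Default fsck on VxFS replays the intent log (fast recovery);
    # -o full forces a complete structural check when needed.
    # The device path is illustrative.
    fsck -F vxfs /dev/vx/rdsk/datadg/datavol
    fsck -F vxfs -o full -y /dev/vx/rdsk/datadg/datavol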

J
JBOD The common name for an unintelligent disk array which may, or may not, support the hot-swapping of disks. The name is derived from "just a bunch of disks."

K
K Kilobyte (2^10 bytes, or 1024 bytes).

L
large file A file larger than 2 gigabytes. VxFS supports files up to two terabytes in size.
large file system A file system more than 2 gigabytes in size. VxFS supports file systems up to 32 terabytes in size.
latency For file systems, this typically refers to the amount of time it takes a given file system operation to return to the user.
launch To start a task or open a window.
local mounted file system A file system mounted on a single host. The single host mediates all file system writes to storage from other clients. To be a local mount, a file system cannot be mounted using the mount -o cluster option. See cluster mounted file system.
log plex A plex used to store a RAID-5 log. The term log plex may also be used to refer to a Dirty Region Logging plex.
log subdisk A subdisk that is used to store a dirty region log.

M
main window The main VEA window. This window contains a tree and grid that display volumes, disks, and other objects on the system. The main window also has a menu bar and a toolbar.
master node A node that is designated by the software as the master node. Any node is capable of being the master node. The master node coordinates certain Volume Manager operations.
mastering node The node to which a disk is attached. This is also known as a disk owner.
MB Megabyte (2^20 bytes, or 1024 kilobytes).
menu A list of options or tasks. A menu item is selected by pointing to the item and clicking the mouse.
menu bar A bar that contains a set of menus for the current window. The menu bar is typically placed across the top of a window.
metadata Structural data describing the attributes of files on a disk.


mirror A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror is one copy of the volume with which the mirror is associated. The terms mirror and plex can be used synonymously.
mirroring A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts.
multipathing Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality.
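As a sketch of how mirrors (plexes) are created in practice, vxassist can build a mirrored volume with a dirty region log and add further mirrors later; the disk group and volume names are examples:

    # Create a two-plex mirrored volume with a dirty region log,
    # then add a third mirror to it. Names are illustrative.
    vxassist -g datadg make datavol 1g layout=mirror,log nmirror=2
    vxassist -g datadg mirror datavol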

N
node In an object tree, a node is an element attached to the tree. In a cluster environment, a node is a host machine in a cluster.
node abort A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations.
node join The process through which a node joins a cluster and gains access to shared disks.
nonpersistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory.

O
object An entity that is defined to and recognized internally by the Volume Manager. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects: one for the physical aspect of the disk and the other for the logical aspect.
object group A group of objects of the same type. Each object group has a group icon and a group name. In VxVM, object groups include disk groups, disks, volumes, controllers, free disk pool disks, uninitialized disks, and file systems.
object location table (OLT) The information needed to locate important file system structural elements. The OLT is written to a fixed location on the underlying media (or disk).
object location table replica A copy of the OLT in case of data corruption. The OLT replica is written to a fixed location on the underlying media (or disk).
object tree A dynamic hierarchical display of Volume Manager objects and other objects on the system. Each node in the tree represents a group of objects of the same type.
Object View Window A window that displays a graphical view of the volumes, disks, and other objects in a particular disk group. The objects displayed in this window are automatically updated when object properties change. This window can display detailed or basic information about volumes and disks.

P
page file A fixed-size block of virtual address space that can be mapped onto any of the physical addresses available on a system.
parity A calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is also calculated by performing an exclusive OR (XOR) procedure on the data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be recreated from the remaining data and the parity.
parity stripe unit A RAID-5 volume storage region that contains parity information. The data contained in the parity stripe unit can be used to help reconstruct regions of a RAID-5 volume that are missing because of I/O or disk failures.
partition The standard division of a physical disk device, as supported directly by the operating system and disk drives.
path When a disk is connected to a host, the path to the disk consists of the Host Bus Adapter (HBA) on the host, the SCSI or Fibre Channel cable connector, and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/Os for that disk onto the remaining (alternate) paths.
pathgroup For disks that are not multipathed by vxdmp, VxVM sees each path as a disk. In such cases, all paths to the disk can be grouped so that only one of the paths from the group is made visible to VxVM.
persistent FastResync A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO log volume on disk.
persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging.
physical disk The underlying storage device, which may or may not be under Volume Manager control.
plex A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each plex is one copy of the volume with which the plex is associated. The terms mirror and plex can be used synonymously.
popup menu A context-sensitive menu that only appears when you click on a specific object or area.
preallocation The preallocation of space for a file so that disk blocks will physically be part of a file before they are needed. Enabling an application to preallocate space for a file guarantees that a specified amount of space will be available for that file, even if the file system is otherwise out of space.
primary fileset A fileset that contains the files that are visible and accessible to users.
primary path In Active/Passive type disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller.
private disk group A disk group in which the disks are accessed by only one specific host.
private region A region of a physical disk used to store private, structured Volume Manager information. The private region contains a disk header, a table of contents, and a configuration database. The table of contents maps the contents of the disk. The disk header contains a disk ID. All data in the private region is duplicated for extra reliability.


properties window A window that displays detailed information about a selected object.
public region A region of a physical disk managed by the Volume Manager that contains available space and is used for allocating subdisks.

Q
Quick I/O file A regular VxFS file that is accessed using the ::cdev:vxfs: extension.
Quick I/O for Databases Quick I/O is a VERITAS File System feature that improves database performance by minimizing read/write locking and eliminating double buffering of data. This allows online transactions to be processed at speeds equivalent to that of using raw disk devices, while keeping the administrative benefits of file systems.
QuickLog VERITAS QuickLog is a high-performance mechanism for receiving and storing intent log information for VxFS file systems. QuickLog increases performance by exporting intent log information to a separate physical volume.
quotas Quota limits on system resources for individual users for file and data block usage on a file system. See hard limit and soft limit.
quotas file The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See quotas, internal quotas file, and external quotas file.
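As a sketch of the VxFS quota commands these entries refer to, assuming an empty quotas file already exists in the root directory of the file system (the mount point /data and user jsmith are examples):

    # Turn on quotas, edit per-user limits, and report usage.
    # /data and jsmith are example names.
    vxquotaon /data
    vxedquota jsmith
    vxrepquota /data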
R
radio buttons A set of buttons used to select optional settings. Only one radio button in the set can be selected at any given time. These buttons toggle on or off.
RAID A Redundant Array of Independent Disks (RAID) is a disk array set up with part of the combined storage capacity used for storing duplicate information about the data stored in that array. This makes it possible to regenerate the data if a disk failure occurs.
read-writeback mode A recovery mode in which each read operation recovers plex consistency for the region covered by the read. Plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes.
reservation An extent attribute associated with preallocating space for a file.
root configuration The configuration database for the root disk group. This is special in that it always contains records for other disk groups, which are used for backup purposes only. It also contains disk records that define all disk devices on the system.
root disk The disk containing the root file system. This disk may be under VxVM control.
root disk group A special private disk group that always exists on the system. The root disk group is named rootdg.
root file system The initial file system mounted as part of the UNIX kernel startup sequence.
root partition The disk region on which the root file system resides.


root volume The VxVM volume that contains the root file system, if such a volume is designated by the system configuration.
rootability The ability to place the root file system and the swap device under Volume Manager control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.

S
scroll bar A sliding control that is used to display different portions of the contents of a window.
Search window The VEA search tool. The Search window provides a set of search options that can be used to search for objects on the system.
secondary path In Active/Passive type disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.
sector A unit of size, which can vary between systems. A sector is commonly 512 bytes.
shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group).
shared volume A volume that belongs to a shared disk group and is open on more than one node at the same time.
shared VM disk A VM disk that belongs to a shared disk group.
slave node A node that is not designated as a master node.
slice The standard division of a logical disk device. The terms partition and slice are sometimes used synonymously.
snapshot file system An exact copy of a mounted file system at a specific point in time. Used to do online backups.
snapped file system A file system whose exact image has been used to create a snapshot file system.
soft limit The soft limit is lower than a hard limit. The soft limit can be exceeded for a limited time. There are separate time limits for files and blocks. See hard limit and quota.
spanning A layout technique that permits a volume (and its file system or database) too large to fit on a single disk to span across multiple physical disks.
sparse plex A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk).
splitter A bar that separates two panes of a window (such as the object tree and the grid). A splitter can be used to adjust the sizes of the panes.
status area An area of the main window that displays an alert icon when an object fails or experiences some other error.
Storage Area Network (SAN) A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage, and interconnecting hardware such as switches, hubs, and bridges.
storage checkpoint A facility that provides a consistent and stable view of a file system or database image and keeps track of modified data blocks since the last checkpoint.
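A snapshot file system is created by mounting an unused device with the snapof option naming the snapped file system. A minimal sketch; the device and mount point names below are examples:

    # Mount a snapshot of the file system mounted at /data onto
    # /snapdata, using snapvol as the snapshot device.
    # All names are illustrative.
    mount -F vxfs -o snapof=/data \
        /dev/vx/dsk/datadg/snapvol /snapdata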


stripe A set of stripe units that occupy the same positions across a series of columns.
stripe size The sum of the stripe unit sizes comprising a single stripe across all columns being striped.
stripe unit Equally sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element.
stripe unit size The size of each stripe unit. The default stripe unit size is 32 sectors (16K). A stripe unit size has also historically been referred to as a stripe width.
striping A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex.
structural fileset A special fileset that stores the structural elements of the file system in the form of structural files. These files define the structure of the file system and are visible only when using utilities such as the file system debugger.
subdisk A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.
super-block A block containing critical information about the file system, such as the file system type, layout, and size. The VxFS super-block is always located 8192 bytes from the beginning of the file system and is 8192 bytes long.
swap area A disk region used to hold copies of memory pages swapped out by the system pager process.
swap volume A VxVM volume that is configured for use as a swap area.
synchronous writes A form of synchronous I/O that writes the file data to disk, updates the inode times, and writes the updated inode to disk. When the write returns to the caller, both the data and the inode have been written to disk.
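As a sketch of how the striping attributes defined above (columns and stripe unit size) are specified when creating a volume, vxassist accepts them directly; the names and sizes are examples:

    # Create a 1 GB striped volume with 4 columns and a 64K stripe
    # unit (the default is 32 sectors, or 16K).
    # datadg and stripevol are example names.
    vxassist -g datadg make stripevol 1g layout=stripe ncol=4 stripeunit=64k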

T
task properties window A window that displays detailed information about a task listed in the Task Request Monitor window.
Task Request Monitor A window that displays a history of tasks performed in the current VEA session. Each task is listed with the task originator, the task status, and the start/finish times for the task.
TB Terabyte (2^40 bytes, or 1024 gigabytes).
throughput For file systems, this typically refers to the number of I/O operations in a given unit of time.
toolbar A set of buttons used to access VEA windows. These include another main window, a task request monitor, an alert monitor, a search window, and a customize window.
transaction A set of configuration changes that succeed or fail as a group, rather than individually. Transactions are used internally to maintain consistent configurations.
tree A dynamic hierarchical display of objects on the system. Each node in the tree represents a group of objects of the same type.


U
ufs The UNIX file system type. Used as a parameter in some commands.
UFS The UNIX file system; derived from the 4.2 Berkeley Fast File System.
unbuffered I/O I/O that bypasses the file system cache to increase I/O performance. This is similar to direct I/O, except when a file is extended; for direct I/O, the inode is written to disk synchronously, while for unbuffered I/O, the inode update is delayed. See buffered I/O and direct I/O.
uninitialized disks Disks that are not under Volume Manager control.

V
VCS VERITAS Cluster Server.
VEA VERITAS Enterprise Administrator graphical user interface.
VM disk A disk that is both under Volume Manager control and assigned to a disk group. VM disks are sometimes referred to as Volume Manager disks or simply disks. In the graphical user interface, VM disks are represented iconically as cylinders labeled D.
VMSA Volume Manager Storage Administrator, an earlier version of the VxVM GUI used prior to VxVM version 3.5.
volboot file A small file that is used to locate copies of the root configuration. The file may list disks that contain configuration copies in standard locations, and can also contain direct pointers to configuration copy locations. volboot is stored in a system-dependent location.
volume A virtual disk or entity that is made up of portions of one or more physical disks. A volume represents an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes.
volume configuration device The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed.
volume device driver The driver that forms the virtual disk drive between the application and the physical device driver level. The volume device driver is accessed through a virtual disk device node whose character device nodes appear in /dev/vx/rdsk, and whose block device nodes appear in /dev/vx/dsk.
volume event log The volume event log device (/dev/vx/event) is the interface through which volume driver events are reported to the utilities.
Volume Layout Window A window that displays a graphical view of a volume and its components. The objects displayed in this window are not automatically updated when the volume's properties change.
Volume to Disk Mapping Window A window that displays a tabular view of volumes and their underlying disks. This window can also display details such as the subdisks and gaps on each disk.
vxconfigd The Volume Manager configuration daemon, which is responsible for making changes to the VxVM configuration. This daemon must be running before VxVM operations can be performed.
vxfs The VERITAS File System type. Used as a parameter in some commands.
VxFS VERITAS File System.
VxVM VERITAS Volume Manager.
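As a sketch of how the volboot file and hostid entries above relate to administration, the vxdctl command can display vxconfigd status and the volboot contents, and set a new hostid; the host name below is an example:

    # Display vxconfigd status and the current volboot contents,
    # then set a new hostid (hostA is an example name).
    vxdctl mode
    vxdctl list
    vxdctl hostid hostA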


Index

Files and Directories


/dev/vx/config 13-6
/dev/vx/dsk 18-25, 18-35
/dev/vx/dsk/usr 18-10
/dev/vx/rdmp 4-22
/dev/vx/rdsk 18-35
/dev/vx/rdsk/usr 18-10
/etc/default/fs 9-7
/etc/default/vxassist 6-26
/etc/dumpdates 10-12
/etc/fs/vxfs 9-6
/etc/group 3-30
/etc/rc2.d/S50isisd 3-24
/etc/rc2.d/S94vxnm-host_infod 18-15
/etc/rc2.d/S94vxnm-vxnetd 18-15
/etc/rc2.d/S95vxvm-recover 18-15
/etc/rcS.d/S25vxvm-sysboot 18-9
/etc/rcS.d/S30rootusr 18-10
/etc/rcS.d/S35vxvm-startup1 18-11
/etc/rcS.d/S40standardmounts 18-12
/etc/rcS.d/S50devfsadm 18-12
/etc/rcS.d/S70buildmnttab 18-12
/etc/rcS.d/S85vxvm-startup2 18-13
/etc/rcS.d/S86vxvm-reconfig 18-13
/etc/system 2-8, 2-30, 2-39, 17-5, 17-29, 18-7, 18-16, 18-25, 18-36
  after encapsulation 17-15
  forceload entries 18-22
  root encapsulation entries 18-22
  saving 14-39, 18-21
  troubleshooting 18-21
  using an alternate 18-23
  VxVM entries in 18-21
/etc/vfstab 2-39, 6-24, 6-46, 7-23, 9-21, 9-22, 9-23, 10-6, 10-12, 17-5, 17-29, 17-44, 18-10, 18-16, 18-25, 18-29, 18-36
  after root encapsulation 17-17
  before root encapsulation 17-16

/etc/volboot 18-9
/etc/vx/bin 3-15
/etc/vx/cntrls.exclude 2-36
/etc/vx/disks.exclude 2-36
/etc/vx/elm 2-25
/etc/vx/enclr.exclude 2-36
/etc/vx/isis/Registry 3-31
/etc/vx/licenses/lic 2-22, 2-25, 18-16, 18-31
/etc/vx/reconfig.d/disk.d/device 17-14
/etc/vx/reconfig.d/disk.d/disk/vtoc 18-42
/etc/vx/reconfig.d/disks.d 2-39
/etc/vx/reconfig.d/state.d 18-13
/etc/vx/reconfig.d/state.d/install-db 18-16, 18-20, 18-36
/etc/vx/volboot 13-4, 13-18, 18-16
  troubleshooting 18-27
/lost+found 11-12
/opt 2-48
/opt/VRTS/man 3-17, 9-8
/opt/VRTS/man/man1 3-17
/opt/VRTS/man/man1m 3-17
/opt/VRTS/man/man4 3-17
/opt/VRTS/man/man7 3-17
/opt/VRTSob/bin 3-23
/opt/VRTSob/bin/vxsvc 3-24, 3-28
/opt/VRTSvxfs/sbin 9-6
/proc 18-12
/sbin/init phase 18-8
  troubleshooting 18-20
/usr 2-39, 2-48, 18-9
/usr/kvm 18-11
/usr/lib/fs/vxfs 9-6
/usr/lib/vxvm/bin 3-15
/usr/sbin 3-15
/var 2-39, 2-48, 18-11, 18-12
/var/adm 18-11, 18-12


/var/run 18-12
/var/vx/isis/command.log 3-12, 8-40
/var/vx/isis/vxisis.lock 3-24, 3-28
/var/vx/isis/vxisis.log 3-24, 3-29
/var/vxvm/tempdb 18-16, 18-33
/VXVM#.#.#-UPGRADE/.start_runed 18-16, 18-20

A
aborting a task 3-10, 8-47
aborting online relayout 8-33
access control lists 11-21, 20-15
  entries 20-17
  setting 20-16
  viewing 20-18
ACTIVE 14-6
active option 16-5
ACTIVE state 15-9, 15-11, 15-16, 15-18, 15-20, 15-22, 16-8, 16-13, 16-23
active/active disk arrays 19-11
active/passive disk array 19-12
adding a disk to a disk group 4-15
  CLI 4-22
  methods 4-17
  VEA 4-18
  vxdiskadm 4-20
adding a log
  CLI 7-14
  VEA 7-13
adding a mirror 7-4
  CLI 7-6
  VEA 7-5
adding a new disk 14-24
adding packages with pkgadd 2-29
allocating storage for volumes 7-24
allocation units 11-14
alternate boot disk
  creating 17-18
  creating in CLI 17-25
  creating in VEA 17-23
  creating in vxdiskadm 17-24
  determining which is booting 17-26
  reasons for creating 17-19
alternate mirror
  booting 17-21
Alternate Pathing driver 17-31
alternate system file
  using 18-23
architecture of VxVM 13-4
array 1-6
  active/active 19-11
  active/passive 19-12
  adding support for 19-5
  excluding support for 19-7
  listing 19-7
  reincluding 19-8
  removing support for 19-6
assigning space for a volume 6-21
atomic-copy resynchronization 14-5, 16-19
autoconfig 4-34
autoimport 4-34, 5-22

B
backing up a file system 10-11, 10-14, 10-18
bad block revectoring 17-6
before-image log 10-19
bitmap 10-20
blkclear mount option 12-13
block clustering 11-5
block device file 6-25
block size
  default 9-17
  setting 9-17
block-based allocation 11-4, 11-5
blocking factor 10-13
blockmap 10-20
boot -a 18-24
boot block 1-4


boot device cannot be opened 18-17
boot disk 1-4, 2-34, 2-39, 2-41
  creating an alternate 17-18
  creating an alternate in CLI 17-25
  creating an alternate in VEA 17-23
  creating an alternate in vxdiskadm 17-24
  creating an emergency 18-43
  encapsulating 18-43
  reasons for creating an alternate 17-19
boot disk encapsulation 2-48, 17-10
boot disk errors 17-20
boot disk failure
  nonencapsulated 18-51
  only disk in rootdg 18-53
  other disks in rootdg 18-55
  protecting against 17-18
boot process 18-4
  /sbin/init phase 18-8
  boot program phase 18-6
  boot PROM phase 18-5
  kernel initialization phase 18-7
  troubleshooting 18-16
boot program 18-6
  troubleshooting 18-19
boot PROM
  troubleshooting 18-17
bootblk 18-5, 18-6, 18-19
booting from alternate mirror 17-21
booting from an alternate root disk 17-26
bsize 9-12

C
changing plex states 16-20
changing the volume layout
  CLI 8-34
  VEA 8-32
character device file 6-25
check_all 19-27
check_disabled 19-27
checking VxFS structure 9-13
clean option 16-5
CLEAN state 16-8, 16-13, 16-22
clearing import locks
  CLI 5-27
clearing task history 3-11
cleartempdir 18-33
CLI commands 3-18
CLI commands in VEA 3-11
cluster 2-15
cluster environment 5-22
Cluster File System 2-10, 2-16, 2-17, 9-8
cluster functionality 2-15
cluster management 4-6
Cluster Volume Manager 2-10
  licensing 2-17
col_switch 7-35
columns 6-10
  changing the number of 8-36
command line interface 3-4, 3-15
command locations 9-6
command log file 3-10, 3-12, 8-40
command syntax for VxFS commands 9-7
complete plex 1-16
concatenated 6-19
Concatenated Pro 6-8, 6-19, 7-46, 7-48
concatenated volume 1-19, 1-20, 6-9
  creating 6-26
concatenation 6-5
  advantages 6-9
  disadvantages 6-9
concat-mirror 7-43, 7-46
condition flags 16-11
  IOFAIL 16-11
  NODEVICE 16-11
  RECOVER 16-12
  REMOVED 16-11


config 4-23

configuration daemon 13-6
  controlling 13-12
configuration database 13-5, 13-6, 15-8
  copies 13-8
  disk data 13-9
  disk group status 13-7
  log entries 13-7
  protecting 14-37
  quotas 13-8
  saving 14-38
  size 13-7
configuring a disk 4-10
consistency checking 9-30, 12-7
  checking in VEA 12-9
console messages 15-6
controller 1-5
  displaying paths for 19-20
  listing 19-18
converting partitions into volumes 17-4
converting to a layered volume
  CLI 8-38
converting UFS to VxFS 11-16
cpio 10-24, 18-44
creating a disk group
  CLI 5-13
  methods 5-9
  VEA 5-10
  vxdiskadm 5-11
creating a file system 9-10
creating a layered volume 7-48
  CLI 7-49
creating a volume 6-16, 16-4
  CLI 6-25
  methods 6-17
  VEA 6-18
creating a volume snapshot 8-14
  CLI 8-21
  methods 8-16
creating an alternate boot disk 17-18
  CLI 17-25
  VEA 17-23
  vxdiskadm 17-24
creating an emergency boot disk 18-43
creating spare disks 5-14
cron 11-41
custom installation 2-46, 2-47
cylinder group 11-13

D
daemons
  starting 18-32
data blocks 10-20
data change object 6-44
data consistency
  maintaining 14-4
data disk encapsulation 17-10
data flow 9-5
data redundancy 6-4
Database Edition for DB2 2-10
Database Edition for Oracle 2-10
Database Edition for Sybase 2-10
database resynchronization 14-11
databases on file systems 2-16
debug mode 18-34
debugging 18-34
default disk media names 4-16
defragmentation 11-32
  scheduling 11-40
defragmenting a file system in VEA 11-41
defragmenting directories 11-37
defragmenting extents 11-34
DEGRADED mode 15-12
degraded plex 15-11
delaylog mount option 12-13


deporting a disk group 5-16
  and renaming 5-16, 5-20
  CLI 5-20
  methods 5-17
  to a new host 5-16, 5-20
  VEA 5-18
  vxdiskadm 5-19
destroying a disk group 5-32
  CLI 5-34
  VEA 5-33
DETACHED state 16-15
devalias 17-21
devfsadm 14-24, 15-19, 19-4
Device 4-31
device discovery 19-4
device discovery layer 19-7
device file 11-15
device naming
  enclosure-based 4-5
  traditional 4-4
device naming scheme 4-4
  selecting 4-7
device node 6-16
devicetag 4-31, 13-11
df 9-28, 10-5
directory fragmentation 11-24
  reporting 11-26
dirty region log bitmaps 14-9
dirty region log size 14-9
dirty region logging 7-11, 14-8, 17-8
DISABLED state 16-15
disabling I/O to a controller 19-23
disk access name 5-5
disk access record 1-13, 5-5
disk array 1-6
  active/active 19-11
  active/passive 19-12
  adding support for 19-5
  excluding support for 19-7
  listing 19-7
  multipathed 1-6
  reincluding 19-8
  removing support for 19-6
disk configuration 4-10
  stages 4-10
disk device naming 4-4
disk encapsulation 2-38, 17-4
disk enclosure 2-35
disk failure 14-13, 15-4
  impact of 14-13
  intermittent 14-13, 15-13
  partial 14-14
  permanent 14-13, 15-13
  resolving intermittent failure 15-23
  resolving permanent failure 15-15
  resolving temporary failure 15-19
  temporary 14-13, 15-13
  volume and plex states before and after 15-9
disk failure handling 15-4
disk failure types 15-13
disk flags 4-34
disk formatting 1-4
disk group 1-11, 5-4
  clearing host locks 5-21
  configuration database data 13-7
  creating 5-9
  creating from CLI 5-13
  creating with VEA 5-10
  creating with vxdiskadm 5-11
  definition 1-12
  deporting 5-16
  deporting in VEA 5-18
  deporting in vxdiskadm 5-19
  destroying 5-32
  destroying in CLI 5-34
  destroying in VEA 5-33
  displaying deported 5-38
  displaying free space in 5-39
  displaying properties for 5-38
  forcing an import 5-22, 5-27
  forcing an import in VEA 5-24


  high availability 1-12, 5-4
  importing and clearing locks in CLI 5-27
  importing and renaming 5-21
  importing and renaming in CLI 5-26
  importing and renaming in VEA 5-24
  importing as temporary in CLI 5-26
  importing as temporary in VEA 5-24
  importing in CLI 5-26
  importing in VEA 5-24
  importing in vxdiskadm 5-25
  moving between systems 5-28
  moving in CLI 5-29
  moving in VEA 5-28
  moving in vxdiskadm 5-29
  purpose 1-12, 5-4
  renaming in CLI 5-31
  renaming in VEA 5-30
  temporary import 5-22
  upgrading the version 5-43
  upgrading the version in CLI 5-44
  versioning 5-40
  viewing information about 5-35
disk group properties
  viewing 5-36
  viewing in CLI 5-37
Disk Group Properties window 5-36
disk group split and join
  licensing 2-17
disk group versions
  supported features 5-42
  unsupported features 5-41
disk groups
  displaying all 4-30
disk header 13-5
disk initialization 2-38, 4-11
disk label 1-4, 13-5
disk media name 1-13, 4-12, 4-15, 5-5, 15-8
  changing 4-46
disk media record 15-7
disk name 4-31
disk naming 4-15
  enclosure-based 2-35
disk naming method
  selecting 2-45
disk properties 4-28
disk records
  after a failure 15-7
  before a failure 15-7
disk replacement 14-23
disk spanning 6-4
disk status
  Deported 4-26
  Disconnected 4-26
  error 4-30
  External 4-26
  Free 4-26
  Imported 4-26
  Not Setup 4-26
  online 4-30
Disk View window 3-7, 4-27, 6-37
disk-naming scheme
  changing 4-9
disks
  adding in CLI 4-22
  adding in VEA 4-18
  adding in vxdiskadm 4-20
  adding new 14-24
  adding to a disk group 4-15
  configuration data 13-9
  displaying detailed information 4-31
  displaying summary information 4-33
  evacuating data 4-39
  excluding from hot relocation 14-20, 14-21
  excluding from hot relocation in VEA 14-18
  excluding from VxVM 2-36
  failing 15-5
  forced removal 15-25
  initialized 4-12
  making available for hot relocation 14-18, 14-20, 14-22
  managing spares in CLI 14-21
  managing spares in vxdiskadm 14-19
  moving empty 4-48


  naming 1-5
  offlining 5-19
  recognizing 14-24
  removing 4-38
  removing in CLI 4-44
  removing in VEA 4-42
  removing in vxdiskadm 4-43
  removing spare designation 14-21
  renaming 4-46
  renaming in CLI 4-47
  renaming in VEA 4-46
  replacement methods 14-25
  replacing 14-23, 15-16
  replacing failed in vxdiskadm 14-27
  replacing in CLI 14-28
  replacing in VEA 14-26
  reserving 14-18, 14-22
  scanning 19-5
  setting up as spare in VEA 14-17
  setting up spares in CLI 14-21
  types 13-5
  uninitialized 4-11
  uninitializing 4-45
  unrelocating 14-29
  unrelocating in CLI 14-32
  unrelocating in VEA 14-30
  unrelocating in vxdiskadm 14-31
  viewing encapsulated 17-13
  viewing in CLI 4-29
  viewing information about 4-25
dissociating a snapshot volume
  CLI 8-25
  VEA 8-20
DMP 19-9
  benefits 19-9
  disabling I/O to a controller 19-23
  displaying controllers 19-18
  displaying nodes 19-22
  displaying paths 19-20
  enabling 19-10
  managing 19-16
  preventing 19-13
  restore daemon 19-27
  restore daemon policies 19-27
  starting the restore daemon 19-27
  stopping the restore daemon 19-28
documentation package 2-13
drlseq 6-32
drvconfig 9-11, 14-24, 15-19
dump devices 18-11
dump level 10-13
dumping a file system 10-14
dynamic multipathing 1-8, 2-35, 13-11, 19-4, 19-9
  benefits 19-9
  disabling I/O to a controller 19-23
  displaying nodes 19-22
  displaying paths 19-20
  enabling 19-10
  enabling I/O to a controller 19-24
  listing controllers 19-18
  managing 19-16
  preventing 2-37, 2-46, 19-13
  restore daemon 19-27
dynamic multipathing management 4-6

E
Edition products 2-10
eeprom 17-21
emergency boot disk 18-44
  booting from 18-46
  creating 18-43
EMPTY state 16-8, 16-13, 16-25
enable option 16-5
ENABLED state 15-9, 15-11, 15-16, 15-18, 15-20, 15-22, 16-15
enabling I/O to a controller 19-23
encapsulate 4-19
encapsulating root
  benefits 17-6
  limitations 17-7
  VEA 17-11
  vxdiskadm 17-12
encapsulating the boot disk 2-48, 18-43


encapsulation 2-38, 2-40, 4-11, 17-4
  effect on /etc/system 17-15
  effect on /etc/vfstab 17-17
  requirements 17-10
  requirements for boot disk 17-10
  requirements for data disk 17-10
  root disk 17-5, 18-36
  unencapsulating root disk 17-28
enclosure 2-35, 4-7
  listing information about 19-25
  renaming 19-25
enclosure-based naming 2-35, 2-45, 4-5
  administering 4-8
  benefits 4-6
error status 4-30, 15-8
evacuating a disk 4-39
  CLI 4-40
  VEA 4-39
  vxdiskadm 4-40
evaluation license 2-18
excluding a disk from hot relocation 14-21
excluding controllers 2-36
excluding disks 2-36
excluding disks from hot relocation 14-18, 14-20
excluding enclosures 2-36
exclusive OR 6-14
expanding a file system 10-7, 10-9
expanding a volume 8-4
expired license
  replacing 18-31
extent 11-6
extent allocation unit state file 11-15
extent allocation unit summary file 11-15
extent fragmentation 11-24
  reporting 11-28
extent size 11-6
extent-based allocation 11-4, 11-6
  benefits 11-8
extents
  defragmenting 11-34

F
fabric mode disks 4-7
FAILED disks 15-5
failed root
  repairing 18-48
FAILING disks 15-5
failing drive
  removing 15-24
failing flag 15-26
FastResync 2-15, 8-17
  licensing 2-17
favorite host
  adding 3-26
  removing 3-27
Fibre Channel 2-35, 4-7
file system
  adding to a volume 6-23, 7-19
  adding to a volume in CLI 7-22
  adding to a volume in VEA 7-20
  data flow 9-5
  defragmenting in VEA 11-41
  mounting in VEA 7-21
  resizing 10-4, 10-8
  resizing with vxresize 8-11
  restoring 10-15
  snapshot 10-18, 10-19
  type-dependent 9-4
  type-independent 9-4
  types 9-4
  UNIX 9-4
  unmounting in VEA 7-21
file system check in VEA 12-9
file system corruption
  troubleshooting 18-29
file system layout 9-16, 11-9
  upgrading 11-10
file system layout version
  displaying 11-11


file system size 10-4
file system structure 11-13
file system type 6-23, 9-26
  displaying 9-26
fileset header file 11-15
find utility 18-44
flags
  disk 4-34
FlashSnap 2-15
forced removal of a disk 15-25
forceload 18-7
forceload entries 18-22
forcing a volume to start 16-19
forcing an unmount 9-25
format 1-5, 10-5
Foundation Suite 2-10
Foundation Suite HA 2-10
Foundation Suite packages
  adding 2-26
Foundation Suite QuickStart 2-10
fragmentation 11-5, 11-23
  controlling 11-23
  directory 11-24
  extent 11-24
  interpreting 11-30
  monitoring 11-25
free disk pool 4-11
free extent map file 11-15
free space
  identifying 9-28
free space pool 4-12
fsadm 10-5, 10-6, 11-23, 11-25, 11-26, 11-28, 11-32
  options 11-33
fscat 10-24
fsck 9-30, 11-20, 11-21, 12-4, 12-7, 15-20
  output 12-10
fsck pass 6-24, 9-23
fstyp 9-26

full 12-8

G
getfacl 20-18

ghost subdisk 18-38
grid 3-6
group name 4-31
growby 8-9
growto 8-9

H
hard limit 20-5
help information in VEA 3-14
high availability 1-8, 2-16, 5-8, 17-6
High Sierra File System 9-5
host
  adding favorites 3-26
  removing favorites 3-27
host ID
  changing 13-19
  changing in volboot 18-28
  clearing at import 5-24
  conflicting 18-28
  obtaining 2-20
host locks
  clearing 5-21
host machine type
  obtaining 2-20
hostid 2-20, 4-31, 13-11
hot relocation 1-8, 14-13
  creating spare disks 5-14
  creating spare disks in VEA 5-14
  creating spare disks in vxdiskadm 5-15
  definition 14-14
  excluding disks 14-20, 14-21
  excluding disks in VEA 14-18
  failure detection 14-15
  making disks available for 14-18, 14-20
  making disks available 14-22


  notification 14-15
  process 14-15
  recovery 14-15
  removing spare designation 14-21
  selecting space 14-16
  unrelocating a disk 14-29
hot-relocation daemons
  starting 18-15
HSFS 9-5

I
I/O daemons 18-13
  starting 18-32
I/O failure
  identifying 15-4
  nonredundant volume 15-4
  redundant volume 15-5
I/O rate
  controlling 8-51
I/O size
  controlling 8-51
imported 4-34
importing a disk group 5-21
  and clearing host locks 5-21
  and clearing locks in CLI 5-27
  and renaming 5-21
  and renaming in CLI 5-26
  as temporary in CLI 5-26
  CLI 5-26
  forcing 5-22
  forcing in CLI 5-27
  methods 5-23
  temporarily 5-22
  VEA 5-24
  vxdiskadm 5-25
importing rootdg temporarily 18-47
init active 16-5
initialization 2-38, 2-40
initialize 4-19
initialize zero 6-20
initialized disks 4-12
initializing a volume 16-4
initializing plexes 16-5
initializing rootdg manually 18-41
inode 11-6
inode allocation unit file 11-15
inode list file 11-15
installboot 18-5, 18-44
Installer utility 2-26
installing Foundation Suite
  verifying package installation 2-31
installing VEA 3-21
installing VxVM 2-33
  first-time setup 2-42
  license keys 2-18
  package space requirements 2-12
  Solaris compatibility 2-4
  vxinstall program 2-42
intent log 12-4, 12-6
  contents 12-6
  size 12-11
intent log replay 12-5
  parallel 12-8
intent logging 12-4
interfaces 3-4
  command line interface 3-4
  VERITAS Enterprise Administrator 3-4
  vxdiskadm 3-4
intermittent disk failure 14-13, 15-13
  resolving 15-23
  resolving for nonredundant volumes 15-23
  resolving for redundant volumes 15-23
invalid UNIX partition 18-19
ioctl functions 16-15
IOFAIL condition flag 15-5
IOFAIL flag 16-11
IOFAIL state 15-21


J
JBOD
  adding 19-8
  listing 19-8
  removing 19-8
journaling 12-4

K
kernel 18-7
kernel file
  troubleshooting 18-19
kernel initialization phase 18-7
kernel issues and VxFS 2-8
kernel logs 13-5
kernel states 16-15
  DETACHED 16-15
  DISABLED 16-15
  displaying 16-6
  ENABLED 16-15

L
label file 11-15

large file
  enabling 9-14
largefiles 9-12, 10-6
largefiles option 9-14
largesize 11-28
layered volume 1-19, 1-20, 6-5, 6-7, 7-38
  advantages 7-42
  changing the layout 8-38
  creating in CLI 7-49
  creating in VEA 7-48
  disadvantages 7-42
  examples of creating 7-52
  fixing 16-27
  layouts 7-43
  preventing creation 6-20
  viewing in CLI 7-53
  viewing in VEA 7-53
layout of a file system 9-16
layouts
  ensuring consistent 18-41
license files 18-32
  replacing 18-32
license key 2-20
  adding 2-22
  entering in vxinstall 2-44
  viewing 2-23
license key files 2-25
license key path 2-25
license keys
  troubleshooting 18-31
licenses
  replacing expired 18-31
licensing 2-18
  checking 13-16
  for evaluation 2-18
  for optional features 2-17
  for Sun A5x00 2-19
  for Sun StorEdge 2-18
  for upgrades 2-18, 17-30
  generating a license key 2-21
  management utilities 2-25
  obtaining a license key 2-20
  viewing license keys 2-23
listing installed packages 2-31
load balancing 1-8, 6-11
log
  adding in CLI 7-14
  adding in VEA 7-13
  removing in CLI 7-15
  removing in VEA 7-13
log file 11-15
log mount option 12-13
log plex 1-16, 14-8
log size
  default 9-18
  maximum 9-18
  minimum 9-18


selecting 9-19 setting 9-18 using large 9-19 using small 9-19 log subdisks 14-8 logdisk 7-31 logging 6-19, 7-11, 14-8 dirty region logging 14-8 for mirrored volumes 7-11 performance 12-14 RAID-5 7-12, 14-10 logging mount options 12-12 logical record size 10-13 logical unit number 1-5 logiosize 12-16 logsize 9-12, 12-11 logtype 6-32, 7-14

mirror=target 7-26 mirror-concat 7-43, 7-44

M
man 3-17

manual pages 2-11, 3-17 maxgrow 6-34 maxsize 6-33 menu bar 3-6 metadata 11-5 mirror 1-15 adding 7-4 adding to a volume in CLI 7-6 adding to existing volume 7-5 booting from alternate 17-21 removing 7-8 removing by disk 7-9 removing by mirror name 7-9 removing by quantity 7-9 removing in CLI 7-10 removing in VEA 7-9 mirror=ctlr 7-26 mirror=disk 7-26 mirror=enclr 7-26

mirrored layout changing 8-37 mirrored volume 1-19, 1-20, 6-12 creating 6-31, 6-32 mirroring 6-5 advantages 6-13 controlling with trigger points 7-50 default behavior 7-51 disadvantages 6-13 enhanced 7-38 mirroring a volume 6-19 mirroring all volumes 7-6 mirroring the boot disk 18-43 mirroring the root disk 17-18 CLI 17-25 errors 17-20 requirements 17-18 VEA 17-23 vxdiskadm 17-24 mirrors adding 6-31 mirror-stripe 7-43, 7-45 mirror-stripe layout 6-7, 7-39 mkdir 7-22 mkfs 7-22, 9-10, 9-11 mkfs options 9-12 bsize 9-12 largefiles 9-12 logsize 9-12 N 9-12 version 9-12 mount 7-22, 9-20, 12-12 mount at boot 6-24 CLI 7-23 mount options 12-12 blkclear 12-12, 12-13 delaylog 12-12, 12-13 log 12-12, 12-13 nodatainlog 12-12, 12-13

Index-14

VERITAS Foundation Suite 3.5 for Solaris


Copyright 2002 VERITAS Software Corporation. All rights reserved.

Index

    quota 20-9
    tmplog 12-12, 12-13
mount point 6-23, 9-20
mounted file systems
    displaying 9-21
mounting a file system 9-20
    automatically 9-22
    VEA 7-21
mounting a snapshot file system 10-20, 10-23
mounting all file systems 9-21
moving a disk 4-48
    CLI 4-48, 4-49
    VEA 4-48
    vxdiskadm 4-49
moving a disk group 5-28
    CLI 5-29
    VEA 5-28
    vxdiskadm 5-29
multipathed disk array 1-6
multipathing
    preventing 2-46
multiported disk array 19-11
multiuser startup scripts 18-15

N

naming disk devices 4-4
naming disks
    defaults 4-15
ncol 6-28, 8-35
NDEV state 15-11
NEEDSYNC 14-6
NEEDSYNC state 16-13
NetBackup 2-6
Network File System 9-5
newfs 7-22
NFS 9-5
nlog 6-32, 7-14
nmirror 6-31, 7-7
noconfig 4-23
nodatainlog mount option 12-13
node 2-15
NODEVICE flag 16-11
NODEVICE state 15-5, 15-11, 15-25, 16-14
NODEVICE status 15-10
nolargefiles option 9-14
nolog 7-31, 12-8
nonencapsulated boot disk failure 18-51
nonredundant volumes
    resolving intermittent failure 15-23
NOPRIV disk 13-5
noraid5log 7-31
nostripe 6-26

O

object location table 11-14
object location table file 11-15
Object Properties window 3-7
object states 16-4
object tree 3-6
off-host processing 2-16
OFFLINE state 16-10
offlining disks 5-19
online disk status 15-8
online ready 4-34
online relayout 8-26
    aborting 8-33
    and log plexes 8-30
    and sparse plexes 8-30
    and volume length 8-30
    and volume snapshots 8-30
    continuing 8-33
    in CLI 8-34
    in VEA 8-32
    monitoring 8-33
    pausing 8-33
    reversing 8-30, 8-33
    supported transformations 8-27


    temporary storage space 8-29
online status 4-30
opt 8-9, 17-7, 17-8
optional features for VxVM and VxFS 2-17
ordered allocation 6-21, 7-29
    order of columns 7-32
    order of mirrors 7-33
    specifying in CLI 7-30
    specifying in VEA 7-30
ordered option 7-30

P
packages 2-11
    installing 2-29
    listing 2-31
    space requirements 2-12
    VEA 3-21
    VEA GUI 2-11
    VEA service 2-11
    VERITAS Enterprise Administrator 2-11
    VERITAS licensing 2-11
    VERITAS Volume Replicator documentation 2-15
    VERITAS Web GUI engine 2-15
    VVR Web Console 2-15
    VxFS component to VEA 2-11
    VxVM component to VEA 2-11
    VxVM documentation 2-11
    VxVM manual pages 2-11
    VxVM software 2-11
parallel log replay 12-8, 12-9
parent task 8-42
parity 1-20, 6-5, 6-14
partial disk failure 14-14
partition tags 1-10
partitions 1-4
    after encapsulation 17-13
    invalid 18-19
PATH 3-23, 9-6, A-11, B-13
paths
    controlled by DMP 19-20

pausing a task 3-10, 8-47
pausing online relayout 8-33
permanent disk failure 14-13, 15-13
    resolving 15-15
    volume states after 15-14
physical disk naming 1-5
physical storage device 1-4
physical storage objects 1-4
pkgadd 2-26, 2-29, 3-23
pkginfo 2-31, 2-32, 3-22
pkgrm 3-22, 17-44, 17-45
plex 1-11, 1-15
    complete 1-16
    definition 1-15
    identifying problems 16-6
    initializing 16-5
    log 1-16
    naming 1-15
    recovering 16-17
    resolving problems 16-16
    sparse 1-16
    troubleshooting 18-25
    types 1-16
plex kernel states 16-6, 16-15
plex problems
    analyzing 16-28
    good plex is known 16-28
    good plex is not known 16-30
plex states 16-6, 16-8
    ACTIVE 16-8
    changing 16-20
    CLEAN 16-8
    displaying 16-6
    EMPTY 16-8
    OFFLINE 16-10
    setting to ACTIVE 16-23
    setting to CLEAN 16-22
    setting to STALE 16-21
    SNAPATT 16-10
    SNAPDONE 16-9
    STALE 16-10


    TEMP 16-10
Preferences window 3-9
preferred plex read policy 7-16
private 4-34
private region 1-9, 2-34, 4-11, 4-35, 5-5, 15-5, 17-4, 17-10
    partition tag 1-10
privlen 4-23
privoffset 4-23
privpaths 13-11
probe-scsi-all 18-17
Process File System 9-5
PROCFS 9-5
projection 6-37
PROM 18-5
protecting the VxVM configuration 14-37
prtvtoc 4-35, 10-5, 13-5, 14-24, 15-20
PTID 8-42
publen 4-23
public region 1-10, 1-13, 2-34, 4-35, 15-5, 17-4, 17-10
    partition tag 1-10
puboffset 4-23
pubpaths 13-11

Q

Quick I/O 2-6, 2-16, 2-17, 9-8
Quick installation 2-47
QuickLog 2-6, 2-14, 2-17, 9-8, 9-19, 12-15
quota commands 20-7
    vxedquota 20-9
    vxquot 20-9
    vxquota 20-9
    vxquotaoff 20-9
    vxquotaon 20-9
    vxrepquota 20-9
quota editor 20-11
quota limits 20-5
    hard limit 20-5
    modifying 20-12
    soft limit 20-5
    time limit 20-5
quota mount option 20-9
quotas 11-21
    benefits 20-4
    enabling 20-11
    setting 20-10
    turning off 20-14
    viewing 20-14
quotas file 11-15, 20-7, 20-10
    external 20-7
    internal 20-7
quotas.grp 20-7, 20-10

R

RAID 6-4, 6-6
RAID levels 6-6
RAID-0 6-6, 6-7
RAID-0+1 6-6, 6-7
RAID-1 6-6, 6-7
RAID-1+0 6-6, 6-7
RAID-5 6-6, 6-7, 6-19
    advantages 6-15
    disadvantages 6-15
    logging 7-12
RAID-5 column 6-14
    default size 6-19
RAID-5 layout
    changing 8-37
RAID-5 log 14-10
RAID-5 volume 1-19, 1-20, 6-14
    creating 6-30
    degraded plex 15-11
    fixing after disk failure 15-12
read policies 7-16
    changing in CLI 7-18
    changing in VEA 7-17
read-writeback synchronization 14-6, 16-19


reassociating a snapshot volume
    CLI 8-24
    VEA 8-19
RECOVER flag 16-12
RECOVER state 15-21
recovering a volume 14-33
    CLI 14-34
    VEA 14-33
recovering plexes 16-17
recovering rootdg 18-47
recovering volumes
    and volume states 15-18
redo log volumes 14-11
redundancy 6-4
redundant volumes
    resolving intermittent failure 15-23
registry settings
    modifying 3-31
relayout 8-26
    aborting 8-33
    pausing 8-33
    resuming 8-33
    reversing 8-33
Relayout Status Monitor window 8-33, 8-40
relocated subdisks
    viewing 14-32
relocating subdisks 14-16
REMOVED flag 16-11
REMOVED state 15-25
removing a disk 4-38
    CLI 4-44
    forced 15-25
    methods 4-41
    VEA 4-42
    vxdiskadm 4-43
removing a failing drive 15-24
removing a log
    CLI 7-15
    VEA 7-13
removing a mirror 7-8
    by disk 7-9
    by mirror 7-9
    by quantity 7-9
    CLI 7-10
    VEA 7-9
removing a snapshot volume
    CLI 8-24
    VEA 8-19
removing a volume 6-46
    CLI 6-48
    VEA 6-47
renaming a disk 4-46
    CLI 4-47
    VEA 4-46
renaming a disk group 5-30
    CLI 5-31
    VEA 5-30
renaming an enclosure 19-25
repairing failed root 18-48
replacing a disk 14-23, 15-16
    CLI 14-28
    methods 14-25
    VEA 14-26
replacing a failed disk
    vxdiskadm 14-27
replacing license files 18-32
replicated volume group 6-44
Rescan option 14-24
reserving a disk 14-18, 14-22
resilience level 6-4
    changing 8-38
resilient volume 6-5
resilvering 14-11
resizing a file system 10-4, 10-8, 10-10
resizing a volume 8-4
    CLI 8-8
    methods 8-6
    VEA 8-7
    with vxassist 8-9
    with vxresize 8-11
resizing a volume with a file system 8-5
resolving intermittent disk failure 15-23


resolving permanent disk failure 15-14
resolving temporary disk failure 15-19
restarting a task 8-47
restore daemon 18-9
restoring a file system 10-15, 10-16
resuming a task 3-10
resyncfromreplica 8-24
resynchronization 14-4, 18-15
    atomic-copy 14-5
    of databases 14-11
    read-writeback 14-6
reusing a disk 4-48
revectoring 17-6
reversing online relayout 8-33
rlink 6-44
root 17-8, 17-13
    encapsulating in VEA 17-11
    encapsulating in vxdiskadm 17-12
    repairing failed 18-48
root disk 17-5
    determining which is booting 17-26
    mirroring 17-18
    mirroring in CLI 17-25
    mirroring in VEA 17-23
    mirroring in vxdiskadm 17-24
    mirroring requirements 17-18
    unencapsulating 17-28
root disk encapsulation 18-36
    free space at end of drive 18-37
    no free space on disk 18-39
root encapsulation 17-5
    and /etc/system 18-22
    benefits 17-6
    effect on /etc/system 17-15
    effect on /etc/vfstab 17-17
    limitations 17-7
root file system 2-34
    mounted as read-only 18-30
root mirror verification 17-21
root plex errors 17-20
root_done 18-14
rootdev 18-7, 18-9
rootdg 1-13, 2-33, 2-34, 2-36, 2-38, 2-42, 4-10, 5-7, 18-9, 18-36
    default disk media names 5-7
    failure and recovery 18-49, 18-57
    recovering 18-47
    temporarily importing 18-47
rootdisk 2-48
rootdisk-B0 18-38
rootvol 8-9, 17-7, 18-9, 18-23, 18-38
round robin read policy 7-16
run control scripts 18-8

S
s2 slice 2-38
S25vxvm-sysboot 18-9
S30rootusr 18-10
S35vxvm-startup1 18-11
S40standardmounts 18-12
S50devfsadm 18-12
S70buildmnttab 18-12
S85vxvm-startup2 18-13
S86vxvm-reconfig 18-13, 18-20, 18-36
S94vxnm-host_infod 18-15
S94vxnm-vxnetd 18-15
S95vxvm-recover 18-15
S95vxvm-recover file 14-15

SAN 2-35
SAN management 4-6
SANPoint Control QuickStart 2-14
SANPoint Foundation Suite 2-10
SANPoint Foundation Suite HA 2-10
saving /etc/system 14-39
saving the database configuration 14-37
scanning for disks 19-5
scratch pad 8-28


scripts for VxVM startup 18-8
security for VEA 3-30
selected plex read policy 7-16
setfacl 20-16
shared 4-34
shrinkby 8-9
shrinking a file system 10-7, 10-9
shrinking a volume 8-4
shrinkto 8-9
simple disk 13-5, 18-28
single-user startup scripts 18-9
size of a file system 10-4
size of a volume 6-19
slice 1-5
sliced 4-31
sliced disk 13-5
slow attribute 8-52
SmartSync Recovery Accelerator 14-11
snap object 6-44
snapabort 8-23
SNAPATT state 16-10
snapback 8-19, 8-24
snapclear 8-20, 8-25
SNAPDONE 8-23
SNAPDONE state 16-9
snapof 10-23
snapshot 8-14
    aborting 8-18
    creating 8-14
    creating in CLI 8-21
    creating in VEA 8-17
    dissociating in CLI 8-25
    dissociating in VEA 8-20
    methods for creating 8-16
    read-only 8-18
    reassociating in CLI 8-24
    reassociating in VEA 8-19
    removing in CLI 8-24
    removing in VEA 8-19
snapshot file system 10-18, 10-19
    backing up 10-25
    contents 10-19
    creating 10-23
    disk structure 10-20
    managing 10-27
    mounting 10-20
    multiple snapshots 10-28
    performance 10-28
    reading 10-22
    restoring from 10-25
    size 10-27
    troubleshooting 10-29
    unmounting 10-28
    using for backup 10-19
snapshot phase 8-15
snapshot volume
    creating in CLI 8-21
    dissociating in CLI 8-25
    dissociating in VEA 8-20
    reassociating in CLI 8-24
    reassociating in VEA 8-19
    removing in CLI 8-24
    removing in VEA 8-19
snapsize 10-23
snapstart 8-17
snapstart phase 8-15
snapwait 8-23
soft limit 20-5
software packages 2-9
Solaris
    compatibility with VxFS 2-5
    VxVM compatibility 2-4
Solaris boot process 18-4
Solaris disk 1-4
sorting tasks 3-10
space requirements 2-12
    for VxFS 2-13
spanning 1-8


spare disks
    creating 5-14
    creating in CLI 5-15
    creating in VEA 5-14
    creating in vxdiskadm 5-15
    including in space availability 14-22
    managing 14-17
    managing in CLI 14-21
    managing in VEA 14-17
    managing in vxdiskadm 14-19
    removing spare designation 14-18, 14-20, 14-21
    setting up in CLI 14-21
    using only 14-22
sparse plex 1-16
special volumes
    starting 18-11
stale plexes
    troubleshooting 18-25
STALE state 15-18, 15-22, 16-10, 16-21
starting a volume 16-4, 16-19
starting all volumes
    after renaming a disk group 5-31
starting I/O daemons 18-32
startup scripts
    multiuser 18-15
    single-user 18-9
    troubleshooting 18-20
    VxVM 18-8
STATE fields 16-6
states
    kernel 16-15
    plex 16-8
    volume 16-13
states of VxVM objects 16-4
status area 3-6
storage
    allocating for volumes 7-24
Storage Area Network 2-35
Storage Area Networking 7-37
storage attributes
    specifying in CLI 7-26
    specifying in VEA 7-25
Storage Checkpoints
    licensing 2-17
Storage Migrator 2-6
stripe unit 6-10, 6-14
    default size 6-19
stripe width
    changing 8-36
striped 6-19
striped layout
    changing 8-36
Striped Pro 6-8, 6-19, 7-47, 7-48
striped volume 1-19, 1-20, 6-10
    creating 6-28
stripe-mirror 7-43, 7-47
stripe-mirror layout 7-40
stripeunit 6-28, 8-35
striping 6-5
    advantages 6-11
    disadvantages 6-11
structural files 11-14
structure of VxFS 9-13
subdisk 1-11, 1-14
    definition 1-14
    ghost 18-38
    naming 1-14
subvolumes 7-38
Sun Alternate Pathing 17-31
SUNW packages 17-31
superblock 10-20
support for Foundation Suite 2-7
swap 2-39, 17-10, 17-13, 18-11
swapvol 8-9, 17-7
SYNC 14-6
SYNC state 16-13
system file
    using an alternate 18-23


T
tag 14 4-35, 17-4, 17-13
tag 15 4-35, 17-4, 17-13
tags 1-10
tape density 10-13
tar 10-24
target 1-5
task 8-41
    aborting in CLI 8-47
    controlling progress rate 8-51
    pausing in CLI 8-47
    resuming in CLI 8-47
Task History window 3-10, 8-40
task identifier 8-41
Task Properties window 3-11
task slowing 8-52
task tag 8-41
task throttling 8-52
TASKID 8-42
tasks
    aborting 3-10
    accessing through VEA 3-8
    clearing history 3-11
    managing 8-39
    managing in VEA 8-40
    pausing 3-10
    resuming 3-10
    sorting 3-10
    throttling 3-10
    viewing 3-10
technical support for Foundation Suite 2-7
temp space
    size 8-32
TEMP state 16-10
temporarily importing rootdg 18-47
temporary disk failure 14-13, 15-13
    resolving 15-19
temporary imports 18-48
temporary storage area 8-29
TEMPRM state 16-10
TEMPRMSD state 16-10
third mirror break off 8-15
throttling tasks 3-10, 8-52
time limit 20-5
    editing 20-12
tmplog mount option 12-13
tmpsize 8-35
toolbar 3-6
trigger points 7-50
troubleshooting the boot process 18-16
true mirroring 1-20, 6-12
type-dependent file systems 9-4
type-independent file systems 9-4

U
UFS 6-23, 17-8
    allocation 11-4, 11-5
    backup 10-18
    bootblock 11-13
    converting to VxFS 11-16
    cylinder group 11-13
    cylinder group map 11-13
    inodes 11-13
    resizing 8-5
    resizing with vxresize 8-11
    storage blocks 11-13
    structure 11-13
    superblock 11-13
ufsboot 18-6
ufsdump 10-11, 10-17
ufsrestore 10-11, 10-15, 10-17
umount 5-20, 9-24
uname 2-20
unencapsulating a root disk 17-28
uninitialized disks 4-11
UNIX File System 6-23, 7-20, 9-4
    resizing 8-5
unmounting a file system 9-24
    forcing 9-25


    VEA 7-21
unmounting all file systems 9-24
unrelocating a disk 14-29
    CLI 14-32
    VEA 14-30
    vxdiskadm 14-31
upgrade_finish 17-33, 18-20
upgrade_start 8-9, 17-7, 17-32, 18-20
upgrading
    from SUNWvxvm 17-37
    Solaris operating system 17-38
    VMSA to VEA 3-22
    VxVM 17-30
    VxVM and Solaris 17-40
    VxVM software only 17-34
upgrading a disk group 5-40
    CLI 5-44
    VEA 5-43
upgrading Solaris only 17-45
upgrading the file system layout 11-10
upgrading VxFS 17-43
upgrading VxFS and Solaris 17-45
upgrading VxFS only 17-44
use-nvramrc? 17-21
user interfaces 3-4
usr 8-9, 17-7, 17-8, 17-13

V
var 8-9, 17-7, 17-8, 17-13

VEA 3-4, 9-9
    accessing tasks 3-8
    adding a disk 4-18
    adding a file system to a volume 7-20
    adding a log 7-13
    adding a mirror 7-5
    changing volume layout 8-32
    changing volume read policy 7-17
    command log file 3-10, 3-12
    confirming server startup 3-28
    connecting automatically 3-26
    Console/Task History 3-6
    controlling user access 3-30
    creating a disk group 5-10
    creating a layered volume 7-48
    creating a spare disk 5-14
    creating a volume 6-18
    creating a volume snapshot 8-17
    creating an alternate boot disk 17-23
    defragmenting a file system 11-41
    deporting a disk group 5-18
    destroying a disk group 5-33
    disabling Wizard mode 3-9
    disk properties 4-28
    disk view 3-7
    Disk View window 6-37
    displaying the version 3-28
    dissociating a snapshot volume 8-20
    encapsulating root 17-11
    grid 3-6
    help information 3-14
    importing a disk group 5-24
    installing 3-21, 3-22
    installing client on Windows 3-23
    main window 3-6
    managing spare disks 14-17
    menu bar 3-6
    modifying registry settings 3-31
    monitoring events and tasks 3-29
    mounting a file system 7-21
    moving a disk 4-48
    moving a disk group 5-28
    multiple host support 3-5
    multiple views of objects 3-5
    object properties 3-7
    object tree 3-6
    reassociating a snapshot volume 8-19
    recovering a volume 14-33
    Relayout Status Monitor window 8-40
    remote administration 3-5
    removing a disk 4-42
    removing a log 7-13
    removing a mirror 7-9
    removing a snapshot volume 8-19
    removing a volume 6-47
    renaming a disk 4-46


    renaming a disk group 5-30
    replacing a disk 14-26
    resizing a volume 8-7
    scanning disks 14-24
    security 3-5, 3-30
    setting preferences 3-9
    software packages 3-21
    specifying ordered allocation 7-30
    starting 3-24
    starting the client 3-25
    starting the server 3-24
    status area 3-6
    stopping the server 3-28
    Task History window 3-10, 8-40
    throttling a task 8-52
    toolbar 3-6
    unmounting a file system 7-21
    unrelocating a disk 14-30
    upgrading a disk group version 5-43
    upgrading from VMSA 3-22
    viewing a layered volume 7-53
    viewing CLI commands 3-11
    viewing disk group properties 5-36
    viewing disk information 4-26
    viewing tasks 3-10
    Volume Layout window 6-40
    Volume Properties window 6-41
    Volume to Disk Mapping window 6-39
    volume to disk mapping window 3-7
    volume view 3-7
    Volume View window 6-38
VEA packages
    installing 3-23
VEA server lock file 3-24
VEA server log file 3-24
VERITAS Cluster File System 2-10, 2-16
VERITAS Cluster Server 2-9, 2-10, 2-16
VERITAS Cluster Server QuickStart 2-9
VERITAS Cluster Server Traffic Director 2-9
VERITAS Cluster Volume Manager 2-10
VERITAS Database Edition for DB2 2-10
VERITAS Database Edition for Oracle 2-9, 2-10
VERITAS Database Edition for Sybase 2-10
VERITAS Database Edition/Advanced Cluster for Oracle9i 2-9
VERITAS Enterprise Administrator 3-4, 3-5, 9-9
    packages 2-11
VERITAS FastResync 2-15
VERITAS File System 2-9, 6-23, 7-20
    resizing 8-5
VERITAS FlashSnap 2-15
VERITAS Foundation Suite 2-9, 2-10
VERITAS Foundation Suite HA 2-10
VERITAS Foundation Suite QuickStart 2-10
VERITAS Quick I/O for Databases 2-16
VERITAS QuickLog 2-14
VERITAS SANPoint Control QuickStart 2-14
VERITAS SANPoint Foundation Suite 2-9, 2-10, 2-16
VERITAS SANPoint Foundation Suite HA 2-10
VERITAS Volume Manager 2-9, 2-16
VERITAS Volume Replicator 2-9, 2-15, 18-15
version 9-12
Version 1 layout 11-9
Version 2 layout 11-9
Version 3 layout 11-9
Version 4 layout 9-16, 11-9
Version 5 layout 9-16, 11-9
versioning
    and disk groups 5-40
versions
    VxVM and Solaris 2-4
vfstab 9-22
viewing disk group information 5-35


viewing disk group properties
    CLI 5-37
    VEA 5-36
viewing disk information 4-25
    CLI 4-29
    methods 4-25
    summary of all disks 4-33
    VEA 4-26, 4-27
    vxdiskadm 4-37
viewing encapsulated disks 17-13
Virtual File System 9-4
virtual objects 1-11
virtual storage objects 1-7
vLicense 2-21
VMSA 2-7
vol_subdisk_num 1-14
volboot 18-36
volboot file 13-18, 18-35
    changing host ID 13-19
    re-creating 13-19, 18-28
    troubleshooting 18-27
    viewing contents 13-18
voldrl_max_seq_dirty 6-32
volume 1-7, 1-11
    accessing 1-7
    adding a file system 6-23, 7-19
    adding a file system in CLI 7-22
    adding a file system in VEA 7-20
    adding a mirror 7-4
    adding a mirror in CLI 7-6
    adding a mirror in VEA 7-5
    allocating storage for 7-24
    assigning disk space 6-21
    changing read policy in CLI 7-18
    changing read policy in VEA 7-17
    creating 6-16, 16-4
    creating a layered volume 7-48
    creating in CLI 6-25
    creating in VEA 6-18
    creating layered in CLI 7-49
    creating layered in VEA 7-48
    creating mirrored and logged 6-32
    definition 1-7, 1-17
    disk requirements 6-16
    displaying information for 6-44
    estimating expansion 6-34
    estimating size 6-33
    expanding the size 8-4
    force starting 16-19
    initializing 16-4
    layered layouts 7-43
    logging 6-19
    managing tasks 8-39
    methods for creating 6-17
    mirroring 6-19
    mirroring all 7-6
    mirroring across devices 6-21
    mounting a file system in VEA 7-21
    naming 1-17
    ordered allocation 6-21
    recovering 14-33, 14-38
    recovering in CLI 14-34
    recovering in VEA 14-33
    reducing the size 8-4
    removing 6-46
    removing in CLI 6-48
    removing in VEA 6-47
    resizing 8-4
    resizing in CLI 8-8
    resizing in VEA 8-7
    resizing methods 8-6
    resizing with vxassist 8-9
    resizing with vxresize 8-11
    starting 16-19
    starting after disk group renaming 5-31
    specifying ordered allocation 7-29
    starting manually 5-26
    striping across devices 6-21
    writing to 1-18
volume kernel states 16-6, 16-15
volume layout 1-19, 6-4
    changing in CLI 8-34
    changing in VEA 8-32
    changing online 8-26
    concatenated 1-20
    layered 1-20
    methods for changing 8-31


    mirrored 1-20
    RAID-5 1-20
    striped 1-20
volume layout information
    displaying 6-35
    displaying in CLI 6-42
volume layout types 6-8
Volume Layout window 6-40
Volume Manager
    excluding disks 2-36
    initial setup 2-41
    setup planning 2-33
Volume Manager control 1-9, 2-34, 2-36, 2-38
Volume Manager disk 1-11, 1-13, 4-12
    naming 1-13
Volume Manager Storage Administrator 2-7
Volume Manager Support Operations 3-4, 3-19
Volume Manager Visual Administrator 2-7
Volume Properties window 6-41
volume read policies 7-16
volume read policy
    changing 7-16
    changing in CLI 7-18
    changing in VEA 7-17
volume recovery 14-23
volume replication
    licensing 2-17
Volume Replicator 2-15
volume size 6-19
volume snapshot 8-14
    creating 8-14
    creating in VEA 8-17
    methods for creating 8-16
    snapshot phase 8-15
    snapstart 8-15
volume states 16-6, 16-13
    ACTIVE 16-13
    after a disk failure 15-10
    after attaching disk media 15-17
    after permanent disk failure 15-14
    after recovering volumes 15-18
    after recovery 15-22
    after running vxreattach 15-21
    after temporary disk failure 15-21
    after volume recovery 15-18
    after vxrecover 15-22
    before a failure 15-9
    CLEAN 16-13
    displaying 16-6
    EMPTY 16-13
    NEEDSYNC 16-13
    NODEVICE 16-14
    setting to EMPTY 16-25
    SYNC 16-13
volume table of contents 1-4, 4-35, 17-4
volume tasks
    managing in VEA 8-40
    methods for managing 8-39
Volume to Disk Mapping window 3-7, 6-39
Volume View window 3-7, 6-38
vrtsadm 3-25, 3-30
VRTSfsdoc 2-13
VRTSfspro 2-11, 2-12, 3-21
VRTSlic 2-24, 2-25
VRTSob 2-11, 2-12, 3-21
VRTSobadmin 2-11, 3-21
VRTSobgui 2-11, 2-12, 3-21
VRTSobgui.msi 3-21
VRTSqio 2-16
VRTSqlog 2-16
VRTSspc 2-14
VRTSspcq 2-14
VRTSvlic 2-11, 2-12, 2-24, 2-25, 2-29, 3-22
VRTSvmdoc 2-11, 2-12
VRTSvmman 2-11, 2-12
VRTSvmpro 2-11, 2-12, 3-21
VRTSvrdoc 2-15
VRTSvrw 2-15
VRTSvxfs 2-13


VRTSvxvm 2-11, 2-12, 2-15, 3-22
VRTSweb 2-15
VTOC 1-4, 1-9, 2-39, 4-35, 13-5
    before data disk encapsulation 17-14
    before encapsulating root disk 17-13
    protecting 18-38
VVR 2-15
vxassist 3-15, 4-8, 6-25, 8-8, 8-9, 9-11, 11-19, 16-4
    SAN awareness 7-37
vxassist addlog 7-14
vxassist convert 8-34
vxassist growby 8-9
vxassist growto 8-9
vxassist make 6-25, 7-49, 9-11
vxassist maxgrow 6-34
vxassist maxsize 6-33
vxassist mirror 7-6, 17-25
vxassist relayout 8-34, 8-35
vxassist remove log 7-15
vxassist remove mirror 7-10
vxassist remove volume 6-48, 8-24
vxassist shrinkby 8-9
vxassist shrinkto 8-9
vxassist snapabort 8-23
vxassist snapback 8-24
vxassist snapclear 8-25
vxassist snapshot 8-21
vxassist snapstart 8-21
vxassist snapwait 8-23
vxbootsetup 17-22
vxconfig 13-6
vxconfigd 2-34, 4-8, 13-4, 13-6, 13-12, 16-20, 17-42, 18-9, 18-13, 18-25, 18-31, 18-32, 18-33, 18-34, 18-46, 19-4, 19-5
    disabling 13-15
    displaying status 13-14
    enabling 13-14
    mode options 18-35
    running in debug mode 18-34
    starting 13-15
    stopping 13-15
vxconfigd modes 13-13
vxdctl 13-14
vxdctl disable 13-15
vxdctl enable 5-5, 13-4, 13-14, 14-24, 15-8, 15-20, 18-13, 18-32, 19-4, 19-5
vxdctl hostid 13-19, 18-28
vxdctl init 13-19, 18-28
vxdctl initdmp 18-13
vxdctl license 13-16
vxdctl list 13-18
vxdctl mode 13-14
vxdctl stop 13-15
vxdctl support 13-16
vxddladm 19-7
vxddladm addjbod 19-8
vxddladm excludearray 19-7
vxddladm includearray 19-8
vxddladm listexclude 19-8
vxddladm listjbod 19-8
vxddladm listsupport 19-7
vxddladm rmjbod 19-8
vxdg 3-15, 4-24, 5-37
vxdg adddisk 4-24, 4-49, 14-28, 15-15
vxdg deport 4-47, 5-20, 18-47
vxdg destroy 5-34
vxdg free 5-39
vxdg import 4-47, 5-26, 18-47
vxdg init 5-13
vxdg list 5-38, 5-44, 13-7
vxdg rmdisk 4-44, 4-49
vxdg upgrade 5-45
vxdisk 3-16, 4-30, 5-37
vxdisk clearimport 13-19
vxdisk list 4-29, 4-31, 4-33, 5-13, 13-9, 14-24, 15-8, 18-47


vxdisk -o alldgs 5-38
vxdiskadd 17-25
vxdiskadm 3-4, 3-19, 4-9
    adding a disk 4-20
    creating a disk group 5-11
    creating a spare disk 5-15
    creating an alternate boot disk 17-24
    deporting a disk group 5-19
    displaying disk information 4-37
    displaying help information 3-20
    encapsulating root 17-12
    evacuating a disk 4-40
    exiting a process 3-20
    importing a disk group 5-25
    list option 4-37
    managing spare disks 14-19
    menu options 3-20
    moving a disk 4-48
    moving a disk group 5-29
    removing a disk 4-43
    replacing a failed disk 14-27
    starting 3-19
    unrelocating a disk 14-31
vxdiskadm option 3 15-24
vxdiskadm option 4 15-25
vxdiskadm option 5 15-16, 15-20, 15-25
vxdiskadm option 7 15-24
vxdiskconfig 19-4
vxdisksetup 4-22, 15-15
vxdiskunsetup 4-45
vxdmp 17-31
vxdmpadm 19-16
vxdmpadm disable 19-17, 19-23
vxdmpadm enable 19-17, 19-23
vxdmpadm getdmpnode 19-17, 19-22
vxdmpadm getsubpaths 19-17, 19-20
vxdmpadm listctlr 19-17, 19-18
vxdmpadm listenclosure 19-17, 19-25
vxdmpadm setattr 19-17, 19-25
vxdmpadm start restore 18-9, 19-17, 19-27
vxdmpadm stat restored 19-28
vxdmpadm stop restore 19-17, 19-28
vxdump 10-11, 10-12, 10-17, 10-24, 10-25
    options 10-13
vxedit 7-10
vxedit rename 4-47
vxedit set 15-26
vxedit set nohotuse 14-21
vxedit set reserve 14-22
vxedit set spare 5-15, 14-21
vxedquota 20-9, 20-11, 20-12
vxeeprom 17-22
vxevac 4-40
VxFS 6-23, 17-8
    allocation 11-4, 11-6
    and product compatibility 2-5
    and Solaris compatibility 2-5
    backing up 10-18
    backup utility 10-11
    checking consistency 9-30
    checking consistency in VEA 12-9
    checking structure 9-13
    command syntax 9-7
    commands 9-6
    converting from UFS 11-16
    creating a file system 9-10
    definition 9-4
    defragmentation 11-32
    displaying mounted 9-21
    dumping 10-12
    expanding 10-7, 10-9
    kernel issues 2-8
    layout versions 11-9
    maintaining consistency 12-7
    mounting 9-20
    mounting all 9-21
    mounting automatically 9-22
    optional features 2-16
    resizing 8-5, 10-4, 10-8, 10-10
    resizing with vxresize 8-11
    restore utility 10-11
    restoring 10-15


    shrinking 10-7, 10-9
    space requirements 2-13
    structural components 11-14
    unmounting 9-24
    upgrading 17-43
VxFS commands 9-8
vxfsconvert 11-18, 11-19
    process 11-22
vxinfo 15-12, 16-6, 16-7
vxinstall 2-42, 4-8, 4-10, 18-36
    custom installation 2-47
    displaying help information 2-47
    encapsulating all disks 2-50
    encapsulating boot disk 2-48
    entering license keys 2-44
    exiting 2-47
    initializing all disks 2-50
    installing individual disks 2-50
    leaving disks unaltered 2-50
    quick installation 2-47
    selecting a naming method 2-45
    selecting an installation method 2-47
    setting up disks 2-49
    shutdown and reboot 2-52
    suppressing multipathing 2-46
    verifying setup choices 2-51
vxinstall process 2-43
vxintro 3-18
vxiod 18-13, 18-32, 18-46
vxiod set 18-31
vxlicense 2-25, 18-46
vxlicinst 2-22, 2-25, 18-31
vxlicrep 2-23, 2-25
vxmake 7-41, 14-37, 14-38
vxmend 16-20, 16-27
vxmend fix 16-16, 16-20
vxmend fix active 16-23
vxmend fix clean 16-22
vxmend fix empty 16-25
vxmend fix stale 16-21
vxmend off 16-16, 16-26
vxmend on 16-16, 16-26
vxmirror 7-6, 17-25
vxnetd 18-15
vxnotify 17-42
vxplex 7-10, 17-29
vxplex att 16-17
vxprint 3-15, 5-44, 6-42, 7-53, 10-5, 14-32, 14-37, 14-38, 15-9, 15-10, 15-11, 15-12, 15-14, 15-18, 15-22, 16-6, 16-7, 17-29, 17-42
    options 6-43
vxquot 20-9
vxquota 20-9, 20-14
vxquotaoff 20-9, 20-14
vxquotaon 20-9, 20-11
vxreattach 14-34, 15-20, 18-13
vxrecover 14-35, 14-38, 15-15, 15-20, 15-22, 16-16, 16-17, 16-27, 18-11, 18-13, 18-46, 18-48
vxregctl 3-32
vxrelayout 8-49
vxrelocd 13-4, 14-15, 17-42, 18-15
vxrepquota 20-9
vxresize 8-8, 8-11, 10-5, 10-8
vxrestore 10-11, 10-15, 10-17, 10-24, 10-25
    options 10-16
vxrootmir 17-25
vxsvc 3-24
vxtask 8-41, 8-42
vxtask abort 8-47
vxtask list 8-42, 8-44
vxtask monitor 8-45
vxtask pause 8-47
vxtask resume 8-47
vxunreloc 14-29, 14-32
vxunroot 17-28, 17-29, 17-42
vxupgrade 11-11, 11-20


VxVA 2-7
VxVM
    architecture 13-4
    first-time setup 2-33
    initializing 18-36
    installation 2-33
    versions 2-4
VxVM configuration daemon 5-5
VxVM I/O daemons
    starting 18-32
VxVM license files 18-32
VxVM object states 16-4
VxVM software packages 2-11
VxVM startup scripts 18-8
    multiuser 18-15
    single-user 18-9
    troubleshooting 18-20
vxvol 5-31
vxvol init 16-4, 16-5
vxvol init options
    active 16-5
    clean 16-5
    enable 16-5
    zero 16-5
vxvol rdpol 7-18, 15-24
vxvol resync 16-17
vxvol start 15-16, 15-20, 16-5, 16-13, 16-16, 16-17, 16-19, 16-27
vxvol startall 5-26

W

Wizard mode
    disabling 3-9

X

XOR 6-5, 6-14

Z

zero option 16-5


VERITAS Education Solutions


Thank you for attending: VERITAS Foundation Suite: Administration and Troubleshooting

The recommended next step in your learning path is: VERITAS Foundation Suite (Advanced): Performance and Tuning. In this course, you will learn performance management aspects of VERITAS Volume Manager and VERITAS File System by building on basic administrative and operational tasks.

VERITAS Foundation Suite Learning Path


VERITAS Foundation Suite: Administration and Troubleshooting

This course covers:
Installation
Configuration
Online administration
Recovery
Troubleshooting

VERITAS Foundation Suite (Advanced): Performance and Tuning

This course covers:
Performance tuning
File system tuning
QuickLog and Quick I/O
Storage checkpointing
Off-host processing

After completing the Foundation Suite learning path, VERITAS recommends the following courses to continue building your storage management skills:

VERITAS Volume Replicator
Learn how to implement data replication in your disaster recovery strategy with VERITAS Volume Replicator.

VERITAS Cluster Server Suite
Learn how to create a highly available environment through clustering with VERITAS Cluster Server.

VERITAS SANPoint Foundation Suite HA
Learn how to extend VERITAS File System and VERITAS Volume Manager so that multiple servers can share access to SAN storage.

For the most up-to-date information on VERITAS Education Solutions offerings, visit http://www.veritas.com.

