October 2008
Veritas Storage Foundation and High Availability
Solutions Application Note
Copyright © 2008 Symantec Corporation. All rights reserved.
Symantec, the Symantec logo, Veritas, and Veritas Storage Foundation are trademarks or
registered trademarks of Symantec Corporation or its affiliates in the U.S. and other
countries. Other names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
www.symantec.com
Third-party legal notices
Third-party software may be recommended, distributed, embedded, or bundled
with this Symantec product. Such third-party software is licensed separately by
its copyright holder. All third-party copyrights associated with this product are
listed in the Veritas Storage Foundation 5.0 Release Notes.
The Veritas Storage Foundation 5.0 Release Notes can be viewed at the following
URL:
http://entsupport.symantec.com/docs/283886
Technical support
For technical assistance, visit
http://www.symantec.com/enterprise/support/assistance_care.jsp and select
Product Support. Select a product and use the Knowledge Base search feature to
access resources such as TechNotes, product alerts, software downloads,
hardware compatibility lists, and our customer email notification service.
Contents
Binding a whole disk which is under VxVM control fails silently ............. 29
A DMP metanode cannot be used to export a whole disk to a guest logical domain ............. 30
The eeprom command cannot be used to reset EEPROM values to null ............. 31
Introduction
This document provides release information about support for Solaris Logical
Domains (LDoms) using the products in the Veritas Storage Foundation and
High Availability Solutions 5.0 Maintenance Pack 1 (MP1) Solaris product line.
Support for Solaris Logical Domains (LDoms) is also available in later releases of
Veritas Storage Foundation and High Availability Solutions.
Review this entire document before installing your Veritas Storage Foundation
and High Availability products.
For information about Veritas Storage Foundation and High Availability
Solutions 5.0 and 5.0 Maintenance Pack 1, refer to:
■ Veritas Cluster Server Release Notes 5.0 for Solaris
■ Veritas Cluster Server Release Notes 5.0 MP1 for Solaris
■ Veritas Storage Foundation Release Notes 5.0 for Solaris
■ Veritas Storage Foundation Release Notes 5.0 MP1 for Solaris
For information about installing Veritas Storage Foundation 5.0, refer to the
following documentation:
■ Veritas Storage Foundation Installation Guide 5.0 for Solaris
■ Veritas Cluster Server Installation Guide 5.0 for Solaris
For further information about installing Veritas Cluster Server 5.0, see
“Installation instructions for VCS” on page 41.
Table 1-1 Storage Foundation and High Availability Solutions 5.0 and 5.0
Maintenance Pack 1 information
New features
Support for the new Logical Domain feature from Sun Microsystems has been
incorporated into this release of Veritas Storage Foundation and High
Availability Solutions.
Standardization of tools
Independent of how an operating system is hosted, consistent storage
management tools save an administrator time and reduce the complexity of the
environment.
Storage Foundation in the control domain provides the same command set,
storage namespace, and environment as in a non-virtual environment.
Array migration
Data migration for Storage Foundation can be executed in a central location,
migrating all storage from an array utilized by Storage Foundation managed
hosts.
This powerful, centralized data migration functionality is available with Storage
Foundation Manager 1.1 and later.
Term            Definition

Logical Domains Manager
                Software that communicates with the Hypervisor and logical
                domains to sequence changes, such as the removal of resources
                or the creation of a logical domain. The Logical Domains
                Manager provides an administrative interface and keeps track
                of the mapping between the physical and virtual devices in a
                system.

Guest domain    Utilizes virtual devices offered by the control and I/O
                domains and operates under the management of the control
                domain.

Virtual devices Physical system hardware, including CPU, memory, and I/O
                devices, that is abstracted by the Hypervisor and presented
                to logical domains within the platform.
Figure 1-1 Block level view of VxVM and VxFS in LDoms environment
Note: VxFS can also be placed in the control domain, but there is no
coordination between the two VxFS instances in the guest and the control
domain.
System requirements
This section describes the system requirements for this release.
Veritas patches
The following Veritas patches or hotfixes are required for support with Solaris
Logical Domains:
■ Veritas Cluster Server requires Veritas patch 128055-01.
■ The 5.0 product installation scripts for Solaris fail in the ssh
communications phase if a node prints a system banner upon ssh to that
node.
This issue was found on Solaris 10 Update 3 and Update 4 with the Solaris
Logical Domains software installed. The Solaris Logical Domains software
uses ssh as the default means of communication for the control domain.
You need to apply a hotfix patch if a banner such as the following is
present in /etc/issue:
|-----------------------------------------------------------
| This system is for the use of authorized users only.
| Individuals using this computer system without authority, or in
| excess of their authority, are subject to having all of their
| activities on this system monitored and recorded by system
| personnel.
|
| In the course of monitoring individuals improperly using this
| system, or in the course of system maintenance, the activities
| of authorized users may also be monitored.
|
| Anyone using this system expressly consents to such monitoring
| and is advised that if such monitoring reveals possible
| evidence of criminal activity, system personnel may provide the
| evidence of such monitoring to law enforcement officials.
|-------------------------------------------------------------
Caution: Do not shrink the underlying volume beyond the size of the VxFS
file system in the guest as this can lead to data loss.
■ Exporting a volume set to a guest LDom and trying to read/write the volume
set is not currently supported.
■ Veritas Volume Replicator is not supported in an LDoms environment.
The following Veritas VxFS software features are not supported in a Solaris
LDom guest environment:
■ Multi-Volume Filesets/DST
■ File-Level Smartsync
■ Because VxFS runs in the guest LDom and VxVM is installed in the control
domain, the following VxFS tunables are not set to their default values
based on the underlying volume layout:
■ read_pref_io/write_pref_io
■ read_nstream/write_nstream
If desired, you can set the values of these tunables based on the
underlying volume layout in the /etc/vx/tunefstab file, as shown in the
example after this list.
For more information, refer to the section “Tuning I/O” in the Veritas File
System Administrator's Guide for version 5.0.
■ Storage Foundation Cluster File System is not recommended in an LDoms
environment.
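The following is a minimal sketch of a /etc/vx/tunefstab entry in the guest;
the device path and the tunable values are illustrative and should be derived
from the actual volume layout in the control domain:
/dev/dsk/c0d1s0 read_pref_io=262144,write_pref_io=262144,read_nstream=4,write_nstream=4
The values in this file are normally applied when the file system is mounted.
See the vxtunefs(1M) and tunefstab(4) manual pages for details.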
Localization
This Application Note is not localized. It is available in English only.
High Availability
Find the Veritas Cluster Server release notes in the
cluster_server/release_notes directory of the product disc.
Product licensing
Symantec’s pricing policy changes when its products are used in an LDom
virtual machine environment. Contact Symantec sales for more information.
Installing Storage Foundation in an LDom environment
Use the procedures in the Veritas installation documentation and Release Notes
to install Storage Foundation to the control domain.
Install version 5.0 first.
See Veritas Storage Foundation Installation Guide 5.0 for Solaris.
See Veritas Storage Foundation Release Notes 5.0 for Solaris.
Then, upgrade to 5.0 MP1.
See Veritas Storage Foundation Release Notes 5.0 MP1 for Solaris.
Caution: Only VxFS should be installed in the guest domain. Verify that other
packages are not installed.
To create virtual disks on top of the VxVM data volumes using the ldm
command
1 In the control domain (primary) configure a service exporting the VxVM
volume as a virtual disk.
primary# ldm add-vdiskserverdevice /dev/vx/dsk/dg-name/vol-name \
bootdisk1-vol@primary-vds0
2 Add the exported disk to a guest LDom.
primary# ldm add-vdisk vdisk1 bootdisk1-vol@primary-vds0 ldom1
3 Start the guest domain, and make sure the new virtual disk is visible.
primary# ldm bind ldom1
primary# ldm start ldom1
4 You might also have to run the devfsadm command in the guest domain.
ldom1# devfsadm -C
In this example, the new disk appears as /dev/[r]dsk/c0d1s0.
ldom1# ls -l /dev/dsk/c0d1s0
lrwxrwxrwx 1 root root 62 Sep 11 13:30 /dev/dsk/c0d1s0 ->
../../devices/virtual-devices@100/channel-devices@200/disk@1:a
Note: With Solaris 10 Update 4, a VxVM volume shows up as a single slice in
the guest LDom.
Refer to “Software limitations” on page 28, or to the LDoms 1.0 release notes
from Sun (Virtual Disk Server Should Export ZFS Volumes as Full Disks, Bug ID
6514091), for more details.
5 Mount the file system on the disk to access the application data. Use the
mount command that matches the file system type on the volume, for example:
ldom1# mount -F vxfs /dev/dsk/c0d1s0 /mnt
or
ldom1# mount -F ufs /dev/dsk/c0d1s0 /mnt
Caution: After Sun fixes the “volume as a single slice” limitation, a volume
will by default show up as a full disk in the guest. In that case, the
Virtual Disk Client driver writes a VTOC on block 0 of the virtual disk,
which ends up as a write to block 0 of the VxVM volume. This can potentially
cause data corruption, because block 0 of the VxVM volume contains user data.
Sun will provide an option in the LDom CLI to export a volume as a single
slice disk. Always use this option in the migration scenario, because the
VxVM volume already contains user data at block 0.
Because VxVM volumes currently show up as “single slice disks” in the guest
LDoms, they cannot be used as boot disks for the guests. However, a large VxFS
file can be used to provision a boot disk for a guest LDom, because a file appears
as a whole disk in the guest LDom.
The following process outlines how a VxFS file can be used as a boot disk.
In this example, the control domain is named “primary” and the guest domain
is named “ldom1.” The prompts in each step show in which domain to run the
command.
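A minimal sketch of such a sequence follows; the file size and the virtual
disk service device name (bootfile1-vol) are illustrative assumptions, not
values taken from this document:
primary# mkfile 6g /fs1/bootimage1
primary# ldm add-vdiskserverdevice /fs1/bootimage1 bootfile1-vol@primary-vds0
primary# ldm add-vdisk vdisk1 bootfile1-vol@primary-vds0 ldom1
Because the backing store is a file rather than a VxVM volume, the guest sees
vdisk1 as a whole disk, so Solaris can be installed on it and booted from it.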
Figure 1-2 Example of using VxVM snapshots for cloning LDom boot disks
Before this procedure, ldom1 has its boot disk contained in a large file
(/fs1/bootimage1) in a VxFS file system which is mounted on top of a VxVM
volume.
This procedure involves the following steps:
■ Cloning the LDom configuration to form a new LDom configuration.
This step is a Solaris LDom procedure, and can be achieved using the
following commands.
# ldm list-constraints -x
# ldm add-domain -i
Refer to the Solaris LDoms documentation for more details about how to carry
out this step; a brief sketch follows this list.
■ After cloning the configuration, clone the boot disk and provision it to the
new LDom.
If you want to create a new LDom with a configuration different from that of
ldom1, skip this step of cloning the configuration and create the desired
LDom configuration separately.
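As a sketch of the cloning step, assuming the existing domain is ldom1 and
the file path is illustrative, the configuration can be exported to XML,
edited to give the clone a new domain name, and then imported:
primary# ldm list-constraints -x ldom1 > /tmp/ldom1.xml
primary# vi /tmp/ldom1.xml (change the domain name, for example to ldom2)
primary# ldm add-domain -i /tmp/ldom1.xml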
Caution: Shut down the guest domain before executing the vxsnap
command to take the snapshot.
nmirror Specifies how many plexes are to be broken off. This attribute
can only be used with plexes that are in the SNAPDONE state.
(Such plexes could have been added to the volume by using
the vxsnap addmir command.)
Snapshots that are created from one or more ACTIVE or SNAPDONE plexes in
the volume are already synchronized by definition.
For backup purposes, a snapshot volume with one plex should be sufficient.
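A minimal sketch of the snapshot commands that these attributes describe,
assuming the disk group and volume names used later in this procedure and
that the volume has not already been prepared for instant snapshots, might
be:
primary# vxsnap -g bootdisk-dg prepare bootdisk1-vol
primary# vxsnap -g bootdisk-dg addmir bootdisk1-vol nmirror=1
(wait until the new plex reaches the SNAPDONE state)
primary# vxsnap -g bootdisk-dg make \
source=bootdisk1-vol/newvol=SNAP-bootdisk1-vol/nmirror=1
The snapshot volume SNAP-bootdisk1-vol is the volume that is checked and
mounted in the steps that follow.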
3 Use fsck (or some utility appropriate for the application running on the
volume) to clean the temporary volume’s contents. For example, you can
use this command with a VxFS file system:
primary# fsck -F vxfs /dev/vx/rdsk/diskgroup/snapshot
4 Mount the VxFS file system on the snapshot volume.
primary# mount -F vxfs /dev/vx/dsk/bootdisk-dg/SNAP-bootdisk1-vol \
/snapshot1/
This file system will contain a copy of the golden boot image file
/fs1/bootimage1.
The cloned file is visible on the primary.
primary # ls -l /snapshot1/bootimage1
-rw------T 1 root root 6442450944 Sep 4 12:40 /snapshot1/bootimage1
5 Verify that the checksum of the original and the copy are the same.
primary # cksum /fs1/bootimage1
primary # cksum /snapshot1/bootimage1
6 Configure a service exporting the file /snapshot1/bootimage1 as a
virtual disk.
primary# ldm add-vdiskserverdevice /snapshot1/bootimage1 \
vdisk2@primary-vds0
7 Add the exported disk to ldom1 first.
primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom1
8 Start ldom1 and boot ldom1 from its primary boot disk vdisk1.
primary# ldm bind ldom1
primary# ldm start ldom1
9 You may have to run the devfsadm -C command to create the device nodes
for the newly added virtual disk (vdisk2).
ldom1# devfsadm -C
In this example, the device entry for vdisk2 will be c0d2s#.
ldom1# ls /dev/dsk/c0d2s*
/dev/dsk/c0d2s0 /dev/dsk/c0d2s2 /dev/dsk/c0d2s4 /dev/dsk/c0d2s6
/dev/dsk/c0d2s1 /dev/dsk/c0d2s3 /dev/dsk/c0d2s5 /dev/dsk/c0d2s7
10 Mount the root file system of c0d2s0 and modify the /etc/vfstab entries
such that all c#d#s# entries are changed to c0d0s#. This is because ldom2
is a new LDom and the first disk in its OS device tree is always named
c0d0s#. (A sketch of this edit appears after step 11.)
11 After the vfstab has been changed, unmount the file system and unbind
vdisk2 from ldom1.
primary# ldm remove-vdisk vdisk2 ldom1
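The vfstab edit in step 10 might look like the following; the mount point
and the assumption that the boot image contains a UFS root file system are
illustrative:
ldom1# mount -F ufs /dev/dsk/c0d2s0 /mnt
ldom1# vi /mnt/etc/vfstab
In the editor, change any c#d#s# device entries to the corresponding c0d0s#
entries. Then unmount the file system:
ldom1# umount /mnt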
Software limitations
The following section describes some of the limitations of the Solaris Logical
Domains software and how those limitations affect the functionality of the
Veritas Storage Foundation products.
You also need to add the following line to /etc/system to make the change
persistent across reboots:
set vds:vd_open_flags = 0x3
Note: This is a temporary workaround until the Sun bug listed above is fixed
and delivered in a patch.
To create CVMVolDg
1 Make the configuration writeable:
# haconf -makerw
2 Add the CVMVolDg resource:
# hares -add <name of resource> CVMVolDg <name of group>
3 Add a dg name to the resource:
# hares -modify <name of resource> CVMDiskGroup sdg1
4 Make the attribute local to the system:
# hares -local <name of resource> CVMActivation
5 Add the attribute to the resource. This step must be repeated on each of the
nodes.
# hares -modify <name of resource> CVMActivation \
<activation value> -sys <nodename>
6 If you want to monitor volumes, complete this step; otherwise, skip it. In
a database environment, we recommend volume monitoring.
# hares -modify <name of resource> CVMVolume \
-add <name of volume>
7 Modify the resource, so that a failure of this resource does not bring down
the entire group.
# hares -modify <name of resource> Critical 0
8 Enable it:
# hares -modify cvmvoldg1 Enabled 1
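As a worked sketch of this procedure, assume a hypothetical service group
named cvmgrp, the resource name cvmvoldg1, the disk group sdg1, a volume
named vol1, a shared-write activation value, and nodes sysA and sysB (all of
these names and values are illustrative):
# haconf -makerw
# hares -add cvmvoldg1 CVMVolDg cvmgrp
# hares -modify cvmvoldg1 CVMDiskGroup sdg1
# hares -local cvmvoldg1 CVMActivation
# hares -modify cvmvoldg1 CVMActivation sw -sys sysA
# hares -modify cvmvoldg1 CVMActivation sw -sys sysB
# hares -modify cvmvoldg1 CVMVolume -add vol1
# hares -modify cvmvoldg1 Critical 0
# hares -modify cvmvoldg1 Enabled 1
# haconf -dump -makero
The final haconf -dump -makero command saves the configuration and makes it
read-only again.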
CVM in the control domain for providing high availability
These applications can be failed over and restarted inside guests running on
another active node of the cluster.
Caution: With the I/O domain reboot feature introduced in LDoms 1.0.1, when
the control domain reboots, any I/O being done by the guest domain is queued
up and resumes once the control domain comes back up.
See the Logical Domains (LDoms) 1.0.1 Release Notes from Sun.
Because of this, applications running in the guests may resume or time out,
depending on the individual application settings. It is the user's
responsibility to decide whether the application should be restarted on
another guest (on the failed-over control domain). There is a potential data
corruption scenario if the underlying shared volumes are accessed from both
guests simultaneously.
Shared volumes and their snapshots can be used as a backing store for guest
LDoms.
Note: The ability to take online snapshots is currently inhibited because the file
system in the guest cannot coordinate with the VxVM drivers in the control
domain.
Make sure that the volume is closed (for example, its file system is
unmounted in the guest) before you take the snapshot.
The following example procedure shows how snapshots of shared volumes are
administered in such an environment.
Consider the following scenario:
■ datavol1 is a shared volume being used by guest LDom ldom1 and c0d1s0
is the front end for this volume visible from ldom1.
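A minimal sketch of taking the snapshot from the CVM master node, after the
file system on datavol1 has been unmounted inside ldom1 and assuming a
hypothetical shared disk group named datadg that has already been prepared
for instant snapshots, might be:
# vxsnap -g datadg make source=datavol1/newvol=SNAP-datavol1/nmirror=1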
5 Once the LDom ldom1 boots, remount the VxFS file system back on
c0d1s0.
Chapter 3
Configuring Logical Domains for high availability using Veritas Cluster
Server
This chapter contains the following:
■ About Veritas Cluster Server in an LDom environment
■ Installing VCS in an LDom environment
■ About configuring VCS in an LDom environment
■ Configuration scenarios
■ Creating the service groups
■ Configuring VCS to manage applications in guest domains
■ About VCS agent for LDoms
About Veritas Cluster Server in an LDom environment
VCS requirements
For installation requirements, see “System requirements” on page 14.
VCS requires shared storage that is visible across all the nodes in the cluster.
Configure each LDom on a node. The LDom’s boot device and application data
must reside on shared storage.
VCS prerequisites
This document assumes a working knowledge of VCS.
Review the prerequisites in the following documents to help ensure a smooth
VCS installation:
■ Veritas Cluster Server Release Notes
Find this in the cluster_server/release_notes directory of the product disc.
■ Veritas Cluster Server Installation Guide
Find this in the cluster_server/docs directory of the product disc.
Unless otherwise noted, all references to other documents refer to the Veritas
Cluster Server documents version 5.0 for Solaris.
About configuring VCS in an LDom environment
VCS limitations
The following limitations apply to using VCS in an LDom environment:
■ VCS does not support the use of alternate I/O domains, because alternate
I/O domains can result in the loss of high availability.
■ This release of VCS does not support attaching raw physical disks or
slices to LDoms. Such configurations may cause data corruption either during
an LDom failover or if you try to manually bring up the LDom on a different
system.
For details on supported storage configurations, see “Storage
configurations” on page 42.
■ Each LDom configured under VCS must have at least two VCPUs. With one
VCPU, the control domain always registers 100% CPU utilization for the
LDom. This is an LDom software issue.
Configuration scenarios
Figure 3-1 shows the basic dependencies for an LDom resource.
(The figure shows an LDom resource that depends on Storage and Network
resources.)
Network configuration
Use the NIC agent to monitor the primary network interface, whether it is
virtual or physical. Use the interface that appears using the ifconfig command.
Figure 3-2 is an example of an LDom service group. The LDom resource requires
both network (NIC) and storage (Volume and DiskGroup) resources.
For more information about the NIC agent, refer to the Veritas Cluster Server
Bundled Agents Reference Guide.
Storage configurations
Depending on your storage configuration, use a combination of the Volume,
DiskGroup, and Mount agents to monitor storage for LDoms.
Note that VCS in an LDom environment supports only volumes or flat files in
volumes that are managed by VxVM.
Figure 3-2 The LDom resource can depend on many resources, or just the NIC,
Volume, and DiskGroup resources depending on environment
(The figure shows an LDom resource that depends on Volume and NIC resources;
the Volume resource depends on a DiskGroup resource.)
For more information about the Volume and DiskGroup agents, refer to the
Veritas Cluster Server Bundled Agents Reference Guide.
Image files
Use the Mount, Volume, and DiskGroup agents to monitor an image file.
Figure 3-3 shows how the Mount agent works with different storage resources.
Figure 3-3 The Mount resource in conjunction with different storage resources
(The figure shows an LDom resource that depends on Mount and NIC resources;
the Mount resource depends on a Volume resource, which in turn depends on a
DiskGroup resource.)
For more information about the Mount agent, refer to the Veritas Cluster Server
Bundled Agents Reference Guide.
For complete information about using and managing service groups, either
through CLI or GUI, refer to the Veritas Cluster Server User’s Guide.
Note that if VCS is not running while you run the executable, it saves the file for
your use.
For more information on using the hagrp and hares commands, refer to the
Veritas Cluster Server User’s Guide.
7 You are now prompted to select the failover nodes for the service group (the
system list):
VCS systems:
------------
1) sysA
2) sysB
a) All systems
q) Quit
9 Review the output as the wizard builds the service group. The service group
is offline until you complete the post-wizard tasks, and bring it online.
Post-wizard tasks
After you have run the wizard, before you bring the service group online, return
to any Mount resources and set their FsckOpt attributes.
When you want VCS to automatically create LDoms on nodes, you must set the
value of the CfgFile attribute in the LDom agent. If you already have LDoms
created on other nodes in the cluster, the LDoms must have the same LDom
name.
Make any other resource configuration changes as required.
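For example, these attributes can be set with the hares command; the
resource names (bootimage_mnt and ldg1) and the XML file path here are
illustrative:
# haconf -makerw
# hares -modify bootimage_mnt FsckOpt %-y
# hares -modify ldg1 CfgFile /ldoms/ldg1.xml
# haconf -dump -makero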
You must install and configure VCS in at least two control domains to form a
VCS cluster.
Follow these steps to configure VCS inside the control domain and guest
domains:
■ “Creating a logical domain” on page 48
■ “Installing and configuring one-node VCS inside the logical domain” on
page 48
■ “Installing and configuring VCS inside the control domain” on page 49
4 Add a VCS user (lsg1-admin) with the minimum privilege as the group
operator of the VCS service group (lsg1).
Refer to Veritas Cluster Server Installation Guide to perform a single node VCS
installation in the logical domains.
Note: If you create the RemoteGroup resource as part of the LDom service group,
then the RemoteGroup resource state remains as UNKNOWN if the LDom is
down. So, VCS does not probe the service group and cannot bring the LDom
online. The online global firm dependency between the service groups allows
VCS to fail over a faulted child LDom service group independent of the state of
the parent RemoteGroup service group.
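The online global firm dependency mentioned in the note can be created with
the hagrp command; the parent and child group names here are illustrative:
# hagrp -link app_sg ldom_sg online global firm
In this example, app_sg is the service group that contains the RemoteGroup
resource and ldom_sg is the LDom service group.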
Perform the following steps to install and configure VCS inside the control
domain:
(The figure shows two service groups: the parent group contains a
RemoteGroup resource that monitors the application running in the guest, and
it has an online global firm dependency on the LDom service group, in which
the LDom resource depends on Mount and NIC resources, the Mount resource
depends on a Volume resource, and the Volume resource depends on a DiskGroup
resource.)
LDom agent
The LDom agent brings LDoms online, takes them offline, and monitors them.
Limitations
The LDom agent requires at least two VCPUs per LDom.
Dependencies
The LDom resource depends on the NIC resource. It can depend on the Volume
or Mount resources in different environments.
Network resources
Use the NIC agent to monitor the network adapter for the LDom.
Storage resources
■ Veritas Volume Manager (VxVM) exposed volumes
Use the Volume and DiskGroup agents to monitor a VxVM volume.
■ Image file
Use the Mount, Volume, and DiskGroup agents to monitor an image file.
■ Primary network interface
Use the NIC agent to monitor the primary network interface, whether it is
virtual or physical.
Agent functions
State definitions
FAULTED Indicates that the LDom is down when VCS expected it to be up and
running. 100% CPU utilization of the LDom is detected as a fault.
UNKNOWN Indicates the agent cannot determine the LDom’s state. A configuration
problem likely exists in the VCS resource or the LDom.
Attributes
Required
attribute Description
CfgFile The absolute location of the XML file that contains the LDom
configuration. The Online agent function uses this file to create
LDoms as necessary. Refer to the ldm(1M) man page for
information on this file.
Type-dimension: string-scalar
Default: n/a
NumCPU The number of virtual CPUs that you want to attach to the LDom
when it is online. If you set this to a positive value, the agent
detaches all of the VCPUs when the service group goes offline. Do
not reset this value to zero after setting it to 1.
Type-dimension: integer-scalar
Default: 0
Sample configurations
LDom ldg1 (
LDomName = ldg1
)
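A fuller main.cf fragment, consistent with the resource dependencies shown
in Figure 3-3, might look like the following. The group name, resource
names, device names, and attribute values are illustrative assumptions:
group ldom_sg (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA }
    )

    DiskGroup ldg1_dg (
        DiskGroup = "bootdisk-dg"
        )

    Volume ldg1_vol (
        Volume = "bootdisk1-vol"
        DiskGroup = "bootdisk-dg"
        )

    Mount ldg1_mnt (
        MountPoint = "/fs1"
        BlockDevice = "/dev/vx/dsk/bootdisk-dg/bootdisk1-vol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    NIC ldg1_nic (
        Device = e1000g0
        )

    LDom ldg1 (
        LDomName = ldg1
        )

    ldg1_vol requires ldg1_dg
    ldg1_mnt requires ldg1_vol
    ldg1 requires ldg1_mnt
    ldg1 requires ldg1_nic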