How to set up highly available NFS with Red Hat Cluster Suite and RHEL 5
Article ID: 60254 - Created on: Jun 8, 2011 2:16 PM - Last Modified: Jul 26, 2011 1:56 PM

Issue
- Need to set up highly available NFS
- Need clients to connect to an NFS service that can gracefully fail over to another system should the node providing the NFS service fail
- Would like a step-by-step guide illustrating how to set up such a service

Environment
- Red Hat Enterprise Linux (RHEL) 5 through RHEL 5.9
- Red Hat Cluster Suite (RHCS) entitlement or Advanced Platform
- 2 or more nodes to cluster
- Supported fence devices available for each node
- Shared storage provided by Fibre Channel or iSCSI

Resolution
1. Install RHEL 5 on the systems intended to be clustered and register them to RHN or a Satellite. Note: access to RHN or a Satellite with the correct channels synchronized is assumed in this article.
2. Log into RHN or Satellite and subscribe the nodes to the "RHEL Clustering" and "RHEL Cluster-Storage" child channels.
3. Install the cluster packages by running the following on all nodes:
# yum groupinstall Clustering Cluster-Storage
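A quick way to confirm the key packages landed before moving on (a hedged check; the exact package list depends on your channels, but cman, rgmanager, ricci, and lvm2-cluster are the pieces used later in this article):

# rpm -q cman rgmanager ricci lvm2-cluster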

4. Conga is the graphical utility for managing clusters. Conga has two parts: ricci and luci. Ricci is the agent that runs on your cluster nodes listening for commands. Luci is the graphical web application that you interact with. Conga administers the cluster by sending commands from luci to ricci as you interact with luci. Enable the ricci service on all nodes:
# chkconfig ricci on ; service ricci start
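To confirm that ricci is enabled and listening (a minimal check; ricci listens on TCP port 11111 by default):

# chkconfig --list ricci
# netstat -tlnp | grep ricci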

5. Choose a system to run luci on. You can elect to run luci on one of your cluster nodes or on a system outside of the cluster. If you intend to run luci on a node outside of the cluster the name of the package on RHN is luci. Installing luci will require that the system you intend to install on also be registered to RHN or a Satellite and registered to the Clustering channel. Choose a node with luci installed and perform the following to initialize and enable it:
# luci_admin init ; chkconfig luci on ; service luci start

6. In order to confirm that luci is operating, log into the luci graphical user interface via a web browser. It can be accessed at port 8084 via HTTPS on the system you started luci on. You will log in with the user name "admin" and the password you specified during the luci_admin init command. Note: Depending on the browser you use, you may receive an SSL certificate warning. This is normal.
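If you want to check from the command line before opening a browser, a request like the following should return the luci login page (the hostname is an assumption for your luci host; -k skips the self-signed certificate check):

# curl -k https://luci.example.com:8084/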

[Screenshot: luci1.png]
7. Before the cluster can be set up the nodes must be able to route their heartbeat traffic effectively. This requires that the cluster nodes be able to resolve each other by a unique name, preferably over a private network. This is often best accomplished by having a private network for the cluster nodes and using the /etc/hosts file to define names for the cluster. The names we specify for the nodes at cluster creation time must be resolvable, so it is imperative that this step is completed correctly. What follows is an example of an /etc/hosts file from a node in a functioning two-node cluster. Note that the cluster node names (node1.adrew.net and node2.adrew.net) do not resolve to 127.0.0.1 - this is important. Additionally, keep in mind that the node name mappings must be consistent between the nodes, so the other node would have an identical /etc/hosts file. Note: If you are using a node outside of the cluster to host luci, that host will also need to be able to resolve the cluster node names over the correct network.
[root@node1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.150.11  node1.adrew.net node1
192.168.150.12  node2.adrew.net node2
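A minimal sanity check of the name resolution above, run from node1 (repeat in the opposite direction from node2), using the hostnames from the example:

[root@node1 ~]# ping -c 3 node2.adrew.net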

8. Verify that name resolution works as expected by pinging each node from each other node. Note: Cluster heartbeat is carried via multicast traffic. If multicast is not functioning correctly on your network then the cluster will not form. This document assumes that multicast traffic is operational on the cluster's private network. For more information see here.
9. With name resolution working correctly you can now set up the cluster via luci. Log into the luci web interface and follow these steps:
i. Click on the "Cluster" tab at the top of the page.
ii. Click on "Create a New Cluster" on the left hand side of the page. You should now be on the "Create a New Cluster" page: [Screenshot: luci2.png]

iii. Enter the resolvable hostnames for each node (that we set up in step 7 above) into the "Node Hostname" boxes. Enter the root passwords for the nodes into the "Root Password" boxes. Enter the name you desire for your cluster in the "Cluster Name" box. Click the "Use locally installed packages" radio button. After all information is entered click the "Submit" button at the bottom of the page. You'll be brought to a cluster setup progress page. Once the process is complete you'll be brought to the configuration landing page for your cluster.

[Screenshot: luci3.png]

iv. The cluster should now be operational. Click on the "Cluster" tab to view the "Cluster List" page. If the cluster nodes and name show up in green then the cluster is up and running. If the cluster nodes or name are in red, or if there are any errors, then it is likely that something above was missed. Retrace the steps with an emphasis on name resolution. If the cause of the problem cannot be found then log a ticket with Red Hat Global Support Services in order to get help.
v. Note: Fencing is required for the cluster. You will need to set up a fence device for each node in order to have a fully supported and operational cluster. The general steps are: click on the "Cluster" tab at the top of the screen, then click on a node name, then update the "Main Fencing Method" portion at the bottom of the screen, and enter your fence device information. Repeat this for all nodes. The exact configuration required for your fence devices will depend on your hardware and environment and is thus beyond the scope of this document. Please refer to this document for more specific information on configuring fencing with luci.
vi. Once the cluster is up and operational with fencing we can move on to setting up the shared storage and the NFS service.
10. In order to have a highly available NFS server you will need shared storage to export NFS from. When exporting NFS, the filesystems EXT3, EXT4, or XFS are recommended as opposed to GFS or GFS2 due to the GFS/GFS2 locking layers. When using a single-host filesystem like EXT3, EXT4, or XFS it is important to take measures to prevent the filesystem from being mounted on multiple nodes at the same time, otherwise corruption will occur. For this we will use Highly Available LVM (HA/LVM). HA/LVM ensures that the LVM stack that the filesystem sits on can only be activated on one node in the cluster at a time. This protects against corruption due to accidentally mounting on multiple systems.
i. Edit /etc/lvm/lvm.conf on all nodes. Set the locking_type parameter to 3:
# Type of locking to use. Defaults to local file-based locking (1).
# Turn locking off by setting to 0 (dangerous: risks metadata corruption
# if LVM2 commands get run concurrently).
# Type 2 uses the external shared library locking_library.
# Type 3 uses built-in clustered locking.
# Type 4 uses read-only locking which forbids any operations that might
# change metadata.
locking_type = 3
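As an alternative to editing the file by hand, the lvmconf helper shipped with the LVM packages can make the same change; a sketch, run on each node, followed by a quick verification:

# lvmconf --enable-cluster
# grep locking_type /etc/lvm/lvm.conf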

ii. On all nodes start and enable the clustered LVM service:
[root@node1 ~]# chkconfig clvmd on ; service clvmd start
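To double-check on each node that clvmd came up and will start on boot:

[root@node1 ~]# service clvmd status
[root@node1 ~]# chkconfig --list clvmd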

iii. On one node create your clustered LVM stack and your filesystem on your shared storage. For this example we will use the multipath device mpath0p1. Your shared storage device(s) may be different. Note: You can tell that the previous steps were completed correctly if the output from vgcreate includes "Clustered volume group." If it does not mention "Clustered" then lvm.conf may be wrong or clvmd may not be running.
[root@node1 ~]# pvcreate /dev/mpath/mpath0p1
  Physical volume "/dev/mpath/mpath0p1" successfully created
[root@node1 ~]# vgcreate vg-halvm /dev/mpath/mpath0p1
  Clustered volume group "vg-halvm" successfully created
[root@node1 ~]# lvcreate --size 1G --name lv-nfs vg-halvm
  Logical volume "lv-nfs" created
[root@node1 ~]# mkfs.ext3 /dev/vg-halvm/lv-nfs
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

iv. Now run vgscan on the other nodes in the cluster. You should see output like the following showing that the new volume group is now active. If there are metadata locking errors on vgscan then it is likely that lvm.conf is not correct or that clvmd is not running. Make the correct adjustments and try again. You can also verify that the volume group is clustered by looking for the clustered flag in vgdisplay output:
[root@node2 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg-halvm" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
[root@node2 ~]# vgdisplay vg-halvm | grep -i cluster
  Clustered             yes
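Another quick check, assuming the volume group name from the example above: in vgs output the sixth character of the attribute string should be "c" for a clustered volume group:

[root@node2 ~]# vgs -o vg_name,vg_attr vg-halvm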

v. The HA/LVM stack is now ready for use in a clustered service. Note: Do not mount the filesystem on any of the nodes yet. Mounting the filesystem on multiple nodes at the same time will corrupt it. When using HA/LVM and a single-node filesystem it is important to restrict access to the filesystem to the clustered service and to not mount it by hand anywhere.

11. Now that the storage stack, networking, and basic cluster configuration are complete we can begin to configure our Highly Available NFS service. Follow the steps below to create your service:
i. Log into luci. Click on the "Cluster" tab at the top of the screen. Then click on the cluster's name, which should be in green. You should now be on the "Configure cluster properties" page: [Screenshot: step 11.png]
ii. Click "Resources" on the lower left side of the page then click "Add a Resource" when the option appears. Select "LVM" from the "Select a Resource Type" drop down box.
iii. You will now be on the "Add a Resource" "LVM Resource Configuration" page. Enter a uniquely identifiable name of your choice in the "Name" field. For example, I entered nfs-halvm. Enter the volume group name for the clustered volume group you created into the "Volume Group Name" field. Enter the name of the logical volume we created into the "Logical Volume Name" field. Hit the "Submit" button. After this completes you should be brought back to the resources page and it should reflect the new HALVM resource:

[Screenshot: step12.png]

iv. You will now repeat these same steps to add the remaining required resources to form the Highly Available NFS service: filesystem, IP address, and NFS client and export.
v. Click on "Resources" then click on "Add a Resource." Select "Filesystem" from the drop down box. In the "Name" field enter a uniquely identifiable name for the filesystem. In the "Mount point" field enter the directory to which you want the filesystem mounted. In the "Device" field enter the full path to the device node for the logical volume we created. In the "Filesystem ID" field enter a unique 5 or 6 digit number. After all data is entered hit the "Submit" button. Note: It says optional for the FSID field but for NFS it is not optional. In order for NFS failover to work correctly we need to specify a number here. [Screenshot: step13.png]
vi. Click "Resources" and then "Add a Resource." Select "NFS Export" from the drop down list. Enter a uniquely identifiable name in the "Name" field and hit submit. Note: The difference between the "NFS Export" and "NFS Client" resources can seem somewhat confusing without some explanation. Think of /etc/exports. On the left you have the directory tree you are exporting and on the right you have the client and access control options. With the cluster NFS resources it is somewhat the same. The "NFS Export" is the left side and inherits the mount point from the filesystem resource. The "NFS Client" is the right side and defines the client and access control options.
vii. Click "Resources" and then "Add a Resource." Select "NFS Client" from the drop down list. Enter a uniquely identifiable name in the "Name" field. Enter the IP or hostname of the system or systems you want to allow access to this NFS service, or a wildcard (*) if you want to grant access to any system that can reach this export. In the "Options" field enter your desired NFS client options such as ro, rw, or no_root_squash.

[Screenshot: step15.png]

Click the "Submit" button.
viii. Click "Resources" and then "Add a Resource." Select "IP Address" from the drop down. In the "Name" field enter a uniquely identifiable name. Enter the IP address on which you want the NFS service to be reached into the "IP Address" field. Check the "Monitor Link" checkbox. Note: In order for the virtual IP you want the NFS service to be reached on to operate, the IP will need to be on a subnet that one of your NICs already has an IP address on. The way the IP Address resource works is that it looks for a NIC with an address on the same subnet as the IP you specified and it runs a command to create a second IP address on that same NIC. All cluster nodes must have an interface with an IP address that is on the same subnet as your desired virtual IP. For example, I specified 192.168.122.200 because all of my nodes have interfaces on the 192.168.122.0/24 network. If I had no NICs on that network then my IP resource would fail to start when trying to start the service.
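Before settling on a virtual IP, it is worth confirming that every node really does have an interface on that subnet; a minimal check using the example's 192.168.122.0/24 network (your subnet will likely differ):

[root@node1 ~]# ip -4 addr show | grep "192.168.122."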

12. All of the resources that are required to build the Highly Available NFS service are now defined for the cluster: NFS client, NFS export, filesystem, HA/LVM stack, and IP address. The next steps are to combine the resources into a service.
i. Log into luci. Click on the "Cluster" tab at the top of the screen and then click on your cluster's name (in green lettering). Click "Services" in the menu on the left of the screen and then click "Add a Service."
ii. Enter a uniquely identifiable name for your service into the "Name" field. For example: HANFS.
iii. Check the "Automatically start this service" check box if you would like the service to start when you start the cluster. Leave it unchecked if you would like to start the service manually instead.
iv. Check the "enable NFS lock work arounds" check box.
v. Leave "Run exclusive" unchecked.
vi. There is no failover domain so leave the failover domain drop down box on "None." We have not defined a failover domain so this option is outside of the scope of this document. For more information on failover domains please see this document.
vii. The following three options will depend on your needs. The "Recovery Policy" dictates what the cluster will do if the service fails any of its status checks. Relocate will relocate the service to another node immediately. Recover will attempt to restart the service a set number of times and then will relocate the service to another node. You can specify how many times to attempt to restart the service before relocating in the box below. The Disable option will disable the service and it will require human intervention to bring it back up. The "length of time in seconds" option allows you to specify how long to wait between restarts before considering them contiguous restarts and possibly relocating. Select the options that you want based on your desired configuration.
viii. Click "Add a resource to this service." From the "Use an existing global resource" drop down select the HALVM resource you defined.
ix. Click the LVM resource's "Add a child" button. From the "Use an existing global resource" drop down select the filesystem resource you defined.
x. Click the filesystem resource's "Add a child" button. From the "Use an existing global resource" drop down select the NFS Export resource you defined.
xi. Click the NFS Export resource's "Add a child" button. From the "Use an existing global resource" drop down select the NFS Client resource you defined.
xii. Click the NFS Client resource's "Add a child" button. From the "Use an existing global resource" drop down select the IP Address resource you defined.
xiii. Click the "Submit" button.
xiv. After the submit is complete you will be brought to the service's administration page. The service's name will be in red because the service is not yet started. Select "Start this service" from the "Choose a task..." drop down menu. After a short wait the service will be enabled. The service's name will appear in green and the "Status:" text label will indicate that the service is now running and on which node it is running.
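Once the service shows green, the pieces can be verified on whichever node is running it; a sketch using the names and addresses from this example:

[root@node1 ~]# clustat
[root@node1 ~]# ip addr show | grep 192.168.122.200
[root@node1 ~]# mount | grep nfs-export
[root@node1 ~]# exportfs -v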

13. The Highly Available NFS service is now running and operational.
14. You can test relocating the service to another node by selecting the "Relocate this service to <node name>" option in the "Choose a task..." drop down list on the service administration page.
15. You can verify the service is operational by attempting to mount the NFS export from a system outside of the cluster:
Export list for 192.168.122.200:
/mnt/nfs-export *
[root@adrew-virts ~]# mount -t nfs 192.168.122.200:/mnt/nfs-export /mnt/nfs/
[root@adrew-virts ~]# mount -l -t nfs | grep nfs-export
192.168.122.200:/mnt/nfs-export on /mnt/nfs type nfs (rw,addr=192.168.122.200)
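As an alternative to relocating through luci, the rgmanager command-line tools can be used; a sketch using the service and node names from this example:

[root@node1 ~]# clustat
[root@node1 ~]# clusvcadm -r HANFS -m node2.adrew.net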

Addendum

Some customers may prefer to configure the cluster manually without the help of Conga. This is fine. The non-Conga-related steps in this document can still be used to set up the LVM stack, set up networking, and install the packages. Below is the cluster.conf file that the above example generated. Customers can use this cluster.conf to create their own service or services by hand:
<?xml version="1.0"?>
<cluster alias="adrew-test" config_version="7" name="adrew-test">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="node1.adrew.net" nodeid="1" votes="1">
            <fence/>
        </clusternode>
        <clusternode name="node2.adrew.net" nodeid="2" votes="1">
            <fence/>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm>
        <failoverdomains/>
        <resources>
            <lvm lv_name="lv-nfs" name="nfs-halvm" vg_name="vg-halvm"/>
            <fs device="/dev/vg-halvm/lv-nfs" force_fsck="0" force_unmount="0" fsid="66611" fstype="ext3" mountpoint="/mnt/nfs-export" name="nfs-filesystem"/>
            <nfsexport name="nfs-export"/>
            <nfsclient allow_recover="0" name="nfs-client" options="rw,no_root_squash" target="*"/>
            <ip address="192.168.122.200" monitor_link="1"/>
        </resources>
        <service autostart="1" exclusive="0" name="HANFS" nfslock="1" recovery="relocate">
            <lvm ref="nfs-halvm">
                <fs ref="nfs-filesystem">
                    <nfsexport ref="nfs-export">
                        <nfsclient ref="nfs-client">
                            <ip ref="192.168.122.200"/>
                        </nfsclient>
                    </nfsexport>
                </fs>
            </lvm>
        </service>
    </rm>
</cluster>
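If you do edit cluster.conf by hand, remember to increment config_version and propagate the new file to the other nodes. On RHEL 5 this is typically done with ccs_tool from the node where the edit was made (a sketch, assuming the cluster is running):

[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf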

Tags: nfs, cluster, howto
