
Removing and Adding a Node in Oracle RAC 11g

Topology:
1 Openfiler (SAN) & 2 nodes

Openfiler:

Hostname: Openfiler
HDD: sda 30G, sdb 70G
eth0: 192.168.1.195/24 (gateway 192.168.1.1)
eth1: 192.168.2.195/24

Nodes:

            NODE 1                NODE 2
OS          Oracle RedHat 5       Oracle RedHat 5
HDD         sda 30G               sda 30G
eth0        192.168.1.151/24      192.168.1.152/24
eth1        192.168.2.151/24      192.168.2.152/24
SID         racdb                 racdb
Instance    racdb1                racdb2
Hostname    racnode1              racnode2

Steps:
1. Run DBCA from the remaining node:

root# xhost +
root# su - oracle
oracle$ export DISPLAY=:0.0
oracle$ dbca

Select [Oracle Real Application Clusters database], then click Next.
Select [Instance Management], then click Next.
Select [Delete Instance], then click Next.
Select the database, enter SYS and the SYS password, then click Next.
Select the instance you want to remove, then click Next.
Click OK when prompted.

Check that the instance has been removed from the cluster:
oracle$ srvctl config database -d racdb
Output:
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/racdb/spfilerac11g2.ora
Domain: mua60s.vn
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1
Disk Groups: DATA,FLASH
Mount point paths:
Services:
Type: RAC
Database is administrator managed
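The interactive DBCA session above can also be scripted. The sketch below only assembles the equivalent 11.2 silent-mode command line; the values match this topology, and the SYS password is deliberately left as a placeholder rather than hard-coded:

```shell
# Sketch: build the non-interactive equivalent of the DBCA wizard steps
# above, using DBCA's 11.2 silent mode. The password is a placeholder.
GDB_NAME=racdb          # global database name
INSTANCE_NAME=racdb2    # instance being removed
NODE=racnode2           # node that hosts the instance

DBCA_CMD="dbca -silent -deleteInstance -nodeList $NODE -gdbName $GDB_NAME -instanceName $INSTANCE_NAME -sysDBAUserName sys"

# Run this as oracle on the remaining node, appending the real password:
echo "$DBCA_CMD -sysDBAPassword <sys_password>"
```
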

2. Disable and Stop Listener:


Check the listener from any node:

grid$ srvctl config listener -a
Output:
Name: LISTENER
Network: 1, Owner: oracle
Home: /u01/app/11.2.0/grid on node(s) racnode1,racnode2
End points: TCP:1521

Disable the LISTENER:

grid$ srvctl disable listener -l LISTENER -n racnode2

Stop the LISTENER:

grid$ srvctl stop listener -l LISTENER -n racnode2

Update the inventory on the node to be deleted (racnode2):

oracle$ cd $ORACLE_HOME/oui/bin


oracle$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode2}" -local -silent
Output:
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4094 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
'UpdateNodeList' was successful.

Check /u01/app/oraInventory/ContentsXML/inventory.xml.

On racnode2:


<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
  <NODE_LIST>
    <NODE NAME="racnode2"/>
  </NODE_LIST>
</HOME>

On racnode1:
<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
  <NODE_LIST>
    <NODE NAME="racnode1"/>
    <NODE NAME="racnode2"/>
  </NODE_LIST>
</HOME>
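Rather than eyeballing the XML on each node, the registered node list for a home can be pulled out with a small helper. This is only a sketch: `list_home_nodes` is a hypothetical name, and the default inventory path is an assumption (this walkthrough shows both /u01/app and /opt/app inventory locations, so adjust INVENTORY to yours).

```shell
# Sketch: print the nodes registered for a given HOME in inventory.xml.
# The helper name and the default path are assumptions for this setup.
INVENTORY=${INVENTORY:-/u01/app/oraInventory/ContentsXML/inventory.xml}

list_home_nodes() {
  # Print every NODE NAME between the HOME entry and its closing tag.
  sed -n "/HOME NAME=\"$1\"/,/<\/HOME>/p" "$INVENTORY" \
    | grep -o 'NODE NAME="[^"]*"' \
    | sed 's/NODE NAME="//; s/"$//'
}

if [ -f "$INVENTORY" ]; then
  list_home_nodes OraDb11g_home1
fi
```

After `-updateNodeList ... -local` on racnode2, this should print only racnode2 there, while racnode1 still lists both nodes.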

3. Deinstall Oracle home:


On racnode2:

oracle$ cd $ORACLE_HOME/deinstall
./deinstall -local

On racnode1, update the inventory:

oracle$ cd $ORACLE_HOME/oui/bin


oracle$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode1}"

Check inventory.xml:

<HOME NAME="OraDb11g_home1" LOC="/opt/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
  <NODE_LIST>
    <NODE NAME="racnode1"/>
  </NODE_LIST>
</HOME>

4. Remove Grid Home


Check the status of the node to be deleted (run it on racnode2):

grid$ olsnodes -s -t

Example output:
racnode1  Active  Unpinned
racnode2  Active  Unpinned

If its status is Pinned, unpin it first (crsctl unpin css must be run as root):

root# crsctl unpin css -n racnode2

Then, on racnode2, deconfigure Oracle Clusterware as root:

root# cd /u01/app/11.2.0/grid/crs/install
root# ./rootcrs.pl -deconfig -force
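Whether an unpin is needed at all can be decided mechanically from the `olsnodes -s -t` output. The helper below is a sketch: `pinned_nodes` is a hypothetical name, and the awk assumes the three-column name/state/pin-state layout shown above.

```shell
# Sketch: filter `olsnodes -s -t` output down to the nodes that are
# still Pinned. Assumes the three-column layout:
#   <node> <Active|Inactive> <Pinned|Unpinned>
pinned_nodes() {
  awk '$3 == "Pinned" { print $1 }'
}

# Usage on a live cluster (as the grid owner):
#   olsnodes -s -t | pinned_nodes
```

If the function prints nothing, no `crsctl unpin css` is required before the deconfigure step.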

From racnode1, run this command as root:


root# crsctl delete node -n racnode2
Output:
CRS-4661: Node racnode2 successfully deleted.

On the node to be deleted, update the inventory:

grid$ cd /u01/app/11.2.0/grid/oui/bin
grid$ ./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={racnode2}" -silent -local CRS=TRUE

Deinstall grid home:


grid$ cd /u01/app/11.2.0/grid/deinstall
grid$ ./deinstall -local

5. Update Inventory and check:

On the remaining node (racnode1):

grid$ cd /u01/app/11.2.0/grid/oui/bin


grid$ ./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={racnode1}" CRS=TRUE

Verify the removal:

grid$ cluvfy stage -post nodedel -n racnode2

Example Output:
Performing post-checks for node removal
Checking CRS integrity...
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
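When the removal is scripted end to end, the cluvfy result can be tested instead of read. A minimal sketch (`nodedel_ok` is a hypothetical name; it matches the 11.2 success line shown above):

```shell
# Sketch: return success only if cluvfy reported the node-removal
# post-check as successful. Matches the 11.2 wording shown above.
nodedel_ok() {
  grep -q 'Post-check for node removal was successful'
}

# Usage:
#   cluvfy stage -post nodedel -n racnode2 | nodedel_ok && echo "node removed"
```
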