Adding a Node to a 10g RAC Cluster [ID 270512.1]


Modified 24-JAN-2011

Type BULLETIN

Status PUBLISHED

PURPOSE
-------
The purpose of this note is to provide a guide for adding a cluster node
to an Oracle 10g Real Application Clusters environment.
SCOPE & APPLICATION
-------------------
This document can be used by DBAs and support analysts who need to add a
cluster node, or to assist someone else in adding one, in a 10g Unix Real
Application Clusters environment. If you are on 10gR2 (10.2.0.2 or
higher), please refer to the documentation for more up-to-date steps.
PREREQUISITE
------------
All nodes from the initial cluster installation must be up and running
before adding a new node.
If a down node (due to a hardware/OS problem) must be replaced with a new
node using this addNode procedure, first remove the bad node as described
in Note 466975.1 before proceeding with addNode.
ADDING A NODE TO A 10g RAC CLUSTER
----------------------------------
The most important steps that need to be followed are:

A. Configure the OS and hardware for the new node.
B. Add the node to the cluster.
C. Add the RAC software to the new node.
D. Reconfigure listeners for the new node.
E. Add instances via DBCA.

Here is a breakdown of the above steps.


A. Configure the OS and hardware for the new node.
--------------------------------------------------
Please consult the available OS vendor documentation for this step.
See Note 264847.1 for network requirements. Also verify that the OCR and
voting files are visible from the new node with correct permissions.
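As a quick sanity check for the network prerequisites, a small script along these lines can confirm that the new node's public and private names are present in a hosts file. This is only a sketch: the names node3 and node3-priv, and the default file path, are assumptions for illustration.

```shell
#!/bin/sh
# Sanity-check sketch: confirm the new node's public and private names
# appear in a hosts file. "node3" and "node3-priv" are assumed names.
check_host_entry() {
  # $1 = hosts file, $2 = hostname to look for
  awk -v h="$2" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == h) found = 1 }
                 END { exit found ? 0 : 1 }' "$1"
}

hostsfile=${1:-/etc/hosts}
for name in node3 node3-priv; do
  if check_host_entry "$hostsfile" "$name"; then
    echo "$name: present in $hostsfile"
  else
    echo "$name: MISSING from $hostsfile"
  fi
done
```

Remember that each name must also be pingable from every existing cluster node, which this file check alone does not prove.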
B. Add the node to the cluster.
-------------------------------
1. If the CRS Home is owned by root and you are on a version < 10.1.0.4,
   change the ownership of the CRS Home directories on all nodes to the
   Oracle user so that OUI can read and write to these directories.

2. Set the DISPLAY environment variable and run the addNode.sh script from
   $ORA_CRS_HOME/oui/bin on one of the existing nodes as the oracle user.
   Example:
   DISPLAY=ipaddress:0.0; export DISPLAY
   cd $ORA_CRS_HOME/oui/bin
   ./addNode.sh

3. The OUI Welcome screen will appear, click next.

4. On the "Specify Cluster Nodes to Add to Installation" screen, add the
   public and private node names (these should exist in /etc/hosts and
   should be pingable from each of the cluster nodes), click next.

5. The "Cluster Node Addition Summary" screen will appear, click next.

6. The "Cluster Node Addition Progress" screen will appear. You will
   then be prompted to run rootaddnode.sh as the root user. First verify
   that the CLSCFG information in the rootaddnode.sh script is correct.
   It should contain the new public and private node names and node
   numbers. Example:
   $CLSCFG -add -nn <new node public name>,2 -pn <new node private name>,2 -hn <new node hostname>,2
   Then run the rootaddnode.sh script on the EXISTING node you ran
   addNode.sh from. Example:
   su root
   cd $ORA_CRS_HOME
   sh -x rootaddnode.sh
   Once this is finished, click OK in the dialog box to continue.

7. At this point another dialog box will appear; this time you are
   prompted to run $ORA_CRS_HOME/root.sh on all the new nodes.
   If you are on a version < 10.1.0.4 then:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN... 6/4/2011


   - Locate the highest numbered NEW cluster node using
     "$ORA_CRS_HOME/bin/olsnodes -n".
   - Run the root.sh script on this highest numbered NEW cluster node.
   - Run the root.sh script on the rest of the NEW nodes in any order.
   For versions 10.1.0.4 and above, the root scripts can be run on the NEW
   nodes in any order.
   Example:
   su root
   cd $ORA_CRS_HOME
   sh -x root.sh
   If there are any problems with this step, refer to Note 240001.1.
   Once this is finished, click OK in the dialog box to continue.
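For the pre-10.1.0.4 ordering above, a small helper can pick out the highest-numbered node from the olsnodes output. This is a sketch that assumes the one-pair-per-line "<nodename> <nodenumber>" format printed by "olsnodes -n":

```shell
# Sketch: print the name of the highest-numbered node from "olsnodes -n"
# output, which lists one "<nodename> <nodenumber>" pair per line.
highest_node() {
  sort -k2 -n | awk 'END { print $1 }'
}

# Usage on a cluster: $ORA_CRS_HOME/bin/olsnodes -n | highest_node
```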
8. After running the CRS root.sh on all new nodes, as the root user run
   $ORA_CRS_HOME/bin/racgons add_config <new node1 hostname>:4948 <new node2 hostname>:4948 ...
   from any node.

9. Next you will see the "End of Installation" screen. At this point you
   may exit the installer.

10. Change the ownership of all CRS Homes back to root.

C. Add the Oracle Database software (with the RAC option) to the new node.
--------------------------------------------------------------------------
1. On a pre-existing node, cd to the $ORACLE_HOME/oui/bin directory and
   run the addNode.sh script. Example:
   DISPLAY=ipaddress:0.0; export DISPLAY
   cd $ORACLE_HOME/oui/bin
   ./addNode.sh

2. The OUI Welcome screen will appear, click next.

3. On the "Specify Cluster Nodes to Add to Installation" screen, specify
   the node you want to add, click next.

4. The "Cluster Node Addition Summary" screen will appear, click next.

5. The "Cluster Node Addition Progress" screen will appear. You will
   then be prompted to run root.sh as the root user. Example:
   su root
   cd $ORACLE_HOME
   ./root.sh
   Once this is finished, click OK in the dialog box to continue.
6. Next you will see the "End of Installation" screen. At this point you
   may exit the installer.

7. cd to the $ORACLE_HOME/bin directory and run the vipca tool with the
   new node list. Example:
   su root
   DISPLAY=ipaddress:0.0; export DISPLAY
   cd $ORACLE_HOME/bin
   ./vipca -nodelist <node1>,<node2>,...,<new node>

8. The VIPCA Welcome screen will appear, click next.

9. Add the new node's virtual IP information, click next.

10. You will then see the "Summary" screen, click finish.

11. You will now see a progress bar creating and starting the new CRS
    resources. Once this is finished, click OK, view the configuration
    results, and click on the exit button.

12. Verify that the interconnect information is correct with:
    oifcfg getif
    If it is not correct, change it with:
    oifcfg setif <interface>/<subnet>:<type>
    For example:
    oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
    or
    oifcfg setif -node <nodename> eth1/10.10.10.0:cluster_interconnect
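Note that the subnet value passed to oifcfg setif is the network address of the interface, not a host IP. As a sketch, assuming IPv4 dotted-quad input, it can be derived from an interface IP and its netmask like this:

```shell
# Sketch: compute the network address (the subnet value oifcfg expects)
# from an interface IP and its netmask, e.g. 10.10.10.37/255.255.255.0
# yields 10.10.10.0. Assumes IPv4 dotted-quad input.
subnet_of() {
  # $1 = interface IP, $2 = netmask
  IFS=. read -r a b c d <<EOF
$1
EOF
  IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}
```

For example, subnet_of 10.10.10.37 255.255.255.0 prints 10.10.10.0, the value used in the -global example above.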

D. Reconfigure listeners for the new node.
------------------------------------------
1. Run NETCA on the NEW node to verify that the listener is configured on
   the new node. Example:
   DISPLAY=ipaddress:0.0; export DISPLAY
   netca

2. Choose "Cluster Configuration", click next.


3. Select all nodes, click next.

4. Choose "Listener configuration", click next.

5. Choose "Reconfigure", click next.

6. Choose the listener you would like to reconfigure, click next.

7. Choose the correct protocol, click next.

8. Choose the correct port, click next.

9. Choose whether or not to configure another listener, click next.

10. You may get an error message saying, "The information provided for this
    listener is currently in use by another listener...". Click yes to
    continue anyway.

11. The "Listener Configuration Complete" screen will appear, click next.

12. Click "Finish" to exit NETCA.

13. Run crs_stat to verify that the listener CRS resource was created.
    Example:
    cd $ORA_CRS_HOME/bin
    ./crs_stat

14. The new listener will likely be offline. Start it by starting the
    nodeapps on the new node. Example:
    srvctl start nodeapps -n <new node name>

15. Use crs_stat to confirm that all VIPs, GSDs, ONSs, and listeners are
    ONLINE.
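To spot any resources that did not come up after the node addition, the crs_stat output can be filtered. This sketch assumes the default crs_stat output format of NAME=/TYPE=/TARGET=/STATE= lines per resource:

```shell
# Sketch: print the names of resources whose STATE is not ONLINE,
# reading "crs_stat" output (NAME=/TYPE=/TARGET=/STATE= blocks) on stdin.
offline_resources() {
  awk -F= '/^NAME=/  { name = $2 }
           /^STATE=/ { if ($2 !~ /^ONLINE/) print name }'
}

# Usage on a cluster: $ORA_CRS_HOME/bin/crs_stat | offline_resources
```

An empty result means every VIP, GSD, ONS, and listener resource reported ONLINE.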

E. Add instances via DBCA. (For databases involving a standby, see section F first.)
------------------------------------------------------------------------------------
1. To add new instances, launch DBCA from a pre-existing node. Example:
   DISPLAY=ipaddress:0.0; export DISPLAY
   dbca
2. On the welcome screen, choose "Oracle Real Application Clusters",
   click next.

3. Choose "Instance Management", click next.

4. Choose "Add an Instance", click next.

5. Choose the database you would like to add an instance to and specify
   a user with SYSDBA privileges, click next. Click next again.

6. Choose the correct instance name and node, click next.

7. Review the storage screen, click next.

8. Review the summary screen, click OK and wait a few seconds for the
   progress bar to start.

9. Allow the progress bar to finish. When asked if you want to perform
   another operation, choose "No" to exit DBCA.

10. To verify success, log into one of the instances and query
    gv$instance; you should now see all nodes.

F. Adding instances to the database when there is a standby database in place.
------------------------------------------------------------------------------
Depending on the current Data Guard configuration there are several steps
to perform. The steps depend on which cluster the new node is being added
to and on how many nodes and instances will be on the primary and standby
sites at the end.
Possible cases are:

1. When adding a node to the primary cluster only.

   In this case we need to execute all the steps described in E (above) to
   add the instance to the primary cluster, recreate the standby controlfile
   after adding the new instance/thread to the primary DB, and alter the
   standby database to add the standby redo logs for the new thread.
   Note that re-creating the standby controlfile is necessary for a physical
   standby database but optional for a logical standby database.
   Example commands
   (thread 3 was added to the primary and only 2 redo log groups per thread):

a. Follow all steps in E (described above) to add the primary instance.

b. Create a new standby controlfile from the primary database and copy it
   to the standby.
   On primary:
   alter database create standby controlfile as '/u01/stby.ctl';

c. Add the standby redo logs on the standby after the database has been
   mounted with the new controlfile:
   On standby:
   alter database add standby logfile thread 3
   group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
   group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;
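When several groups (or several threads) have to be added, statements like the one above can be generated rather than typed by hand. This sketch follows the srl_<thread>_<group>.dbf naming used in the examples; the directory argument and the hard-coded 100m size are assumptions for illustration:

```shell
# Sketch: generate an ADD STANDBY LOGFILE statement for one thread,
# naming files srl_<thread>_<group>.dbf as in the examples above.
gen_srl_sql() {
  # $1 = thread, $2 = first group number, $3 = group count, $4 = directory
  t=$1; g=$2; n=$3; dir=$4
  printf 'alter database add standby logfile thread %s\n' "$t"
  i=0
  while [ "$i" -lt "$n" ]; do
    sep=,
    [ "$i" -eq $((n - 1)) ] && sep=';'
    printf "group %s ('%s/srl_%s_%s.dbf') size 100m%s\n" \
      "$((g + i))" "$dir" "$t" "$((g + i))" "$sep"
    i=$((i + 1))
  done
}

# Usage: gen_srl_sql 3 11 2 /u01/oradata/DR reproduces the statement above.
```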

2. When adding a node to the standby cluster only.

   If the new node was added to the standby cluster, we also need to know
   how many nodes are currently in the primary and standby clusters in
   order to know what to do. The options are:

2.1 When the current number of nodes on the primary cluster is greater than
    or equal to the number of nodes on the standby cluster.

    We assume the thread and instance on the primary database have already
    been created, and we now want to add the instance to the standby cluster
    only. In this case we need to create the standby redo logs for the new
    thread on the standby (if they do not already exist), update the
    init.ora or spfile for the new standby instance, and register the new
    standby instance with CRS using srvctl.
    Example commands
    (a 3rd node was added to the standby cluster and only 2 redo log groups
    per thread):

a. Add standby redo logs:
   alter database add standby logfile thread 3
   group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
   group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;

b. Update the init.ora or spfile parameters such as thread, instance_name,
   instance_number, local_listener, undo_tablespace, etc.,
   for the new standby instance.
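As an illustration only, the new standby instance's entries in a shared init.ora/spfile might look like the following; the DR3 instance name, undo tablespace, and listener alias are assumptions, not values from this note:

```
DR3.thread=3
DR3.instance_number=3
DR3.instance_name=DR3
DR3.undo_tablespace=UNDOTBS3
DR3.local_listener=LISTENER_DR3
```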

c. Register the new instance with CRS using srvctl:
   $ srvctl add instance -d DB_NAME -i INSTANCE3 -n NEW_STANDBY_NODE3

2.2 When the current number of nodes on the primary cluster is lower than
    the number of nodes on the standby cluster.

    In this case we need to add a new public thread on the primary database
    and enable it, even if there will be no primary instance for that
    thread, then recreate the standby's controlfile and follow the same
    steps as in F.2.1 to add the new set of standby redo logs and register
    the new instance with CRS.
    Example commands
    (a 3rd node was added to the standby cluster and only 2 redo log groups
    per thread):

a. On the primary, create the new thread and enable it:
   alter database add logfile thread 3
   group 9 ('/u01/oradata/prod/rl_3_9.dbf') size 100m,
   group 10 ('/u01/oradata/prod/rl_3_10.dbf') size 100m;
   alter database enable public thread 3;

b. Create a new standby controlfile from the primary:
   alter database create standby controlfile as '/u01/stby.ctl';

c. On the standby (after the database has been mounted with the new
   controlfile):
   alter database add standby logfile thread 3
   group 11 ('/u01/oradata/DR/srl_3_11.dbf') size 100m,
   group 12 ('/u01/oradata/DR/srl_3_12.dbf') size 100m;

d. Update the init.ora or spfile parameters such as thread, instance_name,
   instance_number, local_listener, undo_tablespace, etc.,
   for the new standby instance.

e. Register the new standby instance with CRS using srvctl:
   $ srvctl add instance -d DB_NAME -i INSTANCE3 -n NEW_STANDBY_NODE3

3. When adding a node to both the primary and standby clusters.

   In this case follow all steps in both F.1 and F.2, in that same order.

RELATED DOCUMENTS
-----------------
Oracle Real Application Clusters Administrator's Guide - Chapter 5
Oracle Database 10g High Availability (Oracle Press), Chapter 5, pp. 28-34
Note 239998.1
Note 466975.1
