Step-by-Step Guide to Deleting/Removing a Node from 11g R1 RAC on Linux

By Bhavin Hingu

bhavin@oracledba.org

 


 

This document explains the step-by-step process of removing/deleting a RAC node from an 11g R1 cluster. In this process, I am going to remove a single node (node2-pub) from a 2-node RAC cluster online, without affecting the availability of the RAC database running on ASM.

 

Existing RAC Setup:

 

node1-pub
    Private interconnect:    node1-prv
    VIP:                     node1-vip
    CRS_HOME:                /u01/app/crs
    ASM_HOME (local):        /u01/app/asm/product/11gr1
    ORACLE_HOME (local):     /u01/app/oracle/product/11g/db_2
    ASM instance:            +ASM1
    Instance for "test" DB:  test1

node2-pub
    Private interconnect:    node2-prv
    VIP:                     node2-vip
    CRS_HOME:                /u01/app/crs
    ASM_HOME (local):        /u01/app/asm/product/11gr1
    ORACLE_HOME (local):     /u01/app/oracle/product/11g/db_2
    ASM instance:            +ASM2
    Instance for "test" DB:  test2
 

Node to be deleted: node2-pub

 

Task List (to be executed in order):

 

Modify the Database Service configuration

Delete Database Instance test2 on node2-pub

Remove ASM Instance on node2-pub

Delete LISTENER on node2-pub

Remove DB_HOME, ASM_HOME on node2-pub

Remove the nodeapps on node2-pub

Update Inventory on remaining nodes for DB_HOME and ASM_HOME

Remove Oracle Clusterware (crs) on node2-pub

Update the CRS Inventory with the node list on the remaining nodes (node1-pub)

Verify that the node node2-pub is removed from the Cluster

 

 

Let's get the current status of CRS on all the nodes. The output below shows that the DB instances, ASM instances, and services are all up and running on both nodes.

 

Snap1.jpg

 

Modify Database Services:

 

Update the database service so that it runs on all the nodes except the node that is being deleted. This is done by modifying the service and specifying the appropriate instance as the preferred instance, i.e. the instance where the service should run at startup. Here, the preferred instance will be "test1", where we want test_srv to run once test2 is deleted.

 

srvctl status service -d test

srvctl stop service -d test -s test_srv -i test2

srvctl config service -d test

srvctl modify service -d test -s test_srv -n -i test1

srvctl config service -d test

 

Snap2.jpg

 

This task is also taken care of by dbca as part of deleting the instance.

 

Delete Database Instance test2 on node2-pub:

 

Remove all the database instances running on the node that is being deleted using dbca. In my case, there is only one DB instance, test2, running on this node. The instances on node2-pub should be up and running before you start dbca to delete the instance.

 

Invoke dbca from any node in the cluster other than the one that is being deleted; here it is node1-pub. Verify that the instance test2 is successfully deleted from node2-pub. The redo thread for instance test2 should no longer exist in the database 'test'.
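
For reference, the same instance deletion can also be run non-interactively. A minimal sketch using dbca in silent mode (verify the exact flags with dbca -help on your release; the SYS password placeholder is yours to supply), followed by a quick check that the redo thread for test2 is gone:

dbca -silent -deleteInstance -nodeList node2-pub -gdbName test -instanceName test2 -sysDBAUserName sys -sysDBAPassword <sys_password>

sqlplus -s / as sysdba <<'EOF'
-- the thread that belonged to test2 should no longer appear
SELECT thread#, status, enabled FROM v$thread;
SELECT inst_id, instance_name FROM gv$instance;
EOF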

 

Snap3.jpg

 

Snap4.jpg

 

Snap5.jpg

 

Snap6.jpg

 

Snap7.jpg

 

Snap8.jpg Snap9.jpg

 

Snap10.jpg

 

Snap11.jpg

 

 

The database resource should not be running on the node that is going to be removed from the cluster. In the crs_stat output, the database resource ora.test.db is running on node2-pub, so it needs to be relocated to node1-pub.

 

srvctl config database -d test

crs_relocate ora.test.db

crs_stat -t

 

Snap13.jpg

 

 

Remove the ASM Instance +ASM2 on node2-pub:

 

srvctl stop asm -n node2-pub

srvctl remove asm -n node2-pub

srvctl config asm -n node2-pub

srvctl config asm -n node1-pub

crs_stat -t

 

Snap14.jpg

 

Delete LISTENER on node2-pub:

 

After deleting the ASM instance on node2-pub, remove the LISTENER running on this node using the netca utility. Execute netca from the ASM_HOME if the LISTENER is configured in the ASM_HOME.
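
Before launching netca, it helps to confirm which HOME the listener is actually running from. A quick check (the listener is also reported as part of the nodeapps):

ps -ef | grep tnslsnr | grep -v grep
srvctl status nodeapps -n node2-pub

The binary path shown by ps tells you whether the listener runs out of the ASM_HOME or the DB_HOME, and therefore which HOME to run netca from.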

 

Snap22.jpg Snap15.jpg Snap16.jpg Snap17.jpg Snap18.jpg Snap19.jpg Snap21.jpg

 

 

Stop the nodeapps on node2-pub:

 

srvctl stop nodeapps -n node2-pub

 

Snap23.jpg

 

Remove the DB_HOME and ASM_HOME from node2-pub:

 

DB_HOME:

 

·         Connect to node2-pub as the oracle user using an X-terminal.

·         Set the ORACLE_HOME to DB HOME (in my case, it is /u01/app/oracle/product/11g/db_2)

·         Update the Oracle Inventory, setting CLUSTER_NODES to null, to DETACH this ORACLE_HOME from the rest of the nodes in the cluster so that runInstaller removes the ORACLE_HOME from the local node node2-pub only.

·         De-Install the ORACLE_HOME.

 

export ORACLE_HOME=/u01/app/oracle/product/11g/db_2

echo $ORACLE_HOME

cd /u01/app/oracle/product/11g/db_2/oui/bin

./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local

./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local

 

NOTE: 11g is not certified on CentOS, so I have to use the -ignoreSysPrereqs option.

Repeat the same procedure for the ASM HOME.

 

Snap24.jpg

 

ASM_HOME:

 

·         Connect to node2-pub as the oracle user using an X-terminal.

·         Set the ORACLE_HOME to ASM HOME (in my case, it is /u01/app/asm/product/11gr1)

·         Update the Oracle Inventory, setting CLUSTER_NODES to null, to DETACH this ORACLE_HOME from the rest of the nodes in the cluster so that runInstaller removes the ORACLE_HOME from the local node node2-pub only.

·         De-Install the ORACLE_HOME.

 

export ORACLE_HOME=/u01/app/asm/product/11gr1

echo $ORACLE_HOME

cd /u01/app/asm/product/11gr1/oui/bin

./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local

./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local

 

Snap25.jpg

 

Remove nodeapps from node2-pub:

 

Connect as oracle on any of the remaining nodes and execute the command below. Make sure that the nodeapps are not ONLINE on node2-pub; if they are, stop them before removing them (a quick check is shown below).
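
A quick pre-check, reusing the srvctl commands already shown in this document:

srvctl status nodeapps -n node2-pub
srvctl stop nodeapps -n node2-pub        # only needed if anything is still reported as running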

 

srvctl remove nodeapps -n node2-pub

 

Snap26.jpg

 

Update Inventory of DB_HOME and ASM_HOME on the remaining Nodes:

 

After removing the DB_HOME and ASM_HOME from node2-pub, the inventories for these HOMEs must be updated on the remaining nodes in the cluster with the new list of remaining nodes. Execute the commands below from any of the remaining nodes. The CLUSTER_NODES option must contain the list of all the nodes except the ones being deleted. In my case of a 2-node RAC, the only remaining node is node1-pub.

 

For DB_HOME:

 

·         Connect to node1-pub (one of the remaining nodes) as the oracle user using an X-terminal.

·         Set the ORACLE_HOME to DB HOME (in my case, it is /u01/app/oracle/product/11g/db_2)

·         Update the Oracle Inventory, setting CLUSTER_NODES to the list of remaining nodes.

 

export ORACLE_HOME=/u01/app/oracle/product/11g/db_2

echo $ORACLE_HOME

cd /u01/app/oracle/product/11g/db_2/oui/bin

./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub

 

For ASM_HOME:

 

·         Connect to node1-pub (one of the remaining nodes) as the oracle user using an X-terminal.

·         Set the ORACLE_HOME to ASM HOME (in my case, it is /u01/app/asm/product/11gr1)

·         Update the Oracle Inventory, setting CLUSTER_NODES to the list of remaining nodes.

 

export ORACLE_HOME=/u01/app/asm/product/11gr1

echo $ORACLE_HOME

cd /u01/app/asm/product/11gr1/oui/bin

./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub

 

Snap27.jpg

 

Remove CRS (Clusterware) from node2-pub:

 

Connect to the node being deleted (node2-pub) as root and execute the rootdelete.sh script to prepare it for the CRS removal.

 

/u01/app/crs/install/rootdelete.sh local nosharedvar nosharedhome
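
Once rootdelete.sh completes, the clusterware stack on node2-pub should be down. A simple sanity check (output wording varies by release) is:

/u01/app/crs/bin/crsctl check crs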

 

Snap28.jpg

 

From any of the remaining nodes, execute the rootdeletenode.sh script as the root user to remove node2-pub from the OCR. It needs the node name and node number, which can be obtained by running the olsnodes -n command-line utility.
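
olsnodes -n reports one "node_name node_number" pair per line; on this cluster the output would look roughly like this (illustrative, matching the setup described at the top of this document):

olsnodes -n
node1-pub       1
node2-pub       2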

 

/u01/app/crs/install/rootdeletenode.sh <node to be deleted>,<node_number>

 

/u01/app/crs/install/rootdeletenode.sh node2-pub,2

 

Snap29.jpg

 

Update the Inventory for CRS on the remaining Nodes:

 

Connect to any of the remaining nodes and execute the command below to update the inventory with the proper list of cluster nodes for the CRS_HOME. The inventory has already been updated for the DB_HOME and ASM_HOME. In my case, I connected to node1-pub and ran the command below.

 

export ORACLE_HOME=/u01/app/crs

echo $ORACLE_HOME

cd /u01/app/crs/oui/bin

./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub  CRS=TRUE

 

Snap30.jpg

 

Verify that the node is removed successfully:

 

Verify that the node has been removed successfully by checking the OCR with commands such as olsnodes. The Oracle Inventory should no longer list the removed node as part of the cluster, which can be checked by running lsinventory.
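
A minimal verification pass from node1-pub might look like the following (opatch lsinventory is shown here as one common way to run "lsinventory", assuming OPatch is present under the CRS_HOME):

olsnodes -n
crs_stat -t
/u01/app/crs/OPatch/opatch lsinventory -oh /u01/app/crs

olsnodes should now list only node1-pub, crs_stat should show no resources tied to node2-pub, and the inventory listing should no longer show node2-pub as part of the cluster.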

 

Snap32.jpg

 

On the deleted node, remove the OS directories for the DB_HOME, ASM_HOME, and CRS_HOME.
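
For example, as root on node2-pub, using the paths from the setup table at the top of this document (double-check them on your system before removing anything):

rm -rf /u01/app/oracle/product/11g/db_2
rm -rf /u01/app/asm/product/11gr1
rm -rf /u01/app/crs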

 

***** Node node2-pub has been deleted from the Cluster successfully! *****

 


 

 
