Recently, I successfully changed the NIC interface names for the public and private NICs in a two-node Oracle Clusterware 11.1.0.7 setup. The purpose of this change was to move from single NIC interfaces to teamed (bonded) NIC interfaces at the UNIX level, for high availability.

I would like to share the steps, which apply to Oracle Clusterware 10.1 through 11.2 on any UNIX/Linux platform.

Before:

(node1:root) /usr/crs/oracle/product/11.1/crs/bin # ./oifcfg getif
lan2 23.252.18.0 global public
lan4 10.250.48.0 global cluster_interconnect

(node2:root) /usr/crs/oracle/product/11.1/crs/bin # ./oifcfg getif
lan2 23.252.18.0 global public
lan4 10.250.48.0 global cluster_interconnect

(node1:root) /usr/oracle $ srvctl config nodeapps -n node1 -a
VIP exists.: /racvip1/23.252.18.91/255.255.255.0/lan2

(node2:root) /usr/oracle $ srvctl config nodeapps -n node2 -a
VIP exists.: /racvip2/23.252.18.92/255.255.255.0/lan2

Activity steps:

(1) Shut down CRS on both nodes (as root):

# /sbin/init.d/init.crs stop OR /etc/init.d/init.crs stop

(2) Hand over to the UNIX/Linux sysadmin to perform the NIC teaming or bonding on both nodes.
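On a Linux node, the sysadmin's bonding setup might look roughly like the sketch below. This is purely illustrative: the device names (bond0, eth0, eth1), IP addresses, bonding mode, and file paths are assumptions and depend on the distribution and your environment; they are not part of the Oracle-side procedure.

```
# RHEL-style sketch (illustrative; device names, IPs and mode assumed).
# /etc/sysconfig/network-scripts/ifcfg-bond0:
#   DEVICE=bond0
#   IPADDR=23.252.18.10
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BONDING_OPTS="mode=active-backup miimon=100"
#
# Each slave interface (e.g. ifcfg-eth0, ifcfg-eth1) then points at the bond:
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
```

Whatever the platform, the key point for the Oracle steps that follow is the name of the new teamed interface (lan900/lan901 in this case) and that it carries the same subnet as before.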

(3) Once the new NIC interfaces are up on both nodes, start CRS (as root):

# /sbin/init.d/init.crs start OR /etc/init.d/init.crs start
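Before moving on, it may be worth confirming that the clusterware stack came up healthy on each node, for example:

```
# Quick health check after the restart (as root, on each node).
cd $CRS_HOME/bin
./crsctl check crs     # CSS/CRS/EVM daemons healthy?
./crs_stat -t          # resource status overview (10g/11.1 syntax)
```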

(4) Shut down the database with srvctl (as oracle):

$ srvctl stop database -d <DBNAME> -o immediate

(5) Shut down the ASM instances on both nodes (as oracle):

$ srvctl stop asm -n node1
$ srvctl stop asm -n node2

(6) Shut down nodeapps on both nodes (as oracle):

$ srvctl stop nodeapps -n node1
$ srvctl stop nodeapps -n node2

(7) Perform these steps (as root):

For the public NIC (on one node only),
# cd $CRS_HOME/bin
# ./oifcfg delif -global lan2
# ./oifcfg setif -global lan900/23.252.18.0:public

For the private NIC (on one node only),
# cd $CRS_HOME/bin
# ./oifcfg delif -global lan4
# ./oifcfg setif -global lan901/10.250.48.0:cluster_interconnect

For the VIPs (as root),
at node1:
# cd $CRS_HOME/bin
# srvctl modify nodeapps -n node1 -A 23.252.18.91/255.255.255.0/lan900

at node2:
# cd $CRS_HOME/bin
# srvctl modify nodeapps -n node2 -A 23.252.18.92/255.255.255.0/lan900

(8) Verify on both nodes:

(node1:root) /usr/crs/oracle/product/11.1/crs/bin # ./oifcfg getif

lan900 23.252.18.0 global public
lan901 10.250.48.0 global cluster_interconnect

(node2:root) /usr/crs/oracle/product/11.1/crs/bin # ./oifcfg getif
lan900 23.252.18.0 global public
lan901 10.250.48.0 global cluster_interconnect

(node1:root) /usr/crs/oracle/product/11.1/crs/bin # ./srvctl config nodeapps -n node1 -a
VIP exists.: /racvip1/23.252.18.91/255.255.255.0/lan900

(node2:root) /usr/crs/oracle/product/11.1/crs/bin # ./srvctl config nodeapps -n node2 -a
VIP exists.: /racvip2/23.252.18.92/255.255.255.0/lan900

(9) You may either reboot the machines, or start nodeapps, ASM, services, and the database with srvctl (as oracle).
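If starting manually rather than rebooting, the sequence is simply the shutdown steps (4)-(6) in reverse; a sketch, with <DBNAME> as in step (4):

```
# As oracle:
srvctl start nodeapps -n node1
srvctl start nodeapps -n node2
srvctl start asm -n node1
srvctl start asm -n node2
srvctl start database -d <DBNAME>
# and any database services, e.g.:
# srvctl start service -d <DBNAME>
```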

Reference Notes from Metalink:

Note 276434.1: Modifying the VIP or VIP Hostname of a 10g or 11g Oracle Clusterware Node

Note 283684.1: How to Change Interconnect/Public Interface IP or Subnet in Oracle Clusterware