Oracle RAC: Changing the Public, VIP, and SCAN IPs
1. As the grid user, stop the database and the listener
srvctl stop database -d orcl -o immediate
srvctl stop listener
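(Not part of the original log) A quick check as the grid user to confirm both are down before proceeding:
[grid@dbhost01 ~]$ srvctl status database -d orcl
[grid@dbhost01 ~]$ srvctl status listener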
2. As root, stop CRS; run this on every node
crsctl stop crs
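As a sanity check (not in the original log), crsctl check crs on each node should now report that the cluster stack is not running:
[root@dbhost01 ~]# crsctl check crs
[root@dbhost02 ~]# crsctl check crs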
3. Edit the /etc/hosts file with the new addresses (back up the original file first)
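A minimal sketch of the new entries, using the addresses that appear later in this note; the -vip/-priv hostnames, dbhost02's public IP (192.168.31.21), and the private addresses are assumptions, so adjust them to your environment:
cp /etc/hosts /etc/hosts.bak
vi /etc/hosts
# public
192.168.31.20   dbhost01
192.168.31.21   dbhost02
# VIP
192.168.31.22   dbhost01-vip
192.168.31.23   dbhost02-vip
# SCAN
192.168.31.24   orcl-cluster-scan
# private (unchanged, 10.0.0.0 subnet per oifcfg below)
10.0.0.1        dbhost01-priv
10.0.0.2        dbhost02-priv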
4. Change the IP address of the public network interface
/etc/sysconfig/network-scripts/ifcfg-eth0
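A sketch of the relevant fields for dbhost01, assuming a static configuration; the GATEWAY value is an assumption, and lines such as HWADDR/UUID should be left untouched. Repeat on dbhost02 with its own address, then restart the network service:
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.31.20
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
[root@dbhost01 ~]# service network restart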
5. Start CRS
[root@dbhost01 ~]# crsctl start crs
[root@dbhost02 ~]# crsctl start crs
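(Not in the original log) Before continuing, you can confirm the stack came up on both nodes:
[root@dbhost01 ~]# crsctl check cluster -all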
6. Modify the public network configuration
-- Check the current settings
[root@dbhost01 ~]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.2.0 global public
[root@dbhost01 ~]# oifcfg delif -global eth0
[root@dbhost01 ~]# oifcfg setif -global eth0/192.168.31.0:public
Check the settings after the change:
[root@dbhost01 ~]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.31.0 global public
7. Modify the VIPs
The database, the listeners, and the VIPs must be stopped first; if you followed the steps above, the database and listeners are already stopped.
[root@dbhost01 ~]# srvctl stop vip -n dbhost01
PRCC-1017 : 192.168.2.22 was already stopped on dbhost01
PRCR-1005 : Resource ora.dbhost01.vip is already stopped
[root@dbhost01 ~]# srvctl stop vip -n dbhost02
[root@dbhost01 ~]# srvctl modify nodeapps -n dbhost01 -A 192.168.31.22/255.255.255.0/eth0
[root@dbhost01 ~]# srvctl modify nodeapps -n dbhost02 -A 192.168.31.23/255.255.255.0/eth0
Verify that the VIP changes are in place:
[grid@dbhost01 ~]$ srvctl config vip -n dbhost02
VIP exists: /192.168.31.23/192.168.31.23/192.168.31.0/255.255.255.0/eth0, hosting node dbhost02
[grid@dbhost01 ~]$ srvctl config vip -n dbhost01
VIP exists: /192.168.31.22/192.168.31.22/192.168.31.0/255.255.255.0/eth0, hosting node dbhost01
Start the VIPs:
[grid@dbhost01 ~]$ srvctl start vip -n dbhost01
PRKO-2420 : VIP is already started on node(s): dbhost01
[grid@dbhost01 ~]$ srvctl start vip -n dbhost02
[grid@dbhost01 ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:30:2D:6D
          inet addr:192.168.31.20  Bcast:192.168.31.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe30:2d6d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1044912 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3783536 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:241469598 (230.2 MiB)  TX bytes:5248362651 (4.8 GiB)

eth0:1    Link encap:Ethernet  HWaddr 08:00:27:30:2D:6D
          inet addr:192.168.31.22  Bcast:192.168.31.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Start the listener and the database:
[grid@dbhost01 ~]$ srvctl start listener
[grid@dbhost01 ~]$ srvctl start database -d orcl
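(Not in the original log) Optionally confirm that both instances are open again:
[grid@dbhost01 ~]$ srvctl status database -d orcl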
8. Modify the SCAN IP
Check the current SCAN IP configuration:
[grid@dbhost02 ~]$ srvctl config scan
SCAN name: 192.168.2.24, Network: 1/192.168.31.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /192.168.2.24/192.168.2.24
Stop the SCAN listener and the SCAN VIP:
[grid@dbhost02 ~]$ srvctl stop scan_listener
[grid@dbhost02 ~]$ srvctl stop scan
[grid@dbhost02 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
[grid@dbhost02 ~]$ srvctl stop scan
PRCC-1016 : scan1 was already stopped
PRCR-1005 : Resource ora.scan1.vip is already stopped
[grid@dbhost02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
As the root user, modify the SCAN IP:
[root@dbhost02 ~]# srvctl modify scan -n orcl-cluster-scan
[root@dbhost02 ~]# srvctl modify scan_listener -u
[root@dbhost02 ~]# srvctl start scan_listener
[root@dbhost02 ~]# srvctl config scan
SCAN name: orcl-cluster-scan, Network: 1/192.168.31.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /orcl-cluster-scan/192.168.31.24
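(Not in the original log) The SCAN listener itself can be verified the same way:
[grid@dbhost02 ~]$ srvctl status scan_listener
[grid@dbhost02 ~]$ srvctl config scan_listener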
Confirm that all cluster resources are healthy:
[root@dbhost02 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.BACKUP.dg
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.DATA.dg
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.FRA.dg
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.LISTENER.lsnr
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.VOTE.dg
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.asm
ONLINE ONLINE dbhost01 Started
ONLINE ONLINE dbhost02 Started
ora.gsd
OFFLINE OFFLINE dbhost01
OFFLINE OFFLINE dbhost02
ora.net1.network
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.ons
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
ora.registry.acfs
ONLINE ONLINE dbhost01
ONLINE ONLINE dbhost02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE dbhost02
ora.cvu
1 ONLINE ONLINE dbhost01
ora.dbhost01.vip
1 ONLINE ONLINE dbhost01
ora.dbhost02.vip
1 ONLINE ONLINE dbhost02
ora.oc4j
1 ONLINE ONLINE dbhost02
ora.orcl.db
1 ONLINE ONLINE dbhost01 Open
2 ONLINE ONLINE dbhost02 Open
ora.scan1.vip
1 ONLINE ONLINE dbhost02
Reference:
http://www.cnblogs.com/jyzhao/p/7265903.html