This article walks through how to change the IP addresses of an Oracle RAC cluster on Linux. The procedure is straightforward and covers the public IPs, the VIPs, and the SCAN IP in turn.
IP addresses before the change:
#####PUBLIC IP #####
192.168.11.100 db1
192.168.11.200 db2
##### VIP #####
192.168.11.111 db1_vip
192.168.11.222 db2_vip
#####SCAN IP #####
192.168.11.101 scanip
IP addresses after the change:
#####PUBLIC IP #####
192.168.57.100 db1
192.168.57.200 db2
##### VIP #####
192.168.57.111 db1_vip
192.168.57.222 db2_vip
#####SCAN IP #####
192.168.57.101 scanip
The goal is to move the network segment from 192.168.11.x to 192.168.57.x.
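Before changing anything, it helps to snapshot the current addressing on both nodes; a small sketch, assuming passwordless ssh between the nodes is already set up:
for h in db1 db2; do
  ssh $h 'hostname; grep -E "db1|db2|vip|scanip" /etc/hosts; ip addr show eth0 | grep "inet "'
done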
1 Stop the database
[grid@db1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node db1
Instance orcl2 is running on node db2
[grid@db1 ~]$ srvctl stop database -d orcl
[grid@db1 ~]$ srvctl status database -d orcl
Instance orcl1 is not running on node db1
Instance orcl2 is not running on node db2
2 Check the existing SCAN and listener configuration
[grid@db1 ~]$ srvctl config scan
SCAN name: scanip, Network: 1/192.168.11.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scanip/192.168.11.101
[grid@db1 ~]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
3 Stop the listener and CRS
[grid@db1 ~]$ srvctl stop listener
[grid@db1 ~]$ crsctl stop crs -f
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.
[grid@db1 ~]$ su root
Password:
[root@db1 grid]# crsctl stop crs -f
[root@db2 grid]# crsctl stop crs -f
4 Update /etc/hosts (on both nodes)
[root@db1 ~]# vi /etc/hosts
#####PUBLIC IP #####
192.168.57.100 db1
192.168.57.200 db2
5 Update the OS NIC configuration on both nodes (a sample ifcfg-eth0 is sketched after this step)
vi /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
Then reconfigure the NIC connection mode of the virtual machine (create a new virtual NIC on the host and bridge the VM onto it).
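A minimal ifcfg-eth0 sketch for node db1 after the move to the 57 subnet; BOOTPROTO and GATEWAY are assumptions, and HWADDR/UUID should be left as they already are (use 192.168.57.200 on db2):
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.100
NETMASK=255.255.255.0
GATEWAY=192.168.57.1   # placeholder, substitute the real gateway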
6 Start CRS (on both nodes)
[root@db1 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@db2 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
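Before going further it is worth confirming that the full stack is back online on each node; two quick checks (output omitted here, but CRS, CSS and EVM should all report online and the low-level init resources such as ora.cssd should be ONLINE):
[root@db1 grid]# crsctl check crs
[root@db1 grid]# crsctl stat res -t -init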
-- Public network registration (oifcfg)
-- Node 1
1 Check the current interface configuration
[root@db1 grid]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.11.0 global public
2 Delete the old public interface definition
[root@db1 grid]# oifcfg delif -global eth0/192.168.11.0
3 Register the new public interface
[root@db1 grid]# oifcfg setif -global eth0/192.168.57.0:public
4 Verify
[root@db1 grid]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.57.0 global public
-- Node 2
[root@db2 grid]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.57.0 global public -- the subnet already shows the new value here (oifcfg settings are cluster-wide), so the delete and re-add below are probably redundant
[root@db2 grid]# oifcfg delif -global eth0/192.168.11.0
[root@db2 grid]# oifcfg setif -global eth0/192.168.57.0:public
[root@db2 grid]# oifcfg getif
eth2 10.0.0.0 global cluster_interconnect
eth0 192.168.57.0 global public
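If the getif output ever looks suspect, the subnets the OS actually presents can be cross-checked with oifcfg iflist; the interface/subnet pairs should match what was registered (expect eth0 192.168.57.0 and eth2 10.0.0.0 here):
[grid@db1 ~]$ oifcfg iflist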
-- VIP change
1 Check the VIP configuration
[root@db2 admin]# srvctl config nodeapps -a
Network exists: 1/192.168.11.0/255.255.255.0/eth0, type static
VIP exists: /db1_vip/192.168.11.111/192.168.11.0/255.255.255.0/eth0, hosting node db1
VIP exists: /db2_vip/192.168.11.222/192.168.11.0/255.255.255.0/eth0, hosting node db2
2 Stop the instances and VIPs
[root@db1 grid]# srvctl stop instance -d orcl -n db1
[root@db1 grid]# srvctl stop vip -n db1 -f
[root@db2 grid]# srvctl stop instance -d orcl -n db2
[root@db2 grid]# srvctl stop vip -n db2 -f
3 Update the hosts file (as root, on both nodes)
vi /etc/hosts
##### VIP #####
192.168.57.111 db1_vip
192.168.57.222 db2_vip
4 Modify the VIP configuration
-- Node 1
[root@db1 grid]# /grid/11.2.0/grid/bin/srvctl modify nodeapps -n db1 -A 192.168.57.111/255.255.255.0/eth0
-- Node 2
[root@db2 grid]# /grid/11.2.0/grid/bin/srvctl modify nodeapps -n db2 -A 192.168.57.222/255.255.255.0/eth0
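The -A argument takes the form {name|ip}/netmask[/interface], so the same change can also be expressed with the VIP hostnames from /etc/hosts instead of the literal addresses, assuming the new names already resolve on both nodes:
[root@db1 grid]# /grid/11.2.0/grid/bin/srvctl modify nodeapps -n db1 -A db1_vip/255.255.255.0/eth0
[root@db2 grid]# /grid/11.2.0/grid/bin/srvctl modify nodeapps -n db2 -A db2_vip/255.255.255.0/eth0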
5 Start everything back up
-- Node 1
[root@db1 grid]# srvctl start vip -n db1
PRKO-2420 : VIP is already started on node(s): db1
[root@db1 grid]# srvctl start listener -n db1
[root@db1 grid]# srvctl start instance -d orcl -n db1
-- Node 2
[root@db2 grid]# srvctl start vip -n db2
[root@db2 grid]# srvctl start listener -n db2
[root@db2 grid]# srvctl start instance -d orcl -n db2
6 Verify
[root@db2 grid]# srvctl config nodeapps -a
Network exists: 1/192.168.57.0/255.255.255.0/eth0, type static
VIP exists: /db1_vip/192.168.57.111/192.168.57.0/255.255.255.0/eth0, hosting node db1
VIP exists: /192.168.57.222/192.168.57.222/192.168.57.0/255.255.255.0/eth0, hosting node db2
For some reason the VIP name for db2 is not displayed here; it has no practical impact.
-- SCAN change
1 Stop the SCAN listener and the SCAN
[root@db1 grid]# srvctl stop scan_listener
[root@db1 grid]# srvctl stop scan
2 Update the hosts file
vi /etc/hosts
#####SCAN IP #####
192.168.57.101 scanip
3 Modify the SCAN configuration (as root)
[root@db1 grid]# srvctl modify scan -n scanip
-- scanip here is the SCAN name that is resolved through /etc/hosts
4 Update the SCAN listener configuration
[root@db1 grid]# srvctl modify scan_listener -u
5 Start the SCAN listener
[root@db1 grid]# srvctl start scan_listener
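Starting the SCAN listener should also pull up its SCAN VIP dependency; both can be confirmed before opening the database:
[root@db1 grid]# srvctl status scan
[root@db1 grid]# srvctl status scan_listener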
6 Start the database
[root@db1 grid]# srvctl start database -d orcl
7 Verify
[root@db1 grid]# srvctl config scan
SCAN name: scanip, Network: 1/192.168.57.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scanip/192.168.57.101
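As a final end-to-end check, a client connection through the new SCAN can be attempted; system and the service name orcl are just examples, and a remote client needs its own hosts entry or DNS record for scanip:
$ ping -c 2 192.168.57.101
$ sqlplus system@//scanip:1521/orcl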
-- Check the cluster resources
[root@db1 grid]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE db1
ONLINE ONLINE db2
ora.gsd
OFFLINE OFFLINE db1
OFFLINE OFFLINE db2
ora.net1.network
ONLINE ONLINE db1
ONLINE ONLINE db2
ora.ons
ONLINE ONLINE db1
ONLINE ONLINE db2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE db2
ora.db1.vip
1 ONLINE ONLINE db1
ora.db2.vip
1 ONLINE ONLINE db2
ora.orcl.db
1 ONLINE ONLINE db1 Open
2 ONLINE ONLINE db2 Open
ora.scan1.vip
1 ONLINE ONLINE db2
Additional notes:
-- Add the SCAN
[root@db1 grid]# srvctl add scan -n scanip
-- Add the instances
$ srvctl add instance -d orcl -i orcl1 -n db1
$ srvctl add instance -d orcl -i orcl2 -n db2
srvctl start instance -d orcl -n db1
srvctl start instance -d orcl -n db2
-- Add the database
srvctl add database -d orcl -o /oracle/home -p +DATA/orcl/spfileorcl.ora
srvctl start database -d orcl
-- Add a local listener
srvctl add listener -l listener
crsctl start resource ora.LISTENER.lsnr
-- Add the SCAN resource
srvctl add scan -n scanip
srvctl start scan
That concludes the walkthrough of changing the IP addresses of a Linux RAC cluster; it is worth trying the steps on a test environment first.