This article walks through an installation of Oracle 11g RAC on RHEL 7 (Rhel7_Oracle11g_rac), step by step.
1. Disable SELinux
getenforce
setenforce 0
vim /etc/selinux/config    # set SELINUX=disabled so the change survives reboots
2. Stop the firewall and disable it at boot
systemctl stop firewalld.service
systemctl disable firewalld.service
3. Set the host names
Underscores ("_") are not allowed in host names; lowercase letters and a length of fewer than 8 characters are recommended.
hostnamectl set-hostname mydb1    # on node 1
hostnamectl set-hostname mydb2    # on node 2
Log in again after changing the name.
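As a quick pre-check, the rules above can be wired into a small shell function. The helper name `valid_hostname` is my own, not part of any Oracle tooling:

```shell
# Validate a candidate RAC host name against the rules above: no underscore,
# lowercase letters/digits/hyphen, starts with a letter, fewer than 8 chars.
# The helper name is illustrative only.
valid_hostname() {
  echo "$1" | grep -Eq '^[a-z][a-z0-9-]{0,6}$'
}

valid_hostname mydb1  && echo "mydb1 is acceptable"
valid_hostname my_db1 || echo "my_db1 is rejected (underscore)"
```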
4. Configure yum
mount -t iso9660 -o loop /dev/sr0 /media/
cat /etc/yum.repos.d/rhel7.repo
[base]
name=rhel7.7
baseurl=file:///media
enabled=1
gpgcheck=0
5. Stop unnecessary services
Redhat6:
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
service sshd start
chkconfig sshd on
service bluetooth stop
chkconfig bluetooth off
service postfix stop
chkconfig postfix off
service cups stop
chkconfig cups off
service cpuspeed stop
chkconfig cpuspeed off
service NetworkManager stop
chkconfig NetworkManager off
service vsftpd stop
chkconfig vsftpd off
service dhcpd stop
chkconfig dhcpd off
service nfs stop
chkconfig nfs off
service nfslock stop
chkconfig nfslock off
service ypbind stop
chkconfig ypbind off
Redhat7:
.................
6. Install dependency packages
--check
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
binutils \
compat-libcap1 \
compat-libstdc++-33 \
gcc \
gcc-c++ \
glibc \
glibc-devel \
ksh \
libstdc++ \
libstdc++-devel \
libaio \
libaio-devel \
make \
sysstat
--install
yum -y install binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc-common \
  glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
  sysstat openssh-clients compat-libcap1 xorg-x11-utils xorg-x11-xauth \
  elfutils unixODBC unixODBC-devel libXp elfutils-libelf elfutils-libelf-devel \
  smartmontools
7. Kernel parameter changes
--formula: current /proc/sys/fs/file-max + 512 * processes * instance count
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
--formula: TOTAL RAM IN BYTES / PAGE_SIZE (from getconf PAGE_SIZE)
kernel.shmall = 536870912
--formula: HALF OF TOTAL RAM IN BYTES
kernel.shmmax = 1073741824
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 4194304
vm.dirty_ratio = 20
vm.dirty_background_ratio = 3
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 500
vm.swappiness = 10
vm.min_free_kbytes = 524288
# rp_filter: add one entry per private-interconnect NIC (the original assumes
# two private NICs; replace ens39 with your actual interface names)
net.ipv4.conf.ens39.rp_filter = 2
# Max/min memory for IP fragment reassembly. Formula: numCPU * 130000; with 96
# logical CPUs, set high to at least ~12 MB, and low about 1 MB less than high.
net.ipv4.ipfrag_high_thresh = 16777216
net.ipv4.ipfrag_low_thresh = 15728640
net.ipv4.ipfrag_time = 60
-----After the changes:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 536870912
kernel.shmmax = 1073741824
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 4194304
vm.dirty_ratio = 20
vm.dirty_background_ratio = 3
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 500
vm.swappiness = 10
vm.min_free_kbytes = 524288
net.ipv4.conf.ens39.rp_filter = 2
net.ipv4.ipfrag_high_thresh = 16777216
net.ipv4.ipfrag_low_thresh = 15728640
net.ipv4.ipfrag_time = 60
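The two shared-memory formulas above can be sanity-checked with shell arithmetic. The 2 GiB RAM figure below is an assumption for illustration; it reproduces the article's kernel.shmmax (half of RAM), though note the article's kernel.shmall is larger than its own formula would give for that amount of RAM:

```shell
# Sketch of the shmall/shmmax formulas quoted above, using an assumed
# 2 GiB of RAM and a 4 KiB page size. On a real host derive them with:
#   MEM_BYTES=$(awk '/MemTotal/ {print $2*1024}' /proc/meminfo)
#   PAGE_SIZE=$(getconf PAGE_SIZE)
MEM_BYTES=2147483648              # assumed: 2 GiB of physical RAM
PAGE_SIZE=4096                    # typical x86_64 page size

SHMALL=$((MEM_BYTES / PAGE_SIZE)) # total RAM expressed in pages
SHMMAX=$((MEM_BYTES / 2))         # half of total RAM, in bytes

echo "kernel.shmall = $SHMALL"
echo "kernel.shmmax = $SHMMAX"
```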
8. Disable transparent huge pages
redhat7:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
--add transparent_hugepage=never to the kernel command line
cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=myvg/swap rd.lvm.lv=myvg/usr vconsole.font=latarcyrheb-sun16 rd.lvm.lv=myvg/root crashkernel=auto vconsole.keymap=us rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
--regenerate the grub configuration
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-41c535c189b842eea5a8c20cbd9bff26
Found initrd image: /boot/initramfs-0-rescue-41c535c189b842eea5a8c20cbd9bff26.img
done
--stop the tuned service
# systemctl stop tuned.service
# systemctl disable tuned.service
--reboot
reboot
--confirm
cat /sys/kernel/mm/transparent_hugepage/enabled
redhat6 and redhat7: add the following to /etc/rc.d/rc.local (takes effect after a reboot):
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
Confirm after the reboot:
cat /sys/kernel/mm/transparent_hugepage/defrag
always [never]
cat /sys/kernel/mm/transparent_hugepage/enabled
always [never]
9. NTP configuration
Redhat6:
# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"    # -x enables slewing mode
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=yes                                # sync the hardware (BIOS) clock
# vi /etc/ntp.conf
server xx.xx.xx.xx prefer    # preferred NTP server
server 127.127.1.0 iburst    # local clock as the fallback source
# vi /etc/ntp/step-tickers
xx.xx.xx.xx                  # NTP server to step against when ntpd starts
After configuring, restart ntpd and check its status:
chkconfig ntpd on
service ntpd restart
ntpstat    # check ntpd status
date       # check that the time is correct
redhat7:
yum install ntp
vi /etc/ntp.conf
server 10.5.26.10 iburst
vi /etc/sysconfig/ntpd
OPTIONS="-x -g"
systemctl start ntpd.service
systemctl enable ntpd.service
systemctl status ntpd.service
10. NIC bonding
RHEL6:
touch /etc/modprobe.d/bonding.conf
echo "alias bond0 bonding" >> /etc/modprobe.d/bonding.conf
vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NM_CONTROLLED=no
IPADDR=10.1.2.3
NETMASK=255.255.255.0
GATEWAY=10.1.2.254
BONDING_OPTS="mode=1 miimon=100"
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
echo "ifenslave bond0 eth0 eth2" >> /etc/rc.d/rc.local
Reboot the server, then check the status:
cat /sys/class/net/bonding_masters
cat /sys/class/net/bond0/bonding/mode
cat /proc/net/bonding/bond0
Verify failover by unplugging and replugging a network cable.
RHEL7: use nmcli NIC teaming.
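The RHEL 7 side is only named above ("nmcli team"); an active-backup team mirroring the mode=1 bond from the RHEL 6 example might look like the following sketch. The connection names, interface names (eth0/eth2) and addresses are carried over from that example and must be adapted to the actual hardware:

```shell
# RHEL 7 NIC teaming sketch (activebackup runner ~ bonding mode=1).
# Names and addresses follow the RHEL 6 example above; adjust to your host.
nmcli con add type team con-name team0 ifname team0 \
    config '{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
nmcli con mod team0 ipv4.method manual \
    ipv4.addresses 10.1.2.3/24 ipv4.gateway 10.1.2.254
nmcli con add type team-slave con-name team0-port1 ifname eth0 master team0
nmcli con add type team-slave con-name team0-port2 ifname eth2 master team0
nmcli con up team0
teamdctl team0 state    # confirm the active/backup port status
```

As with the bond, verify failover by pulling a cable while watching `teamdctl team0 state`.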
11. Network checks
(1) Make sure the interconnect (private network) goes through a dedicated switch, not a direct cable between the nodes.
(2) Make sure NICs on the same network use the same device name and subnet on every node. For example, every node's public NIC is named eth0, on subnet 133.37.x.0 with netmask 255.255.255.0.
(3) Make sure there is one and only one default route, reachable through the public network.
(4) Make sure the negotiated link speed is what you expect:
# ethtool eth2
Settings for eth2:
    Supported ports: [ TP ]
    Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Supports auto-negotiation: Yes
    Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: umbg
    Wake-on: g
    Current message level: 0x00000003 (3)
    Link detected: yes
"Speed: 1000Mb/s" above is the actual negotiated bandwidth. Even when the switch and NIC are both rated at 1000Mbps or more, port or cabling problems can leave the real bandwidth lower.
(5) Make sure multicast is enabled on the private network. The mcasttest.pl script, downloadable from the Oracle Support site, can be used to check.
12. Storage multipath configuration
See https://www.modb.pro/db/14031
13. Create groups and users
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 oper
groupadd -g 1003 asmadmin
groupadd -g 1004 asmoper
groupadd -g 1005 asmdba
useradd -u 1000 -g oinstall -G dba,oper,asmdba oracle
useradd -u 1001 -g oinstall -G dba,asmadmin,asmdba,asmoper,oper grid
14. Resource limits
touch /etc/security/limits.d/99-grid-oracle-limits.conf
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 10240
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock unlimited
grid hard memlock unlimited
grid soft core unlimited
grid hard core unlimited
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 10240
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
oracle soft memlock unlimited
oracle hard memlock unlimited
oracle soft core unlimited
oracle hard core unlimited
touch /etc/profile.d/oracle-grid.sh
#Setting the appropriate ulimits for oracle and grid user
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
if [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
15. Create directories
Grid software BASE directory: /u01/app/grid
Grid software HOME directory: /u01/app/11.2.0/grid
Database software BASE directory: /u01/app/oracle
Database software HOME directory: /u01/app/oracle/product/11.2.0/db_home
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/
mkdir -p /u01/app/oracle/product/11.2.0/db_home
chown -R oracle:oinstall /u01/app/oracle/
chmod -R 755 /u01
16. Set the oracle and grid user environment variables
oracle:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb1    # racdb2 on node 2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_home
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export LANG=en_US.UTF-8
umask 022
grid:
export ORACLE_SID=+ASM1    # +ASM2 on node 2
export ORACLE_OWNER=grid
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export LANG=en_US.UTF-8
export PATH
umask 022
17. Edit the hosts file
Public IP: the alias is the bare host name, i.e. the machine name returned by uname -a.
Private IP: the alias is <hostname>-priv.
Virtual IP: the alias is <hostname>-vip.
SCAN IP: the alias is <dbname>-scan.
#Public Ip
192.168.0.203 mydb1
192.168.0.204 mydb2
#Virtual Ip
192.168.0.205 mydb1-vip
192.168.0.206 mydb2-vip
#Private Ip
192.168.124.203 mydb1-priv
192.168.124.204 mydb2-priv
#Scan Ip
192.168.0.207 racdb-scan
18. Set up SSH user equivalence
When installing the grid software, sshUserSetup.sh can set up user equivalence quickly. Replace the $node1 and $node2 variables with the actual node names. It only needs to run on one node (root is fine):
./sshUserSetup.sh -user grid -hosts "$node1 $node2" -advanced -exverify -confirm
./sshUserSetup.sh -user oracle -hosts "$node1 $node2" -advanced -exverify -confirm
19. Naming conventions
1) Cluster name
The cluster name itself has no special runtime role; it only matters when several RAC clusters are managed from a common tool. It must not exceed 15 characters.
${DB_NAME}-cls
2) SCAN name
The SCAN name likewise has no special runtime role beyond unified management, and must not exceed 15 characters.
${DB_NAME}-scan
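The two rules, including the 15-character limit, can be sketched in shell; the DB_NAME value here is only an example:

```shell
# Derive the cluster and SCAN names from the database name, per the rules
# above, and warn when the 15-character limit is exceeded.
DB_NAME=racdb    # example database name
CLUSTER_NAME="${DB_NAME}-cls"
SCAN_NAME="${DB_NAME}-scan"

for n in "$CLUSTER_NAME" "$SCAN_NAME"; do
  if [ "${#n}" -gt 15 ]; then
    echo "WARNING: $n is longer than 15 characters" >&2
  fi
done
echo "cluster: $CLUSTER_NAME  scan: $SCAN_NAME"
```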
20. Installer checks that can be ignored
1) Package: pdksh-5.2.14
2) Device Checks for ASM
3) Task resolv.conf Integrity
21. During the grid software installation, add the ohas service before running root.sh
touch /usr/lib/systemd/system/ohas.service
chmod 644 /usr/lib/systemd/system/ohas.service
ohas.service (note: systemd does not interpret shell redirections in ExecStart, so the unit is written in plain systemd syntax):
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
Type=simple
ExecStart=/etc/init.d/init.ohasd run
Restart=always

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
systemctl status ohas.service
22. Run the orainstRoot.sh and root.sh scripts on both nodes
Order: orainstRoot.sh on node 1, orainstRoot.sh on node 2, then root.sh on node 1, then root.sh on node 2.
Node 1:
[root@mydb1 system]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@mydb1 system]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
**ohasd failed to start**    ## if this message appears, restart the ohas.service service
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-01-10 08:16:18.019:
[client(23496)]CRS-2101:The OLR was formatted using version 3.
CRS-2672: Attempting to start 'ora.mdnsd' on 'mydb1'
CRS-2676: Start of 'ora.mdnsd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'mydb1'
CRS-2676: Start of 'ora.gpnpd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'mydb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'mydb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'mydb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'mydb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'mydb1'
CRS-2676: Start of 'ora.diskmon' on 'mydb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'mydb1' succeeded
ASM created and started successfully.
Disk group OCRDG created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 992a298111ba4fb8bf16c75cdd232ca8.
Successfully replaced voting disk group with +OCRDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                Disk group
--  -----    -----------------                ---------                ----------
 1. ONLINE   992a298111ba4fb8bf16c75cdd232ca8 (/dev/mapper/asm_ocr1p1) [OCRDG]
Located 1 voting disk(s).
sh: /bin/netstat: No such file or directory
CRS-2672: Attempting to start 'ora.asm' on 'mydb1'
CRS-2676: Start of 'ora.asm' on 'mydb1' succeeded
CRS-2672: Attempting to start 'ora.OCRDG.dg' on 'mydb1'
CRS-2676: Start of 'ora.OCRDG.dg' on 'mydb1' succeeded
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Node 2:
[root@mydb2 etc]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@mydb2 etc]# cd
[root@mydb2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
**ohasd failed to start**    ## if this message appears, restart the ohas.service service
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2020-01-10 08:27:01.458:
[client(21808)]CRS-2101:The OLR was formatted using version 3.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
sh: /bin/netstat: No such file or directory
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
23. Errors that can be ignored
1) Configure Oracle Grid Infrastructure for a Cluster
2) Oracle Cluster Verification Utility
24. Database software install checks that can be ignored
1) Package: pdksh-5.2.14
2) Task resolv.conf Integrity
3) Single Client Access Name (SCAN)
25. Installation error
Error: Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/db_home/sysman/lib/ins_emagent.mk'.
Fix:
cd $ORACLE_HOME/sysman/lib
cp ins_emagent.mk ins_emagent.mk.bak
vi ins_emagent.mk
Search for /NMECTL to locate the line, then change it to:
$(MK_EMAGENT_NMECTL) -lnnz11
Note: the appended flag starts with the letter "l" and ends with the digits "11".
Then click Retry.
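The vi edit can equally be scripted with sed. The snippet below demonstrates the substitution on a throwaway file; on the real system, run the same sed against $ORACLE_HOME/sysman/lib/ins_emagent.mk after taking the .bak copy:

```shell
# Demonstrate the ins_emagent.mk fix: append -lnnz11 to the line that invokes
# $(MK_EMAGENT_NMECTL). Shown here on a temporary file for safety.
mk=$(mktemp)
printf '\t$(MK_EMAGENT_NMECTL)\n' > "$mk"    # stand-in for the original line

# In the sed replacement, '&' expands to the matched text itself
fixed=$(sed 's/\$(MK_EMAGENT_NMECTL)/& -lnnz11/' "$mk")
echo "$fixed"
rm -f "$mk"
```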
26. As root, run root.sh on both nodes
/u01/app/oracle/product/11.2.0/db_home/root.sh
[root@mydb2 ~]# /u01/app/oracle/product/11.2.0/db_home/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
27. Use asmca to create the disk groups the database needs
REDODG
DATADG
ARCHDG
28. Use dbca to create the database instance
29. Configure HugePages
Benefits of huge pages:
1. A smaller page table. Each huge page maps 2 MB of contiguous physical memory, so mapping 12 GB of physical memory takes only 48 KB of page table, versus roughly 24 MB with ordinary pages.
2. Huge-page memory is locked in physical RAM and can never be swapped out, which avoids the performance impact of swapping.
3. With far fewer page-table entries, the hit rate of the CPU's TLB (in effect, the CPU's cache of the page table) improves markedly.
4. The page table for huge pages can be shared between processes, which further reduces page-table overhead. This actually reflects a weakness in Linux's paging design: other operating systems such as AIX let processes share one page table for a shared-memory segment, avoiding the problem. On one system the author maintains, with routinely over 5,000 connections and an SGA around 60 GB, ordinary Linux paging would consume most of the system's memory in page tables alone.
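The figures in point 1 can be checked with shell arithmetic, assuming 8-byte page-table entries as on x86_64:

```shell
# Page-table size needed to map 12 GB, with 4 KiB pages vs 2 MiB huge pages,
# assuming 8 bytes per page-table entry (typical for x86_64).
MAPPED=$((12 * 1024 * 1024 * 1024))        # 12 GB in bytes
PTE=8                                      # bytes per page-table entry

SMALL=$((MAPPED / 4096 * PTE))             # with 4 KiB pages
HUGE=$((MAPPED / (2 * 1024 * 1024) * PTE)) # with 2 MiB huge pages

echo "4 KiB pages: $((SMALL / 1024 / 1024)) MB of page table"
echo "2 MiB pages: $((HUGE / 1024)) KB of page table"
```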
The steps:
1) Check /proc/meminfo:
grep -i hugepage /proc/meminfo
2) Compute HugePages_Total with the hugepages_settings.sh script:
#!/bin/bash
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
  MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
  if [ $MIN_PG -gt 0 ]; then
    NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
  fi
done
# Finish with results
case $KERN in
  '2.4')  HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
  '2.6')  MEM_LOCK=`echo "$NUM_PG*$HPG_SZ" | bc -q`
          echo "Recommended setting within /etc/sysctl.conf: vm.nr_hugepages = $NUM_PG"
          echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle soft memlock $MEM_LOCK"
          echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle hard memlock $MEM_LOCK" ;;
  '3.10') MEM_LOCK=`echo "$NUM_PG*$HPG_SZ" | bc -q`
          echo "Recommended setting within /etc/sysctl.conf: vm.nr_hugepages = $NUM_PG"
          echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle soft memlock $MEM_LOCK"
          echo "Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle hard memlock $MEM_LOCK" ;;
  *)      echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
#end
3) Add the following line to /etc/sysctl.conf, using the value computed above:
vm.nr_hugepages=9218
4) Apply it:
sysctl -p
5) Add the following to /etc/security/limits.d/99-grid-oracle-limits.conf to set how much memory the oracle user may lock (in KB; a concrete value or unlimited):
oracle soft memlock unlimited
oracle hard memlock unlimited
6) Restart the instances.
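As a cross-check of step 3's value, vm.nr_hugepages=9218 corresponds to an 18 GiB SGA in 2 MiB pages plus two pages of headroom; the 18 GiB figure is inferred for illustration, not stated in the article:

```shell
# Rough cross-check of vm.nr_hugepages: 2 MiB pages needed for the SGA plus
# a little headroom. The 18 GiB SGA size is an assumed figure.
SGA_MB=18432               # assumed SGA size in MiB; adjust to yours
HPG_SZ_KB=2048             # Hugepagesize from /proc/meminfo, in KiB

NR_HUGEPAGES=$((SGA_MB * 1024 / HPG_SZ_KB + 2))   # +2 pages of headroom

echo "vm.nr_hugepages = $NR_HUGEPAGES"
```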
30. Change the local and SCAN listener ports to 11521
1) Confirm the current configuration:
[grid@mydb1 ~]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
[grid@mydb1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
2) Change the listener and scan_listener ports to 11521 at the cluster level (run on one node):
srvctl modify listener -l LISTENER -p "TCP:11521"
srvctl modify scan_listener -p 11521
3) Change local_listener in sqlplus:
alter system set local_listener = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.205)(PORT = 11521))' scope=both sid='racdb1';
alter system set local_listener = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.206)(PORT = 11521))' scope=both sid='racdb2';
4) Change remote_listener:
alter system set remote_listener='racdb-scan:11521' scope=both;
5) On node 1, stop the local listener, edit listener.ora, endpoints_listener.ora and tnsnames.ora, then restart it:
srvctl stop listener -l LISTENER -n mydb1
cd $ORACLE_HOME/network/admin
--change 1521 to 11521
vi endpoints_listener.ora
vi listener.ora
srvctl start listener -l LISTENER -n mydb1
srvctl status listener -l LISTENER
srvctl config listener
6) Repeat on node 2:
srvctl stop listener -l LISTENER -n mydb2
cd $ORACLE_HOME/network/admin
--change 1521 to 11521
vi endpoints_listener.ora
vi listener.ora
srvctl start listener -l LISTENER -n mydb2
srvctl status listener -l LISTENER
srvctl config listener
7) Change the ASM listener port (if it is not changed, lsnrctl status will not show the ASM service):
su - grid
sqlplus / as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.205)(PORT=11521))' scope=both sid='+ASM1';
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.206)(PORT=11521))' scope=both sid='+ASM2';
lsnrctl status
31. ASM parameter tuning
ASM disk groups default to a 1 MB AU (allocation unit) size. For large databases this increases memory usage and slightly hurts performance, so for new disk groups that will hold datafiles it is advisable to use a larger AU, such as 4 MB or 8 MB (a power of 2). Based on practical experience at telecom operators, 4 MB is recommended.
32. Recommended database parameter changes
alter system set resource_manager_plan='FORCE:' scope=spfile sid='*';
alter system set audit_trail=none scope=spfile sid='*';
alter system set undo_retention=10800 scope=spfile sid='*';
alter system set session_cached_cursors=200 scope=spfile sid='*';
alter system set db_files=2000 scope=spfile sid='*';
alter system set max_shared_servers=0 scope=spfile sid='*';
alter system set sec_max_failed_login_attempts=100 scope=spfile sid='*';
alter system set deferred_segment_creation=false scope=spfile sid='*';
alter system set parallel_force_local=true scope=spfile sid='*';
alter system set parallel_max_servers=32 scope=spfile sid='*';
alter system set sec_case_sensitive_logon=false scope=spfile sid='*';
alter system set open_cursors=3000 scope=spfile sid='*';
alter system set open_links=40 scope=spfile sid='*';
alter system set open_links_per_instance=40 scope=spfile sid='*';
alter system set db_cache_advice=off scope=spfile sid='*';
alter system set "_b_tree_bitmap_plans"=false scope=spfile sid='*';
alter system set "_gc_policy_time"=0 scope=spfile sid='*';
alter system set "_gc_defer_time"=3 scope=spfile sid='*';
alter system set "_lm_tickets"=5000 scope=spfile sid='*';
alter system set "_optimizer_use_feedback"=false sid='*';
alter system set "_undo_autotune"=false scope=both sid='*';
alter system set "_bloom_filter_enabled"=FALSE scope=spfile sid='*';
alter system set "_cleanup_rollback_entries"=2000 scope=spfile sid='*';
alter system set "_px_use_large_pool"=true scope=spfile sid='*';
alter system set "_optimizer_extended_cursor_sharing_rel"=NONE scope=spfile sid='*';
alter system set "_optimizer_extended_cursor_sharing"=NONE scope=spfile sid='*';
alter system set "_optimizer_adaptive_cursor_sharing"=false scope=spfile sid='*';
alter system set "_optimizer_mjc_enabled"=FALSE scope=spfile sid='*';
alter system set "_sort_elimination_cost_ratio"=1 scope=spfile sid='*';
alter system set "_partition_large_extents"=FALSE scope=spfile sid='*';
alter system set "_index_partition_large_extents"=FALSE scope=spfile sid='*';
alter system set "_clusterwide_global_transactions"=FALSE scope=spfile sid='*';
alter system set "_part_access_version_by_number"=FALSE scope=spfile;
alter system set "_use_adaptive_log_file_sync"=FALSE scope=spfile;
alter system set "_lm_sync_timeout"=1200 scope=spfile;
alter system set "_ksmg_granule_size"=134217728 scope=spfile;
alter system set "_optimizer_cartesian_enabled"=false scope=spfile;
alter system set "_external_scn_logging_threshold_seconds"=3600 scope=spfile;
alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;
alter system set event='28401 TRACE NAME CONTEXT FOREVER, LEVEL 1:60025 trace name context forever:10949 trace name context forever,level 1' sid='*' scope=spfile;