
11gR2 RAC Node Addition and Removal Steps -- Adding a Node

Author: AiDBA宝典 | Published: 2019-09-29 | Column: 小麦苗的DB宝专栏

Today 小麦苗 shares the steps for adding and removing nodes in an 11gR2 RAC cluster; this part covers adding a node.


1-2 Configure the hosts file on each node and disable the firewall on the new node:

service iptables stop

chkconfig iptables off
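For reference, every node's /etc/hosts should list the public, VIP, private, and SCAN addresses of all three nodes. The sketch below is assembled from the cluvfy output later in this article; the rac3-vip address and the SCAN name are assumptions:

# /etc/hosts (kept identical on rac1, rac2 and rac3)
192.168.8.221   rac1
192.168.8.222   rac1-vip
192.168.8.223   rac2
192.168.8.224   rac2-vip
192.168.8.227   rac3
192.168.8.228   rac3-vip        # assumed address for the new node's VIP
172.168.1.18    rac1-priv
172.168.1.19    rac2-priv
172.168.1.20    rac3-priv
192.168.8.225   rac-scan        # SCAN VIP; the name is an assumption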

3 Create users and groups

-- Create the users (the groups they reference must already exist; see the sketch below):

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid

useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
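The useradd commands above assume the referenced groups already exist on the new node. A minimal sketch of the group creation; the GIDs are assumptions and must match the GIDs used on the existing nodes:

# Create the groups before the users (GIDs must match rac1/rac2)
groupadd -g 1000 oinstall
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1022 asmoper
groupadd -g 1031 dba
groupadd -g 1032 oper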

-- Configure the users' environment variables

-- oracle user's profile (sets up the orcl3 instance environment on the new node):

export PATH

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=rac3

export ORACLE_SID=orcl3

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/11.2.0/db_1

export ORACLE_UNQNAME=orcl

export TNS_ADMIN=$ORACLE_HOME/network/admin

#export ORACLE_TERM=xterm

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export LANG=en_US

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'

umask 022
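The profile above belongs to the oracle user. The grid user on rac3 needs its own environment as well; a sketch under the usual 11gR2 layout, where the grid ORACLE_BASE, the grid home path and the +ASM3 SID are assumptions that must match the existing nodes:

# grid user's ~/.bash_profile on rac3 (paths and SID are assumptions)
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac3
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
umask 022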

4-10 Configure /etc/security/limits.conf, modify the kernel parameters (shmmax and the related settings) and apply them with sysctl, stop NTP, and install the required dependency packages:

yum install gcc compat-libstdc++-33 elfutils-libelf-devel glibc-devel glibc-headers gcc-c++ libaio-devel libstdc++-devel pdksh compat-libcap1-*
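The corresponding kernel-parameter, limits and NTP changes on the new node typically look like the following; the values are the usual 11.2 installation-guide minimums and must be kept identical to the existing nodes:

# Append to /etc/sysctl.conf, then apply with: sysctl -p
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# Append to /etc/security/limits.conf for both grid and oracle
grid   soft  nproc   2047
grid   hard  nproc   16384
grid   soft  nofile  1024
grid   hard  nofile  65536
oracle soft  nproc   2047
oracle hard  nproc   16384
oracle soft  nofile  1024
oracle hard  nofile  65536

# Stop NTP so that Oracle CTSS handles cluster time synchronization
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak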

11 Configure the udev rules for the shared ASM disks (copy them from an existing node) and then run:

/sbin/start_udev

[root@rac3 ~]# ll /dev/asm*

brw-rw---- 1 grid asmadmin 8, 16 Jun 14 05:42 /dev/asm-diskb

brw-rw---- 1 grid asmadmin 8, 32 Jun 14 05:42 /dev/asm-diskc

brw-rw---- 1 grid asmadmin 8, 48 Jun 14 05:42 /dev/asm-diskd

brw-rw---- 1 grid asmadmin 8, 64 Jun 14 05:42 /dev/asm-diske

brw-rw---- 1 grid asmadmin 8, 80 Jun 14 05:42 /dev/asm-diskf

brw-rw---- 1 grid asmadmin 8, 96 Jun 14 05:42 /dev/asm-diskg

brw-rw---- 1 grid asmadmin 8, 112 Jun 14 05:42 /dev/asm-diskh
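The asm-disk* device names above are produced by a udev rules file copied from an existing node. A hypothetical example of one such rule on OEL/RHEL 6, where the RESULT value is a placeholder for the scsi_id of the corresponding shared disk:

# /etc/udev/rules.d/99-oracle-asmdevices.rules (one rule per shared disk, sdb through sdh)
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="<scsi_id_of_sdb>", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"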

All of the steps above must be configured exactly the same as on the existing two nodes.

12 Verify node connectivity and user equivalence from node 1 as the grid user
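These checks require passwordless SSH between rac3 and the existing nodes for the grid user (and later for the oracle user). A sketch of one way to configure it; the sshUserSetup.sh path below refers to the sshsetup directory of the staged GI installation media and is an assumption:

# Manual approach, run as grid (repeat as oracle) on each node:
ssh-keygen -t rsa
ssh-copy-id grid@rac1
ssh-copy-id grid@rac2
ssh-copy-id grid@rac3
# Or use the script shipped with the GI media:
# /soft/grid/sshsetup/sshUserSetup.sh -user grid -hosts "rac1 rac2 rac3" -advanced -noPromptPassphrase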

[grid@rac1 ~]$ cluvfy comp nodecon -n rac1,rac2,rac3

Verifying node connectivity

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.8.0" with node(s) rac2,rac1,rac3

TCP connectivity check passed for subnet "192.168.8.0"

Node connectivity passed for subnet "172.168.0.0" with node(s) rac2,rac1,rac3

TCP connectivity check passed for subnet "172.168.0.0"

Node connectivity passed for subnet "169.254.0.0" with node(s) rac2,rac1

TCP connectivity check passed for subnet "169.254.0.0"

Interfaces found on subnet "192.168.8.0" that are likely candidates for VIP are:

rac2 eth0:192.168.8.223 eth0:192.168.8.224

rac1 eth0:192.168.8.221 eth0:192.168.8.222 eth0:192.168.8.225

rac3 eth0:192.168.8.227

Interfaces found on subnet "172.168.0.0" that are likely candidates for VIP are:

rac2 eth1:172.168.1.19

rac1 eth1:172.168.1.18

rac3 eth1:172.168.1.20

WARNING:

Could not find a suitable set of interfaces for the private interconnect

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.168.8.0".

Subnet mask consistency check passed for subnet "172.168.0.0".

Subnet mask consistency check passed for subnet "169.254.0.0".

Subnet mask consistency check passed.

Node connectivity check passed

Verification of node connectivity was successful.

14 Install Clusterware on the new node -- first run the cluvfy checks from node 1:

[grid@rac1 ~]$ cluvfy stage -post hwos -n rac3

Performing post-checks for hardware and operating system setup

Checking node reachability...

Node reachability check passed from node "rac1"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"

Node connectivity passed for interface "eth0"

TCP connectivity check passed for subnet "192.168.8.0"

Check: Node connectivity for interface "eth1"

Node connectivity passed for interface "eth1"

ERROR: /* This error is caused by a known bug; the network and user equivalence were checked and are fine, so it is ignored here */

PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed

TCP connectivity check failed for subnet "172.168.0.0"

Node connectivity check failed

Checking multicast communication...

Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check for multiple users with UID value 0 passed

Time zone consistency check passed

Checking shared storage accessibility...

Disk Sharing Nodes (1 in count)

------------------------------------ ------------------------

/dev/sda rac3

Disk Sharing Nodes (1 in count)

------------------------------------ ------------------------

/dev/sdb rac3

/dev/sdc rac3

/dev/sdd rac3

/dev/sde rac3

/dev/sdf rac3

/dev/sdg rac3

/dev/sdh rac3

Shared storage check was successful on nodes "rac3"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Post-check for hardware and operating system setup was unsuccessful on all the nodes.

[grid@rac1 ~]$ cluvfy stage -pre crsinst -n rac3

Performing pre-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "rac1"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"

Node connectivity passed for interface "eth0"

TCP connectivity check passed for subnet "192.168.8.0"

Check: Node connectivity for interface "eth1"

Node connectivity passed for interface "eth1"

ERROR:

PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed

TCP connectivity check failed for subnet "172.168.0.0"

Node connectivity check failed

Checking multicast communication...

Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...

。。。。。。。。

Package existence check failed for "pdksh" /* pdksh is not installed on node 3; this package is optional */

Check failed on nodes:

rac3

Package existence check passed for "expat(x86_64)"

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed

Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined

domain entry in file "/etc/resolv.conf" is consistent across nodes

search entry in file "/etc/resolv.conf" is consistent across nodes

The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -fixup -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"

Destination Node Reachable?

------------------------------------ ------------------------

rac3 yes

。。。。。。。。。。。。。。。。。。

Result: Package existence check passed for "sysstat"

Check: Package existence for "pdksh"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

rac1 pdksh-5.2.14-30 pdksh-5.2.14 passed

rac3 missing pdksh-5.2.14 failed

Result: Package existence check failed for "pdksh"

Check: Package existence for "expat(x86_64)"

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

rac1 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed

rac3 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed

Result: Package existence check passed for "expat(x86_64)"

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Current group ID

Result: Current group ID check passed

Starting check for consistency of primary group of root user

Node Name Status

------------------------------------ ------------------------

rac1 passed

rac3 passed

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

Check: Time zone consistency

Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking to make sure user "grid" is not in "root" group

Node Name Status Comment

------------ ------------------------ ------------------------

rac1 passed does not exist

rac3 passed does not exist

Result: User "grid" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

File "/etc/resolv.conf" does not have both domain and search entries defined

Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...

domain entry in file "/etc/resolv.conf" is consistent across nodes

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

search entry in file "/etc/resolv.conf" is consistent across nodes

Checking DNS response time for an unreachable node

Node Name Status

------------------------------------ ------------------------

rac1 passed

rac3 passed

The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...

Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...

Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined

More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file

Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes.

2.3 Extending GI and the database home with addNode.sh

Before the node is formally added, the addNode.sh tool itself calls cluvfy to verify that the new node meets the requirements. Because our environment does not use DNS, those pre-add checks fail (as seen above), so addNode.sh is run with the environment variable IGNORE_PREADDNODE_CHECKS=Y to skip them. Run addNode.sh from the grid home on node 1 as the grid user, then execute the root scripts it prompts for on rac3; repeat the addNode.sh step from the database home as the oracle user to copy the Oracle software to the new node. The new instance is then added with dbca's graphical Instance Management: choose Add an instance, select the orcl database, enter the sys password, select node rac3 and instance name orcl3, and click Finish. A command sketch of the whole sequence follows.
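A minimal command sketch of this sequence; the grid home (/u01/app/11.2.0/grid) and inventory paths are assumptions, the database home matches the oracle profile shown earlier, and the sys password is a placeholder:

# Node 1, grid user: skip the failing pre-add checks and extend GI to rac3
export IGNORE_PREADDNODE_CHECKS=Y
cd /u01/app/11.2.0/grid/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
# rac3, root user: run the scripts addNode.sh prompts for
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh

# Node 1, oracle user: extend the database home to rac3
cd /u01/app/oracle/11.2.0/db_1/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"
# rac3, root user:
/u01/app/oracle/11.2.0/db_1/root.sh

# Node 1, oracle user: add the orcl3 instance (silent equivalent of the dbca GUI steps)
dbca -silent -addInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys_password>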

Note that each addNode.sh run and dbca must be executed by the correct OS user (grid for GI, oracle for the database home). After dbca completes, the new instance orcl3 is created and started on rac3.

5. Configuration

5.1 Modify the tnsnames.ora file under the oracle user on all nodes, adding the following entries:

NODE1_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))

NODE2_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))

NODE3_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))

ORCL_REMOTE =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST=rac1-vip)(PORT = 1521))

(ADDRESS = (PROTOCOL = TCP)(HOST=rac2-vip)(PORT = 1521))

(ADDRESS = (PROTOCOL = TCP)(HOST=rac3-vip)(PORT = 1521))

)

)

5.2 Set LOCAL_LISTENER and REMOTE_LISTENER by running the following:

alter system set LOCAL_LISTENER='NODE1_LOCAL' scope=both sid='orcl1';

alter system set LOCAL_LISTENER='NODE2_LOCAL' scope=both sid='orcl2';

alter system set LOCAL_LISTENER='NODE3_LOCAL' scope=both sid='orcl3';

alter system set REMOTE_LISTENER='ORCL_REMOTE' scope=both sid='*';
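A quick optional check that the new instance has registered with its listeners (hypothetical verification commands, run as grid on rac3):

lsnrctl status LISTENER
srvctl status listener -n rac3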

Modify the server-side TAF service configured earlier so that it includes the new instance orcl3:

[oracle@rac1 admin]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2,orcl3

[oracle@rac1 admin]$ srvctl config service -d orcl

Service name: orcl_taf

Service is enabled

Server pool: orcl_orcl_taf

Cardinality: 3

Disconnect: false

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: BASIC

TAF failover retries: 180

TAF failover delay: 5

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: orcl1,orcl2,orcl3

Available instances:

[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl3

[oracle@rac1 admin]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl3

# The service is not running on orcl2; start it here as well

[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl2

[oracle@rac1 admin]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2,orcl3
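To test the TAF service from a client, a connect descriptor such as the following can be added to the client's tnsnames.ora; the alias name ORCL_TAF is hypothetical:

ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl_taf)
    )
  )

Connecting through this alias (sqlplus system/<password>@ORCL_TAF) and then shutting down the instance serving the session is a simple way to confirm that SELECT failover works across all three instances.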

Verification

[grid@rac3 ~]$ olsnodes -s

rac1 Active

rac2 Active

rac3 Active

[grid@rac3 ~]$ olsnodes -n

rac1 1

rac2 2

rac3 3

[grid@rac1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATADG.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ONLINE ONLINE rac3

ora.LISTENER.lsnr

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ONLINE ONLINE rac3

ora.SYSTEMDG.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ONLINE ONLINE rac3

ora.asm

ONLINE ONLINE rac1 Started

ONLINE ONLINE rac2 Started

ONLINE ONLINE rac3 Started

ora.gsd

OFFLINE OFFLINE rac1

OFFLINE OFFLINE rac2

OFFLINE OFFLINE rac3

ora.net1.network

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ONLINE ONLINE rac3

ora.ons

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ONLINE ONLINE rac3

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE rac2

ora.cvu

1 ONLINE ONLINE rac2

ora.oc4j

1 ONLINE ONLINE rac2

ora.orcl.db

1 ONLINE ONLINE rac1 Open

2 ONLINE ONLINE rac2 Open

3 ONLINE ONLINE rac3 Open

ora.orcl.orcl_taf.svc

1 ONLINE ONLINE rac1

2 ONLINE ONLINE rac3

3 ONLINE ONLINE rac2

ora.rac1.vip

1 ONLINE ONLINE rac1

ora.rac2.vip

1 ONLINE ONLINE rac2

ora.rac3.vip

1 ONLINE ONLINE rac3

ora.scan1.vip

1 ONLINE ONLINE rac2

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Jun 9 10:09:31 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL> col host_name for a20

SQL> select inst_id,host_name,instance_name,status from gv$instance;

INST_ID HOST_NAME INSTANCE_NAME STATUS

---------- -------------------- ---------------- ------------

1 rac1 orcl1 OPEN

3 rac3 orcl3 OPEN

2 rac2 orcl2 OPEN

[root@rac1 ~]# ./crs_stat.sh

Name Target State Host

------------------------ ---------- --------- -------

ora.DATADG.dg ONLINE ONLINE rac1

ora.LISTENER.lsnr ONLINE ONLINE rac1

ora.LISTENER_SCAN1.lsnr ONLINE ONLINE rac2

ora.SYSTEMDG.dg ONLINE ONLINE rac1

ora.asm ONLINE ONLINE rac1

ora.cvu ONLINE ONLINE rac2

ora.gsd OFFLINE OFFLINE

ora.net1.network ONLINE ONLINE rac1

ora.oc4j ONLINE ONLINE rac2

ora.ons ONLINE ONLINE rac1

ora.orcl.db ONLINE ONLINE rac1

ora.orcl.orcl_taf.svc ONLINE ONLINE rac1

ora.rac1.ASM1.asm ONLINE ONLINE rac1

ora.rac1.LISTENER_RAC1.lsnr ONLINE ONLINE rac1

ora.rac1.gsd OFFLINE OFFLINE

ora.rac1.ons ONLINE ONLINE rac1

ora.rac1.vip ONLINE ONLINE rac1

ora.rac2.ASM2.asm ONLINE ONLINE rac2

ora.rac2.LISTENER_RAC2.lsnr ONLINE ONLINE rac2

ora.rac2.gsd OFFLINE OFFLINE

ora.rac2.ons ONLINE ONLINE rac2

ora.rac2.vip ONLINE ONLINE rac2

ora.rac3.ASM3.asm ONLINE ONLINE rac3

ora.rac3.LISTENER_RAC3.lsnr ONLINE ONLINE rac3

ora.rac3.gsd OFFLINE OFFLINE

ora.rac3.ons ONLINE ONLINE rac3

ora.rac3.vip ONLINE ONLINE rac3

ora.scan1.vip ONLINE ONLINE rac2

Summary of adding and removing nodes

Adding a node to an 11gR2 RAC cluster consists of three phases:

(1) The first phase copies the GRID HOME to the new node, starts the GRID stack on it, and updates the inventory. (2) The second phase copies the ORACLE HOME (database software) to the new node and updates the inventory. (3) The third phase creates the new database instance (including the undo tablespace, redo threads and initialization parameters) and updates the OCR. The steps for removing a node are exactly the reverse of the steps above. Nodes are added and removed while the cluster stays online, so no downtime is needed and client business is not affected. The new node's ORACLE_BASE path is created automatically during the add process and does not need to be created manually.

A few notes: (1) Before removing a node, it is recommended to back up the OCR manually; if the removal fails, the cluster can be restored from the original OCR. (2) The OUI does not configure SSH user equivalence when adding a node, but the addNode.sh script requires user equivalence for both the oracle and grid users, so it must be set up in advance.

Originally published on 2018-07-13. Shared from the WeChat public account DB宝.