Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: Clusterware Installation and Upgrade

Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

3. Installing Clusterware

3.1 Unpack the Clusterware installation media

Grant ownership of the directory holding the Oracle installation media to the oracle user:

[root@oradb27 media]# chown -R oracle:oinstall /u01/media/

As the oracle user, unpack the installation media:

[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz 
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio 
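
A quick sanity check that the media unpacked correctly; the clusterware directory should contain runInstaller along with the install and rootpre directories:

[oracle@oradb27 media]$ ls clusterware/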

Run the pre-installation check as root:

[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh 
No OraCM running 

3.2 Install Clusterware

Launch the Clusterware installer through Xmanager (XQuartz on macOS):

[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini 
Locate the following section:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
and append redhat-5, so that it reads:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
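
This edit can also be scripted; a minimal sketch, assuming the stock certified-versions line shown above is present (back up the file first):

[root@oradb27 install]# cp oraparam.ini oraparam.ini.bak
[root@oradb27 install]# sed -i 's/asianux-2$/asianux-2,redhat-5/' oraparam.ini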

[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller 

3.3 Run the root scripts when prompted

On node 1:

# At first, the 5 LUNs /dev/sd{a,b,c,d,e} had not been partitioned
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration

# After creating a single partition sd{a,b,c,d,e}1 on each of the 5 LUNs, the scripts ran successfully (see the partitioning sketch after this transcript)
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
CSS is inactive on these nodes.
        oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]# 
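
The partitioning mentioned in the transcript comments was done along these lines; a sketch, assuming the five shared LUNs really are /dev/sda through /dev/sde and hold no data (verify the device names before writing any partition table):

# create a single primary partition spanning each LUN, accepting the default cylinder bounds
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo -e "n\np\n1\n\n\nw" | fdisk $disk
done
# make the kernel re-read the partition tables (repeat on both nodes)
partprobe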

Oracle's official fix for this error is described in the MOS note: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):

Before running the root.sh on the first node in the cluster do the following:

  1. Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
  2.  Do the following steps as stated in the patch README to fix the problem.
      Note: clsfmt.bin need only be replaced on the 1st node of the cluster.
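
A sketch of those README steps; the path to the unpacked patch is hypothetical, so defer to the README shipped with the patch itself:

# on node 1 only, swap in the patched clsfmt.bin before running root.sh
cd /u01/app/oracle/product/10.2.0.5/crshome_1/bin
mv clsfmt.bin clsfmt.bin.orig        # keep the original binary
cp /u01/media/4679769/clsfmt.bin .   # hypothetical location of the unpacked patch
chmod 755 clsfmt.bin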

On node 2:

[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete

[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
        oradb28
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@oradb28 crshome_1]# 

To resolve the error above, edit the vipca and srvctl scripts under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:

[root@oradb28 bin]# ls -l vipca 
-rwxr-xr-x 1 oracle oinstall 5343 Jan  3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl 
-rwxr-xr-x 1 oracle oinstall 5828 Jan  3 09:44 srvctl
In both files, add the following line right after the block that sets and exports LD_ASSUME_KERNEL:
unset LD_ASSUME_KERNEL
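
The edit can be applied with sed; a minimal sketch, assuming both scripts still contain the stock "export LD_ASSUME_KERNEL" line (back the files up first):

[root@oradb28 bin]# cp vipca vipca.bak; cp srvctl srvctl.bak
[root@oradb28 bin]# sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl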

Re-run /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

No error was reported this time, but there was also no indication that vipca had run successfully.

3.4 Run vipca manually (may not be needed)

If vipca completed successfully in step 3.3, this step is not needed. Otherwise, run vipca manually on the last node. Here, running vipca manually hit another error:

[root@oradb28 bin]# ./vipca 
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Check the network interface information and register it manually:

[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0  192.168.1.0
eth1  10.10.10.0
[root@oradb28 bin]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 06:CB:72:01:07:88  
          inet addr:192.168.1.28  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2196870487 (2.0 GiB)  TX bytes:43268497 (41.2 MiB)

eth1      Link encap:Ethernet  HWaddr 22:1A:5A:DE:C1:21  
          inet addr:10.10.10.28  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1315035 (1.2 MiB)  TX bytes:1219689 (1.1 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:65167 (63.6 KiB)  TX bytes:65167 (63.6 KiB)

[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public | storage }

[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
[root@oradb28 bin]# 
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[root@oradb28 bin]# 

Once oifcfg getif returned the interface information correctly, re-running vipca completed successfully.
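
The registered nodeapps (VIP, GSD, ONS) can also be spot-checked with srvctl; these two commands are an extra check, not part of the original transcript:

[root@oradb28 bin]# ./srvctl status nodeapps -n oradb27
[root@oradb28 bin]# ./srvctl status nodeapps -n oradb28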

Back in the Clusterware installer window, the installation then also reported success. At this point the cluster status should be normal on both nodes:

[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 bin]$ 

[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb28 ~]$ 

4. Upgrading Clusterware

4.1 Unpack the patchset

[oracle@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[oracle@oradb27 media]$ cd Disk1/
[oracle@oradb27 Disk1]$ pwd
/u01/media/Disk1

4.2 Upgrade Clusterware

Start the Clusterware upgrade via XQuartz: ssh -X oracle@192.168.1.27

[oracle@oradb27 Disk1]$ ./runInstaller 

During the upgrade's pre-installation checks, one kernel parameter failed the requirement:

Checking for rmem_default=1048576; found rmem_default=262144.   Failed <<<<

Adjust /etc/sysctl.conf accordingly, then run sysctl -p to apply the change.
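
A minimal sketch of that change; the installer's rmem_default check corresponds to the net.core.rmem_default kernel parameter (edit the existing entry instead if one is already present):

[root@oradb27 ~]# echo "net.core.rmem_default = 1048576" >> /etc/sysctl.conf
[root@oradb27 ~]# sysctl -p    # repeat on both nodes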

4.3 Run the root scripts when prompted

    1.  Log in as the root user.
    2.  As the root user, perform the following tasks:

        a.  Shutdown the CRS daemons by issuing the following command:
                /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
        b.  Run the shell script located at:
                /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
            This script will automatically start the CRS daemons on the
            patched node upon completion.

    3.  After completing this procedure, proceed to the next node and repeat.

That is, run the following on each node in turn:

/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh

On node 1:

[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb27 bin]# 

On node 2:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb28 bin]# 

The upgrade succeeded. Confirm that the active CRS version is 10.2.0.5 and that the cluster status is normal:

[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb27 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27     
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27     
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27     
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28     
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28     
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28     
[oracle@oradb27 ~]$ 

At this point, the Oracle Clusterware installation (10.2.0.1) and upgrade (10.2.0.5) are complete.
