Reconfiguring and Uninstalling 11gR2 Grid Infrastructure

Oracle 11g R2 Grid Infrastructure offers more flexibility in installation and configuration than earlier releases. During a Grid Infrastructure installation, root.sh frequently fails before the installation completes, and the error has to be fixed before you can continue. In this release we can simply run the rootcrs.pl script to reconfigure Grid Infrastructure, instead of having to uninstall Grid Infrastructure, fix the problem, and install it all over again. This article describes how to use rootcrs.pl, and how to completely remove Grid Infrastructure with deinstall.

1. Introduction to the rootcrs.pl command

#Command location: $GRID_HOME/crs/install
#Command description:
#  This command is mainly used to maintain and manage CRS, covering patch, upgrade, downgrade, deconfig, and so on.
#  Run perldoc rootcrs.pl to see the full documentation.
[root@linux1 install]# ./rootcrs.pl -h
Unknown option: h
Usage:
      rootcrs.pl [-verbose] [-upgrade | -patch] [-hahome <directory>]
                 [-paramfile <parameter-file>] 
                 [-deconfig | -downgrade] [-force] [-lastnode]
                 [-downgrade] [-oldcrshome <old crshome path>] [-version <old crs version>]  
                 [-unlock [-crshome <path to crs home>]]

      Options:
       -verbose    Run this script in verbose mode
       -upgrade    Oracle HA is being upgraded from previous version
       -patch      Oracle HA is being upgraded to a patch version
       -hahome     Complete path of Oracle Clusterware home
       -paramfile  Complete path of file specifying HA parameter values
       -lastnode   Force the node this is executing on to be considered the
                   last node of the install and perform actions associated
                   with configurig the last node
       -downgrade  Downgrade the clusterware
       -version    For use with downgrade; special handling is required if
                   downgrading to 9i. This is the old crs version in the format
                   A.B.C.D.E (e.g 11.1.0.6.0).
       -deconfig   Remove Oracle Clusterware to allow it to be uninstalled or reinstalled.
       -force      Force the executon of steps in delete that cannot be verified 
                   to be safe
       -unlock     Unlock CRS home 
       -crshome    Complete path of crs home. Use with unlock option.
       -oldcrshome For use with downgrade. Complete path of the old crs home.

      If neither -upgrade nor -patch is supplied, a new install is performed

      To see the full manpage for this program, execute:
        perldoc rootcrs.pl     

#When root.sh fails, we can run this command with the -deconfig option to clear the CRS configuration, then fix the problem based on the logs (or apply the required patch) and rerun root.sh, as sketched below.
#The patch, upgrade, and downgrade usages of this command are not covered in detail here.
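For illustration, the recovery flow just described might look like the following sketch. It assumes root.sh failed on a single node and that $GRID_HOME points to the Grid home used throughout this article; run both commands as root on the affected node.

  # perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
#After fixing the problem reported in the logs (or applying the patch), rerun root.sh:
  # $GRID_HOME/root.sh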

2. Reconfiguring Grid Infrastructure and ASM

#Reconfiguring Grid Infrastructure does not remove the binaries that have already been copied; it simply returns the environment to the state it was in before CRS was configured. The steps are as follows:

a. Log in as root and run the following command on every node except the last one:
  # perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
  
b. Again as root, run the following command on the last node. This clears the OCR configuration and the voting disk.
  # perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

c. If ASM disks were used, continue with the steps below so that the disks become ASM candidate disks again (this wipes all ASM disk groups); a quick verification sketch follows these steps.
  # dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100
  # /etc/init.d/oracleasm deletedisk DATA /dev/sdb1
  # /etc/init.d/oracleasm createdisk DATA /dev/sdb1
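
A minimal verification sketch after the reset, assuming ASMLib is in use as above and that the disk recreated in step c is named DATA:

#The recreated DATA disk should be listed as an ASMLib disk again:
  # /etc/init.d/oracleasm listdisks
#The clusterware stack should no longer be reported as online on the deconfigured node:
  # $GRID_HOME/bin/crsctl check crs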

#Author : Robinson
#Blog   : http://blog.csdn.net/robinson_0612

3. Completely removing Grid Infrastructure

#11g R2 Grid Infrastructure also provides a complete uninstall capability: the deinstall command replaces the OUI-based approach for removing the clusterware and ASM, returning the environment to the state it was in before Grid was installed.
#This command stops the cluster and removes the binaries together with all of their related configuration information.
#Command location: $GRID_HOME/deinstall
#Below is a concrete example of running this command. During the operation you must supply some interactive input, and you also need a separate session as root to run the clean-up scripts it generates under /tmp.
[root@linux1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@linux1 bin]# cd ../deinstall/
[root@linux1 deinstall]# pwd
/u01/app/11.2.0/grid/deinstall
[root@linux1 deinstall]# ./deinstall
You must not be logged in as root to run ./deinstall.
Log in as Oracle user and rerun ./deinstall.
[root@linux1 deinstall]# su grid
[grid@linux1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-07-16_05-54-03-PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################## CHECK OPERATION START ########################
Install check configuration START

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: linux1,linux2

Install check configuration END

Traces log file: /tmp/deinstall2013-07-16_05-54-03-PM/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/netdc_check207506844451155733.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/asmcadc_check2698133635629979531.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/11.2.0/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS 
  that use this ASM instance(s).
 If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. 
 Do you  want to modify above information (y|n) [n]: 

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linux1,linux2
Oracle Home selected for de-install is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-07-16_05-54-03-PM/logs/deinstall_deconfig2013-07-16_05-54-37-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-07-16_05-54-03-PM/logs/deinstall_deconfig2013-07-16_05-54-37-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/asmcadc_clean3319637107726750003.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/netdc_clean9055263637610505743.log

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

Run the following command as the root user or the administrator on node "linux2".

/tmp/deinstall2013-07-16_05-54-03-PM/perl/bin/perl -I/tmp/deinstall2013-07-16_05-54-03-PM/perl/lib 
-I/tmp/deinstall2013-07-16_05-54-03-PM/crs/install /tmp/deinstall2013-07-16_05-54-03-PM/crs/install/rootcrs.pl -force  
-delete -paramfile /tmp/deinstall2013-07-16_05-54-03-PM/response/deinstall_Ora11g_gridinfrahome1.rsp

Run the following command as the root user or the administrator on node "linux1".

/tmp/deinstall2013-07-16_05-54-03-PM/perl/bin/perl -I/tmp/deinstall2013-07-16_05-54-03-PM/perl/lib
-I/tmp/deinstall2013-07-16_05-54-03-PM/crs/install /tmp/deinstall2013-07-16_05-54-03-PM/crs/install/rootcrs.pl -force 
-delete -paramfile /tmp/deinstall2013-07-16_05-54-03-PM/response/deinstall_Ora11g_gridinfrahome1.rsp -lastnode

Press Enter after you finish running the above commands

<----------------------------------------

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linux2' : Done

Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'linux2' : Done

Delete directory '/u01/app/oraInventory' on the remote nodes 'linux2' : Done

Delete directory '/u01/app/grid' on the remote nodes 'linux2' : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


Oracle install clean START

Clean install operation removing temporary directory '/tmp/install' on node 'linux1'
Clean install operation removing temporary directory '/tmp/install' on node 'linux2'

Oracle install clean END

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "linux2"
Oracle Clusterware is stopped and successfully de-configured on node "linux1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'linux2'.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linux1,linux2' at the end of the session.

Oracle install successfully cleaned up the temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
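
Per the instruction in the clean operation summary above, one manual step remains. The sketch below runs it as root on both nodes and then optionally confirms that the directories reported as deleted are really gone (both paths should come back with "No such file or directory"):

#Run as root on linux1 and linux2, as instructed in the summary above:
  # rm -rf /etc/oraInst.loc
#Optional check -- both directories were removed by deinstall and should no longer exist:
  # ls -d /u01/app/11.2.0/grid /u01/app/oraInventory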
