1. View disk group information
asmcmd lsdg
This shows the details of the disk groups (total size, free space, and so on).
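A small sketch of running it (as the grid user; the lsdsk call is my own addition, not one of the original steps):
su - grid
# total and free space per disk group
asmcmd lsdg
# list the disks that belong to a specific group (here DG2, the group used later in the text)
asmcmd lsdsk -G DG2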
2. View the current state of the votedisk
@H_404_5@crsctl query css votedisk

3. crsctl replace votedisk +DG2

This moves the voting disk from the DATA disk group into ASM disk group DG2.
The votedisk now sits in ASM disk group DG2. We then try to manually add a new voting disk in ASM disk group DG1, but the add fails; a sketch of the attempt follows.
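A sketch of the failed attempt (the target +DG1 is the disk group named above); Clusterware rejects it because voting files stored in ASM are managed as a set and can only be relocated with crsctl replace votedisk, not added individually:
# fails while the voting files live in an ASM disk group
crsctl add css votedisk +DG1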
A voting disk configuration also cannot span an ASM disk group and an OCFS file system at the same time.
For example, with the voting disk currently in the DG2 disk group, the following command, which tries to add a voting disk on the OCFS file system, does not succeed either:
crsctl add css votedisk /myocfs1/vdfile5
To actually move the voting disk onto the OCFS file system, use crsctl replace instead:
crsctl replace votedisk /myocfs1/vd4
Check whether it succeeded:
crsctl query css votedisk
You can also delete a redundant votedisk; a specific voting file is removed by its FUID (File Universal Id):
crsctl delete css votedisk afb49b9a67304f9ebfaf4278c2eeeb71
6. Managing the OCR
Check the state of the OCR:
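The check itself is just ocrcheck run as root; a minimal sketch:
# as root: reports the OCR version, space usage, and each configured OCR location
ocrcheck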
The OCR is currently in the DATA disk group.
If the system has an OCFS file system, we can add an OCR mirror on OCFS:
touch /myocfs1/ocr_mirror1
chown root:oinstall /myocfs1/ocr_mirror1
chmod 640 /myocfs1/ocr_mirror1
Add the OCR mirror:
ocrconfig -add /myocfs1/ocr_mirror1
You can also add an OCR mirror in an ASM disk group:
ocrconfig -add +DG2
ocrcheck
You can also remove an OCR mirror:
ocrconfig -delete /myocfs1/ocr_mirror1
Remove the mirror stored in the ASM disk group:
ocrconfig -delete +DG2
ocrcheck
DG2 no longer appears in the output.
7. Replacing an OCR location
touch /myocfs1/ocr_new
chown root:oinstall /myocfs1/ocr_new
chmod 640 /myocfs1/ocr_new
Replace /myocfs1/ocr_mirror1 with the new location /myocfs1/ocr_new:
ocrconfig -replace /myocfs1/ocr_mirror1 -replacement /myocfs1/ocr_new
You can then replace that location with a new location in DG1:
ocrconfig -replace /myocfs1/ocr_new -replacement +DG1
Check whether it succeeded:
ocrcheck
8. Backing up the OCR
First check the existing backups:
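A sketch of the check (ocrconfig -showbackup lists both the automatic and the manual backups with their paths and timestamps):
# list existing automatic and manual OCR backups
ocrconfig -showbackup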
Take a manual backup:
ocrconfig -manualbackup
Check the OCR backups again:
ocrconfig -showbackup
9. Backing up the OCR, method two
You can use ocrdump.
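Run with no arguments (as root), ocrdump writes the contents of the current OCR to a text file named OCRDUMPFILE in the current directory; a minimal sketch:
# dump the live OCR to ./OCRDUMPFILE for inspection
ocrdump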
You can also dump a specified OCR backup file; by default the dump is written to the current directory as a file named OCRDUMPFILE:
ocrdump -backupfile /taryartar/12c/grid_home/cdata/mycluster/backup_20150520_231258.ocr
Exporting the OCR:
Export the OCR; ocrEXP is the name of the export file, written to the current directory:
ocrconfig -export ocrEXP
Importing the OCR:
Import an OCR file; ocrEXP is the name of the OCR export file to import:
ocrconfig -import ocrEXP
10. Choosing between the two OCR backup methods
There are two pairings:
ocrconfig -manualbackup and ocrconfig -restore
ocrconfig -export and ocrconfig -import
The first pairing is generally the recommended way to back up and restore the OCR.
If you use the second (export/import) approach, you need to shut down the cluster and then run the export or import in order to get a consistent OCR file.
The file produced by ocrconfig -manualbackup and the file produced by ocrconfig -export have different formats.
A backup taken with ocrconfig -manualbackup must therefore be restored with ocrconfig -restore, and a file exported with ocrconfig -export must be imported with ocrconfig -import.
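A consolidated sketch of the two pairings (the file paths are placeholders of mine, not from the text):
# physical backup / restore pair
ocrconfig -manualbackup
ocrconfig -showbackup
ocrconfig -restore /path/to/backup_YYYYMMDD_HHMMSS.ocr   # clusterware must be down on all nodes

# logical export / import pair (run with the cluster shut down)
ocrconfig -export /tmp/ocr.exp
ocrconfig -import /tmp/ocr.exp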
11. Backing up and restoring voting disks (voting files)
1. Voting disk backups are taken as part of the OCR backups.
3. If all voting disks are lost, manual intervention is required.
12. Recovering when all voting disks and the cluster registry are lost (cluster file system)
2. Restore the voting disk.
3. Start the cluster.
4. Check the integrity of the OCR and the voting disk.
5. Check the state of the cluster.
Experiment:
=== OCR and voting disk recovery test (cluster file system) ===
ocrcheck
Record the current OCR location: /myocfs1/ocr_new
crsctl query css votedisk
Record the voting disk location:
/myocfs1/storage/vd6B
ocrconfig -manualbackup
Record the OCR backup file just created:
/taryartar/12c/grid_home/cdata/mycluster/backup_20150521_152003.ocr
ocrconfig -showbackup
rm -rf /myocfs1/ocr_new
Delete the voting disk the cluster is currently using:
rm -rf /myocfs1/storage/vd6B
Check the OCR and voting file status:
ocrcheck
The OCR can no longer be queried; the command returns an error.
crsctl query css votedisk
The votedisk is still listed, but the underlying file no longer exists.
Check the current cluster membership:
olsnodes
Forcibly stop the clusterware on the current node. The -f flag is required; without it the stack cannot be shut down completely while the cluster is in a failed state.
crsctl stop crs -f
The shutdown above may not be complete, so we shut down the ASM instance manually:
SQL> shutdown abort;
Kill any remaining cluster-related processes directly at the operating-system level:
ps -ef |grep ora|awk '{print $2}'|xargs kill -9
ps -ef |grep asm|awk '{print $2}'|xargs kill -9
ps -ef |grep grid|awk '{print $2}'|xargs kill -9
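A slightly more defensive variant of the same cleanup (my own adjustment, not the author's exact commands): exclude the grep processes themselves and skip the kill entirely when nothing matches:
ps -ef | grep -E 'ora|asm|grid' | grep -v grep | awk '{print $2}' | xargs -r kill -9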
ocrconfig -showbackup
Check whether the OCR file the cluster was using before the shutdown still exists:
on one node
Here it no longer exists:
ll /myocfs1/ocr_new
touch /myocfs1/ocr_new
chown root:oinstall /myocfs1/ocr_new
chmod 640 /myocfs1/ocr_new
ll /myocfs1/ocr_new
Restore the OCR from the specified backup file:
ocrconfig -restore /taryartar/12c/grid_home/cdata/mycluster/backup_20150521_152003.ocr
ll /myocfs1/ocr_new
After the restore the OCR file is no longer zero bytes in size,
and ocrcheck now reports the OCR information normally.
ocrcheck
Next, recover the votedisk.
Start the clusterware in exclusive mode, on one node only:
on one node
crsctl start crs -excl
Once it is up, check the voting disk status:
crsctl query css votedisk
If the original location were still usable, you could simply replace back to the original voting disk. Here the original file has been deleted, so the location is unusable and the following command fails:
crsctl replace votedisk /myocfs1/storage/vd6B
Instead, add a new voting disk:
crsctl add css votedisk /myocfs1/storage/vd6C
Then check the voting disk status again:
crsctl query css votedisk
crsctl stop crs -f
Now that both the OCR and the votedisk are in place, the cluster can be started properly. Run this on all nodes:
crsctl start crs
Confirm that the cluster is healthy.
Check the OCR integrity on all cluster nodes:
cluvfy comp ocr -n all -verbose
Check the votedisk integrity on all cluster nodes:
cluvfy comp vdisk -n all -verbose
crsctl check cluster -all
crsctl stat res -t
13. OCR and voting disk recovery test (ASM)
First check the current state of the OCR in the cluster:
ocrcheck
The OCR is in the DG2 disk group.
Check the votedisk:
crsctl query css votedisk
[root@rac1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 3d071f92464a4fe2bf8fa4ce05f897f0 (/dev/raw/raw4) [DG2]
Located 1 voting disk(s).
[root@rac1 ~]#
The votedisk is also in the DG2 disk group.
So both the OCR and the votedisk live in the DG2 disk group.
Next, determine which disks make up the DG2 disk group:
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Thu Nov 9 22:31:33 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> column path format a20;
SQL> set linesize 500
SQL> select dg.NAME as disk_group, d.NAME, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, PATH from V$ASM_DISK d, V$ASM_DISKGROUP dg
  2  where d.GROUP_NUMBER=dg.GROUP_NUMBER
  3  order by dg.NAME;
DISK_GROUP NAME MOUNT_S HEADER_STATU MODE_ST PATH
------------------------------ ------------------------------ ------- ------------ ------- --------------------
DATA DATA_0001 CACHED MEMBER ONLINE /dev/raw/raw2
DATA DATA_0000 CACHED MEMBER ONLINE /dev/raw/raw1
DG1 DG1_0000 CACHED MEMBER ONLINE /dev/raw/raw3
DG2 DG2_0000 CACHED MEMBER ONLINE /dev/raw/raw4
SQL>
Destroy the disk used by the disk group that holds the OCR and the votedisk:
dd if=/dev/zero of=/dev/raw/raw4 bs=1024k count=1
Re-running the SQL query above, the DG2 disk group is no longer visible, which confirms it has been destroyed.
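To confirm the disk header is gone you can also dump it directly (kfed ships with Grid Infrastructure; this step is my addition, not part of the original text):
# run as the grid user; dumps the ASM disk header, which after the dd comes back empty/invalid
kfed read /dev/raw/raw4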
Now look at which nodes make up the cluster:
olsnodes
There are four nodes, but only two of them are running in my environment.
Forcibly stop the clusterware on every node:
crsctl stop crs -f
Node one:
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DG2.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.tar.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac1'
CRS-2673: Attempting to stop 'ora.DG1.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac1'
CRS-2676: Start of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.DG2.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DG1.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.tar.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac2'
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac2' succeeded
CRS-2676: Start of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.rac4.vip' on 'rac1'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac2'
CRS-2677: Stop of 'ora.rac4.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac4.vip' on 'rac2'
CRS-2676: Start of 'ora.rac4.vip' on 'rac2' succeeded
CRS-2675: Stop of 'ora.oc4j' on 'rac1' Failed
CRS-2679: Attempting to clean 'ora.oc4j' on 'rac1'
CRS-2681: Clean of 'ora.oc4j' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac2'
CRS-2676: Start of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac2' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]#
Node two:
[root@rac2 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.tar.db' on 'rac2'
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.DG1.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.DG2.dg' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2797: Shutdown is already in progress for 'rac2', waiting for it to complete
CRS-2797: Shutdown is already in progress for 'rac2', waiting for it to complete
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.storage' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 ~]#
Clean up any remaining processes.
Shut down the ASM instance:
SQL> shutdown abort;
ps -ef |grep ora|awk '{print $2}'|xargs kill -9
ps -ef |grep asm|awk '{print $2}'|xargs kill -9
ps -ef |grep grid|awk '{print $2}'|xargs kill -9
Clean up on node one:
Clean up on node two:
Start the clusterware in exclusive mode on one node only (-nocrs additionally keeps the CRS daemon from starting, since the disk group holding the OCR is about to be recreated):
on one node
crsctl start crs -excl -nocrs
Once it is up, you can use sqlplus against the ASM instance to look at the disk groups.
Note: query the disk groups on the same node where you started exclusive mode.
su - grid
sqlplus / as sysasm
select name,state from v$asm_diskgroup;
The disk group information is no longer visible.
Even after a disk group has been damaged, the instance may still hold leftover metadata for it, so drop the disk group first:
drop diskgroup dg2 force including contents;
Ignore any error here; the DG2 disk group no longer exists anyway. Then recreate the disk group:
create diskgroup dg2
external redundancy
disk
'/dev/raw/raw4' name nr_1;
After creating it, change the disk group compatibility attributes (three statements):
ALTER DISKGROUP dg2 SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0' ;
ALTER DISKGROUP dg2 SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
ALTER DISKGROUP dg2 SET ATTRIBUTE 'compatible.advm' = '12.1.0.0.0';
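As an alternative sketch (my own consolidation of the same steps, not the author's commands), the compatibility attributes can also be set in the CREATE DISKGROUP statement itself:
-- recreate DG2 and set the compatibility attributes in one statement
create diskgroup dg2 external redundancy
  disk '/dev/raw/raw4' name nr_1
  attribute 'compatible.asm'   = '12.1.0.0.0',
            'compatible.rdbms' = '12.1.0.0.0',
            'compatible.advm'  = '12.1.0.0.0';
Either way, the v$asm_diskgroup query below should then show DG2 again.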
sudo su - grid
sqlplus / as sysasm
select name,state from v$asm_diskgroup;
Now find the most recent OCR backup:
ocrconfig -showbackup
The most recent one is /taryartar/12c/grid_home/cdata/mycluster/backup00.ocr. Restore from it:
ocrconfig -restore /taryartar/12c/grid_home/cdata/mycluster/backup00.ocr
This time the restore succeeds.
When it is done, check the OCR:
ocrcheck
Next, recover the voting disk.
First check the voting disk status:
crsctl query css votedisk
There is currently no voting disk.
Restore the voting disk into DG2:
crsctl replace votedisk +dg2
Check the votedisk:
crsctl query css votedisk
The voting disk is back.
Now that both the OCR and the voting disk have been recovered, the clusterware can be shut down:
crsctl stop crs -f
Note: only node one needs to be stopped here, because we started the clusterware in exclusive mode on node one only.
Then restart the clusterware in normal mode on all nodes:
crsctl start crs
Node one:
Node two:
Check that the cluster services are all healthy:
crsctl check crs
Node one:
Node two:
crsctl check cluster -all
Check that the cluster resources are all healthy:
crsctl stat res -t
su - grid
cluvfy comp ocr -n all -verbose
[grid@rac2 ~]$ cluvfy comp ocr -n all -verbose

Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes are free of non-clustered, local-only configurations

Checking daemon liveness...
Check: Liveness of "CRS daemon"
  Node Name    Running?
  ------------ ------------
  rac2         yes
  rac1         yes
Result: Liveness check passed for "CRS daemon"

Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
The disk group for ocr location "+DG2/mycluster/OCRFILE/registry.255.959643883" is available on all the nodes

Checking OCR dump functionality
OCR dump check passed

NOTE: This check does not verify the integrity of OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Verification of OCR integrity was successful.