
SCN Synchronization Between Database Nodes in RAC
Starting with version 10.2, RAC uses the BOC (broadcast-on-commit) scheme by default; under this scheme an SCN is propagated only when it changes on a node (for example, on commit).
BOC propagates the SCN in one of two ways: a direct mode and an indirect mode.

1. A user commits a transaction (COMMIT); LGWR starts writing the redo and the user session waits on the event log file sync.
2. LGWR sends the commit SCN to the other nodes in one of two ways:
LGWR hands the commit SCN to a local LMS process (the indirect mode; when several LMS processes exist, LGWR picks one by hash for load balancing);
LGWR sends the commit SCN directly to an LMS process on the remote node (the direct mode; the remote LMS is likewise chosen by hash).
3. LGWR writes the transaction's redo entries to the redo log file (this happens concurrently with step 2).
4. The remote node receives and acknowledges the commit SCN:
Direct mode: the remote LMS receives the SCN and replies to the local LMS;
Indirect mode: the local LMS forwards the SCN to all remote nodes, and each remote LMS receives it and replies to the local LMS.
5. The I/O for LGWR's redo write completes and is acknowledged back to LGWR (whether step 4 or step 5 finishes first depends on network and I/O speed, so their order varies).
6. The local LMS notifies LGWR once all remote nodes have acknowledged the commit SCN.
7. LGWR posts the user process that the transaction is complete.
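Whether BOC itself is enabled is commonly associated with the hidden parameter _immediate_commit_propagation (TRUE by default from 10.2 onwards). A minimal sketch for checking it as SYS, assuming the usual x$ksppi / x$ksppcv join used for hidden parameters:

SQL> select a.ksppinm name, b.ksppstvl value
       from x$ksppi a, x$ksppcv b
      where a.indx = b.indx
        and a.ksppinm = '_immediate_commit_propagation';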

#################
Step 1: Test statements
SQL> alter system set events 'trace[GCS_BSCN] disk=low';  -- run on both nodes

System altered.
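The insert statement below assumes table t2 already exists; a minimal definition consistent with the ID column shown in the query output would be:

SQL> create table t2 (id number);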

insert into t2 select dbms_flashback.get_system_change_number from dual;
commit;

SQL> select * from t2;

ID
———-
1536519
1536528
1538467
1538851
1540593

SQL> select to_number('1781f2','xxxxxxxxx') from dual;

TO_NUMBER('1781F2','XXXXXXXXX')
——————————-
1540594
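Before reading the traces referenced in the next steps, the trace file paths of the LGWR and LMS background processes can be looked up on each node (a sketch; v$process exposes the process name and trace file from 11g onwards):

SQL> select pname, tracefile from v$process where pname in ('LGWR','LMS0','LMS1');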
2. Check the LGWR process trace file on node 1:
1.lgwr
*** 2016-12-17 20:42:03.068
2016-12-17 20:42:03.068586 : kjbbcastscn[0x0.1781f2][2][4]  -- direct broadcast of the SCN; 1781f2 = 1540594 decimal, matching the value captured in the test
2016-12-17 20:42:03.068650 : kjbsentscn[0x0.1781f3][to 2]
2016-12-17 20:42:03.068785 : kjbmscscn(to 2.1)(nrcvr 1)(ack 0)(scn 0x0.1781f2)

3. Check the LMS process trace file on node 2:
2.lms
*** 2016-12-17 20:42:03.069
2016-12-17 20:42:03.069106 : kjbrcvdscn[0x0.1781f3][from 1][idx
2016-12-17 20:42:03.069265 : kjbrcvdscn[no bscn <= rscn 0x0.1781f3][from 1]
2016-12-17 20:42:03.069282 : kjbmpscn(nnodes 2)(from 1)(scn 0x0.1781f2)  -- received SCN 1781f2 from node 1
2016-12-17 20:42:03.069327 : kjbmscscn(to 1.1)(nrcvr 1)(ack 1)(scn 0x0.1781f2)  -- the message is acknowledged (ACK)
2016-12-17 20:42:03.069370 : kjbsentscn[0x0.1781f3][to 1]

4. Check the LMS process trace file on node 1 (in indirect mode the LMS trace would also contain kjbsendscn entries):

*** 2016-12-17 20:42:03.069
2016-12-17 20:42:03.069810 : kjbrcvdscn[0x0.1781f3][from 2][idx
2016-12-17 20:42:03.069872 : kjbrcvdscn[no bscn <= rscn 0x0.1781f3][from 2]
2016-12-17 20:42:03.069886 : kjbmpscnack(nnodes 2)(from 2)(scn 0x0.1781f2)  -- node 2's ACK for SCN 1781f2 is received

*** 2016-12-17 20:42:12.459

5. Check the LGWR process trace file on node 1:
*** 2016-12-17 20:42:03.154
2016-12-17 20:42:03.154085 : kjbmaxackscn[mscn 0x0.1781f3][nnodes 2][pend -2115384694]  -- can be read as LGWR seeing node 2's ACK for the SCN(?)
2016-12-17 20:42:03.154140 : kcrfw_post[0x0.1781f2][slot 0][mscn 0x0.1781f3][real 1]  -- the LGWR write is complete
2016-12-17 20:42:03.154161 : kcrfw_post[lgwr post][lscn 0x0.1781f3][oods 0x0.1781f1][nods 0x0.1781f3]

————————————-
1. lgwr -- in this run the LGWR trace does not show the remote node's acknowledgement of the SCN being received, hence the scn not acked entries:
*** 2016-12-17 20:08:00.032
2016-12-17 20:08:00.032857 : kjbbcastscn[0x0.177b24][2][4]
2016-12-17 20:08:00.032916 : kjbsentscn[0x0.177b25][to 2]
2016-12-17 20:08:00.033108 : kjbmscscn(to 2.1)(nrcvr 1)(ack 0)(scn 0x0.177b24)
2016-12-17 20:08:00.033644 : kjbmaxackscn[mscn 0x0.177b23][nnodes 2][pend 1]
2016-12-17 20:08:00.033665 : kcrfw_post[0x0.177b24][slot 0][mscn 0x0.177b23][real 1]
2016-12-17 20:08:00.033674 : kcrfw_post[scn not acked][lscn 0x0.177b25][oods 0x0.177b23][nods 0x0.177b24]  -- note the scn not acked here

2. 2.lms
*** 2016-12-17 20:08:00.033
2016-12-17 20:08:00.033787 : kjbmpscn(nnodes 2)(from 1)(scn 0x0.177b24)
2016-12-17 20:08:00.033813 : kjbmscscn(to 1.1)(nrcvr 1)(ack 1)(scn 0x0.177b24)
2016-12-17 20:08:00.033839 : kjbsentscn[0x0.177b25][to 1]

3. 1.lms
*** 2016-12-17 20:08:00.034
2016-12-17 20:08:00.034879 : kjbrcvdscn[0x0.177b25][from 2][idx
2016-12-17 20:08:00.034933 : kjbmaxackscn[mscn 0x0.177b25][nnodes 2][pend 1]
2016-12-17 20:08:00.034941 : kjbrcvdscn[old ods 0x0.177b24][new 0x0.177b25][mscn 0x0.177b25]
2016-12-17 20:08:00.034962 : kjb_post_fg[kjbrcvdscn][scn 0x0.177b24][wait time 1319us]
2016-12-17 20:08:00.034971 : kjbmpscnack(nnodes 2)(from 2)(scn 0x0.177b24)

4. 1.lgwr
2016-12-17 20:08:00.033665 : kcrfw_post[0x0.177b24][slot 0][mscn 0x0.177b23][real 1]
2016-12-17 20:08:00.033674 : kcrfw_post[scn not acked][lscn 0x0.177b25][oods 0x0.177b23][nods 0x0.177b24]
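Once the test is finished, the diagnostic event can be switched off again on both nodes; a sketch using the same event syntax as above (assuming 'off' disables the trace component, as with other UTS trace events):

SQL> alter system set events 'trace[GCS_BSCN] off';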

Introduction to and Test of SCN Synchronization Between Nodes in an Oracle RAC Environment

1. Introduction to the Hang Manager (HM) feature

 

Hang Manager (HM) is a new feature of Oracle 11g; HM exists only in RAC databases.

When diagnosing database problems we frequently run into database or process hangs. Generally speaking, a hang has one of two common causes:

A deadlock (cycle): this kind of hang persists until the cycle is broken.

A blocker: a process that, while holding certain resources, blocks other processes. Depending on the blocking chain, a blocker can be classified as an immediate blocker or a root blocker.

A root blocker is usually in one of two states:

1. The root blocker is idle; in this case terminating the process resolves the problem.

2. The root blocker is waiting on a resource unrelated to the database (for example, I/O); terminating the process may resolve the problem, but from the database's point of view the cause is outside the database.

On the database side, Oracle has several deadlock detection mechanisms. This article introduces the 11g RAC feature, Hang Manager.

Hang Manager works in the following basic steps:

1. Allocate a memory area to hold hang analyze dump information.

2. Periodically collect hang analyze dump information (local and global).

3. Analyze the collected dump information and determine whether a hang exists.
4. Use the analysis results to resolve the hang.

Each step is described in more detail below.

Step 1: Oracle allocates a memory area, called the hang analysis cache, to hold the collected hang analyze dumps. This area exists in the database instance on every node.

Step 2: Oracle periodically collects hang analyze information. Because HM is a RAC feature, the hang analyze levels include both local and global. The background process that collects these dumps is DIA0 (introduced in 11g). By default a local-level hang analyze dump is collected every 3 seconds and a global-level dump every 10 seconds.
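The dumps that DIA0 writes go to its trace file, which can be located per instance with a query like the following (a sketch; 'DIA0' is the name the process registers under v$process.pname):

SQL> select inst_id, pname, tracefile from gv$process where pname = 'DIA0';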

Step 3: Because every node collects hang analyze dumps, each instance has its own DIA0 process that performs the local hang analysis. For a RAC database, however, many hangs involve processes on several instances, so the DIA0 process of one instance must act as master and analyze the information collected from all instances. In 11g the DIA0 process of the instance with the lowest node number becomes the HM master; after an instance-level reconfiguration, the master DIA0 process is re-elected among the remaining instances.

 

HM detects hangs as follows: after analyzing several hang analyze dumps (one analysis every 30 seconds, and at least three analyses), it may find a set of processes with wait relationships among them (an open chain) that has not changed over that period (for example, they keep waiting on the same wait event); those processes are then suspected of being hung. If further verification confirms the wait relationships, HM identifies the root blocker of the open chain and tries to resolve the hang by terminating it. For a deadlock, the approach is instead to terminate one of the processes in the wait cycle.
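HM essentially automates what a DBA would otherwise do by hand with hanganalyze; for comparison, a cluster-wide hang analysis dump can still be taken manually from a SYSDBA session (a sketch; level 3 is the commonly used level):

SQL> oradebug setmypid
SQL> oradebug -g all hanganalyze 3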

 

 

Step 4: Once the hang is confirmed, a resolution is chosen according to its type. HM cannot resolve the hang if the processes involved meet any of the following conditions:

  1. Processes outside the database layer are also involved in the hang, for example processes of an ASM instance.
  2. The hang originates at the user application level, for example a TX lock.
  3. Parallel query is involved.
  4. Manual intervention is required, for example the blocker is waiting on "log file switch" (this wait is usually caused by a full filesystem under the archive destination; even if HM identifies the blocker, the hang cannot be resolved by terminating it).

If the hang is of a type HM cannot resolve, HM keeps tracking it.
For hangs that HM can resolve, the fix is to terminate the root blocker. If that blocker is one of Oracle's critical background processes, however, terminating it would crash the instance, so the scope of HM's resolution is limited. The scope is controlled by the hidden parameter "_hang_resolution_scope", which takes three values: off (HM does not resolve hangs at all), process (the default in 11.2.0.4; HM may terminate the blocker provided it is not a critical background process), and instance (HM may terminate the blocker even if it is a critical background process, which terminates the instance).

 

 

 

The hang detection process

1. Detection phase (DETECT)

The database allocates a memory area to hold hanganalyze dump information; it exists in the database instance on every node. In this phase all local sessions of the instance are scanned for sessions that may be hung. Each scan is called a snap; by default 3 snaps are kept, taken 32 seconds apart, the interval being controlled by the hidden parameter "_hang_detection_interval". Once one or more sessions appear in all 3 snaps (and are therefore suspected of being hung), a REQHM request is sent to the master DIA0 process and HM enters the hang analysis phase; if no hung sessions are detected, HM stays in this phase, apart from periodically entering the HAONLY phase.

2. Hang analysis phase (Hang Analysis)

Once hung sessions have been detected, the DIA0 process that issued the REQHM request sends all detected session information to the master DIA0 process. A global hang analysis is started and wait-for graphs (WFGs) are built; this may span nodes in order to find local or remote blockers. When this phase completes, HM moves on to the analysis phase.

As noted above, the background process that collects these dumps is DIA0 (introduced in 11g); by default it collects a local-level hanganalyze dump every 3 seconds and a global-level dump every 10 seconds. Each instance has its own DIA0 process for local hang analysis, but because many RAC hangs span instances, the DIA0 process of the instance with the lowest node number acts as the HM master and analyzes the information collected from all instances; after an instance-level reconfiguration the master DIA0 process is re-elected among the remaining instances.

 

3. Root-cause analysis phase (ANALYZE)

The root waiter and immediate waiter session information is matched against the Hang Signature Cache (HSC). If a matching record is found, its timestamp and count are updated; if not, a new hang record is created. After this phase HM returns to the detection phase. Whether the verification phase runs is controlled by separate parameters.

4. Verification phase (VERIFY)

The node hosting the suspect sessions checks whether those sessions are still hung and sends the result to the master DIA0 process. If the sessions are still hung, the hang is confirmed and HM enters the victim phase.

5. Resolution phase (VICTIM)

If the hidden parameter "_HANG_RESOLUTION_SCOPE" is set to process, HM terminates the session; if terminating the session fails, it terminates the process.

If "_HANG_RESOLUTION_SCOPE" is set to instance and the victim is a critical background process, the instance is killed.

 

2. Parameters related to HM

 

_hang_resolution = TRUE or FALSE. Controls whether HM resolves hangs at all.

_hang_resolution_scope = OFF, PROCESS, or INSTANCE. Controls the scope within which HM is allowed to resolve a hang.

_hang_detection = <number>. The interval, in seconds, at which HM checks for hangs; the default is 30.
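These are hidden parameters, so they do not appear in v$parameter; as SYS they can be listed with a query such as the following (a sketch against the x$ksppi / x$ksppcv fixed tables):

SQL> select a.ksppinm name, b.ksppstvl value, a.ksppdesc description
       from x$ksppi a, x$ksppcv b
      where a.indx = b.indx
        and a.ksppinm like '\_hang%' escape '\';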

Introduction to the Oracle Database Hang Manager (HM) Feature

Problem background:

In a cloud environment a 12.2 RAC had to be installed. Because of a network issue, the HAIP addresses (169.254.*.*) on the private interconnect could not communicate between the two hosts, so only one node of the ASM/DB could be started; the error was the classic "PMON ...: terminating the instance due to error 481".

Resolution:

1. Ask the cloud vendor to open up the HAIP (169.254.*.*) traffic in the back-end virtualization layer; this could not be achieved.
2. Decide that the ASM/DB instances will not use HAIP and fall back to the interconnect address model used by earlier releases (HAIP stays enabled at the cluster level and the 169.254.*.* virtual IPs still appear in ifconfig; only the ASM/DB instances are configured not to use them).
3. For problems caused by HAIP failures, see the MOS note:
ASM on Non-First Node (Second or Others) Fails to Start: PMON (ospid: nnnn): terminating the instance due to error 481 (Doc ID 1383737.1)
To disable HAIP altogether, see the "Disable HAIP" section of HOWTO: Remove/Disable HAIP on Exadata (Doc ID 2524069.1).

The official way to disable it:

Disable the HAIP resource and the dependencies on it:
crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=0" -init
crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd)pullup(ora.cssd,ora.ctssd)weak(ora.drivers.acfs)'" -init
crsctl modify res ora.asm -attr "STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init
Then restart the cluster.

===
Check the status:
crsctl stat res ora.cluster_interconnect.haip -init
crsctl start res ora.cluster_interconnect.haip -init

#############################
To restore the HAIP resource (then restart the cluster):
crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init

The official recommendation:

1. Run "crsctl stop crs" on all nodes to stop the CRS stack.

2. On one node, run the following commands to disable HAIP:
crsctl start crs -excl -nocrs
crsctl stop res ora.asm -init
crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=0" -init
crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd)pullup(ora.cssd,ora.ctssd)weak(ora.drivers.acfs)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init
crsctl stop crs

3. Repeat step 2 on the other nodes.

4. Run "crsctl start crs" on all nodes to restart the CRS stack, then test further.

 

The simplest fix, confirmed by testing

There is no need to disable the HAIP feature at all; simply set the cluster_interconnects parameter of the ASM/DB instances to each node's own private interconnect IP.
The steps are as follows:
DB:
SQL> alter system set cluster_interconnects='10.100.19.18' scope=spfile sid='bdcsq1';
SQL> alter system set cluster_interconnects='10.100.19.20' scope=spfile sid='bdcsq2';
ASM:
SQL> alter system set cluster_interconnects='10.100.19.18' scope=spfile sid='+ASM1';
SQL> alter system set cluster_interconnects='10.100.19.20' scope=spfile sid='+ASM2';
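After the instances are restarted, the interconnect each instance is actually using can also be confirmed from the data dictionary (a sketch; gv$cluster_interconnects shows the address and where it was taken from):

SQL> select inst_id, name, ip_address, is_public, source from gv$cluster_interconnects;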

Check which interconnect is used at startup in the ASM and DB alert logs:
Right after the parameters are read during startup, the alert log shows the interconnect being used, for example:
2021-11-13T11:34:06.408938+08:00
Cluster Communication is configured to use IPs from: GPnP
IP: 10.100.19.18 Subnet: 10.100.19.0 ===>>> HAIP is not used
KSIPC Loopback IP addresses(OSD):
127.0.0.1
KSIPC Available Transports: UDP:TCP
KSIPC: Client: KCL Transport: NONE
KSIPC: Client: DLM Transport: NONE

……………………
NOTE: remote asm mode is remote (mode 0x2; from cluster type)
2021-11-11T09:26:08.588753-05:00
Cluster Communication is configured to use IPs from: GPnP
IP: 169.254.253.252 Subnet: 169.254.0.0 ===>>> HAIP is used
KSIPC Loopback IP addresses(OSD):
127.0.0.1
KSIPC Available Transports: UDP:TCP
KSIPC: Client: KCL Transport: UDP
KSIPC: Client: DLM Transport: UDP
KSIPC CAPABILITIES :IPCLW:GRPAM:TOPO:DLL
KSXP: ksxpsg_ipclwtrans: 2 UDP
cluster interconnect IPC version: [IPCLW over UDP(mode 3) ]
IPC Vendor 1 proto 2
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

Handling Cluster Problems Caused by HAIP When Running Oracle RAC in a Cloud Environment

Symptom and the VNCR feature

In 12.2 RAC, only one node could register with the SCAN listener (only the node the SCAN listener happened to run on could register; the remote node could not).
After some troubleshooting the cause turned out to be the VNCR feature; manually adding the nodes to the SCAN listener's invitednodes attribute fixes it:
srvctl modify scan_listener -update -invitednodes "test01,test02"

See the MOS note How to Enable VNCR on RAC Database to Register only Local Instances (Doc ID 1914282.1); it describes the VNCR feature as follows:

On 11.2.0.4 RAC databases, the parameter VALID_NODE_CHECKING_REGISTRATION_listener_name is set to off.

However, sometimes this allows other instances in the same subnet to register against these listeners. We want to prevent that and allow only local instances to that RAC database to be registered with these listeners.

Version 12.1.0.2 Change to VNCR

On 12.1 RAC databases, the parameter VALID_NODE_CHECKING_REGISTRATION_listener_name for both local and scan listeners is set by default to ON/1/LOCAL
to specify valid node checking registration is on, and all local IP addresses can register.
12c introduces the option of using srvctl to set 'invitednodes' or 'invitedsubnets'.
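For listeners that are not managed by the Grid agent, the invited nodes can also be expressed directly in listener.ora; a hedged sketch (REGISTRATION_INVITED_NODES_<listener_name> is the Net Services parameter assumed here; Grid-managed listeners should be changed through srvctl as above so the agent does not overwrite the file):

VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
REGISTRATION_INVITED_NODES_LISTENER = (test01, test02)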

Troubleshooting steps:

1. Listener status and configuration file
[grid@test01 admin]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 12.2.0.1.0 – Production on 13-NOV-2021 13:42:08

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
————————
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 12.2.0.1.0 – Production
Start Date 13-NOV-2021 13:16:56
Uptime 0 days 0 hr. 25 min. 11 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /oracle/app/grid/diag/tnslsnr/test01/listener_scan1/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.100.18.252)(PORT=1521)))
Services Summary…
Service “test” has 1 instance(s).
Instance “test1”, status READY, has 1 handler(s) for this service…
Service “testXDB” has 1 instance(s).
Instance “test1”, status READY, has 1 handler(s) for this service…
The command completed successfully
The configuration file:
[grid@test01 admin]$ cat listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR)))) # line added by Agent
# listener.ora Network Configuration File: /oracle/app/12.2.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM = SUBNET

ASMNET1LSNR_ASM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = ASMNET1LSNR_ASM))
)
)

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@test01 admin]$

2. Check the listener resource configuration
[grid@test01 admin]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[grid@test01 admin]$ srvctl config scan_listener -a
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[grid@test01 admin]$ olsnodes
test01
test02

3. Modify the SCAN listener resource to add the invited nodes
[grid@test01 admin]$ srvctl modify scan_listener -update -invitednodes "test01,test02"
[grid@test01 admin]$ srvctl config scan_listener -a
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes: test01,test02
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

After restarting the SCAN listener, both instances register normally:
[grid@test01 admin]$ srvctl stop scan_listener
[grid@test01 admin]$ srvctl start scan_listener
[grid@test01 admin]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 12.2.0.1.0 – Production on 13-NOV-2021 13:49:13

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
————————
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 12.2.0.1.0 – Production
Start Date 13-NOV-2021 13:48:56
Uptime 0 days 0 hr. 0 min. 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /oracle/app/grid/diag/tnslsnr/test01/listener_scan1/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.100.18.252)(PORT=1521)))
Services Summary…
Service “-MGMTDBXDB” has 1 instance(s).
Instance “-MGMTDB”, status READY, has 1 handler(s) for this service…
Service “_mgmtdb” has 1 instance(s).
Instance “-MGMTDB”, status READY, has 1 handler(s) for this service…
Service “test” has 2 instance(s).
Instance “test1”, status READY, has 1 handler(s) for this service…
Instance “test2”, status READY, has 1 handler(s) for this service…
Service “testXDB” has 2 instance(s).
Instance “test1”, status READY, has 1 handler(s) for this service…
Instance “test2”, status READY, has 1 handler(s) for this service…
Service “d08316cafb076493e0531212640a3e9e” has 1 instance(s).
Instance “-MGMTDB”, status READY, has 1 handler(s) for this service…
Service “gimr_dscrep_10” has 1 instance(s).
Instance “-MGMTDB”, status READY, has 1 handler(s) for this service…
The command completed successfully

Analysis of Only One Node Registering with the SCAN Listener in an Oracle 12.2 RAC Environment

When a RAC installation hits problems and has to be redone, or when a critical cluster component such as the OCR is corrupted and the cluster environment must be rebuilt, the cluster configuration needs to be re-configured. This post documents a test of that process.
Reference documents:
How to Configure or Re-configure Grid Infrastructure With config.sh/config.bat (Doc ID 1354258.1)
How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

The test environment is 64-bit Linux with two-node RAC 11.2.0.4.160719.
The OCR/voting files share one ASM disk group, DATA, with the user data.
The deconfigure/reconfigure process is recorded below.

1. Check and record the current cluster configuration (output omitted)

crsctl stat res -t
crsctl stat res -p
crsctl query css votedisk
ocrcheck
oifcfg getif
srvctl config nodeapps -a
srvctl config scan
srvctl config asm -a
srvctl config listener -l listener -a
srvctl config database -d ludarac -a

 

2. Deconfigure the entire cluster

If the OCR and voting disks are not in ASM, or are in ASM but in a dedicated disk group containing no business data:
On all remote nodes, run as root:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once the command above has completed on all remote nodes, run as root on the local node:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode

If the OCR or voting disks are in ASM and the disk group also contains user data:
If the GI version is 11.2.0.3 with the fixes for bug 13058611 and bug 13001955 installed, or the GI version is 11.2.0.3.2 GI PSU or later:
On all remote nodes, run as root:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once the command above has completed on all remote nodes, run as root on the local node:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
The deconfiguration run is shown below:
Node 1:
[root@luda1 ~]# /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.57.0/255.255.255.0/eth0, type static
VIP exists: /luda1-vip/192.168.57.216/192.168.57.0/255.255.255.0/eth0, hosting node luda1
VIP exists: /luda2-vip/192.168.57.218/192.168.57.0/255.255.255.0/eth0, hosting node luda2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘luda1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘luda1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘luda1’
CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.ludarac.db’ on ‘luda1’
CRS-2677: Stop of ‘ora.ludarac.db’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘luda1’
CRS-2677: Stop of ‘ora.oc4j’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.oc4j’ on ‘luda2’
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda1’
CRS-2677: Stop of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.oc4j’ on ‘luda2’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘luda1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘luda1’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda1’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.crf’ on ‘luda1’
CRS-2677: Stop of ‘ora.crf’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘luda1’
CRS-2677: Stop of ‘ora.gipcd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘luda1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘luda1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘luda1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

Node 2:
[root@luda2 ~]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda2
ora.LISTENER.lsnr
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda2
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.ludarac.db
1 ONLINE OFFLINE
2 ONLINE ONLINE luda2 Open
ora.cvu
1 ONLINE ONLINE luda2
ora.oc4j
1 ONLINE ONLINE luda2
ora.scan1.vip
1 ONLINE ONLINE luda2
[root@luda2 ~]# /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.57.0/255.255.255.0/eth0, type static
VIP exists: /luda2-vip/192.168.57.218/192.168.57.0/255.255.255.0/eth0, hosting node luda2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Delete failed, or completed with errors.
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘luda2’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘luda2’
CRS-2673: Attempting to stop ‘ora.ludarac.db’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘luda2’
CRS-2677: Stop of ‘ora.ludarac.db’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘luda2’
CRS-2677: Stop of ‘ora.oc4j’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘luda2’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda2’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda2’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-2613: Could not find resource ‘ora.drivers.acfs’.
CRS-4000: Command Modify failed, or completed with errors.
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘luda2’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘luda2’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘luda2’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘luda2’
CRS-2676: Start of ‘ora.diskmon’ on ‘luda2’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-4611: Successful deletion of voting disk +DATA.
ASM de-configuration trace file location: /tmp/asmcadc_clean2017-03-18_02-47-04-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2017-03-18_02-47-04-PM.log for details.

CRS-2613: Could not find resource ‘ora.drivers.acfs’.
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘luda2’
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘luda2’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda2’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda2’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.crf’ on ‘luda2’
CRS-2677: Stop of ‘ora.crf’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘luda2’
CRS-2677: Stop of ‘ora.gipcd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘luda2’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘luda2’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘luda2’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

At this point the cluster deconfiguration is complete.

3. Run the reconfiguration script on node 1

As the grid user, run $GRID_HOME/crs/config/config.sh from a graphical session on node 1.

Then run the root.sh script on each of the two nodes:
[root@luda1 ~]# sh /u01/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to oracle-ohasd.conf
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘luda1’
CRS-2676: Start of ‘ora.mdnsd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘luda1’
CRS-2676: Start of ‘ora.gpnpd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘luda1’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘luda1’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘luda1’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘luda1’
CRS-2676: Start of ‘ora.diskmon’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘luda1’ succeeded

Disk Group DATA mounted successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 47b91b4201274f50bfd80f57ed0b29d7.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
— —– —————– ——— ———
1. ONLINE 47b91b4201274f50bfd80f57ed0b29d7 (/dev/asm-diskb) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start ‘ora.asm’ on ‘luda1’
CRS-2676: Start of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.DATA.dg’ on ‘luda1’
CRS-2676: Start of ‘ora.DATA.dg’ on ‘luda1’ succeeded
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded

[root@luda2 ~]# cd /u01/11.2.0/grid/
[root@luda2 grid]# sh root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization – successful
Adding Clusterware entries to oracle-ohasd.conf
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node luda1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded
[root@luda2 grid]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda1 Started
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda1
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.registry.acfs
ONLINE ONLINE luda1
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda1
ora.luda1.vip
1 ONLINE ONLINE luda1
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.cvu
1 ONLINE ONLINE luda1
ora.oc4j
1 ONLINE ONLINE luda1
ora.scan1.vip
1 ONLINE ONLINE luda1

4. Add the database, listener, and other resources back
Start the listeners:
[grid@luda1 ~]$ srvctl start listener -l listener -n luda1
[grid@luda1 ~]$ srvctl start listener -l listener -n luda2
Add the database:
[oracle@luda1 ~]$ srvctl add database -d ludarac -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ludarac/spfileludarac.ora
[oracle@luda1 ~]$ srvctl add instance -d ludarac -i ludarac1 -n luda1
[oracle@luda1 ~]$ srvctl add instance -d ludarac -i ludarac2 -n luda2
[oracle@luda1 ~]$ srvctl config database -d ludarac
Database unique name: ludarac
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ludarac/spfileludarac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ludarac
Database instances: ludarac1,ludarac2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@luda1 ~]$ srvctl start database -d ludarac
———————————
A minor issue when adding the DB:
[oracle@luda1 ~]$ srvctl add database -d ludarac -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ludarac/spfileludarac.ora
PRCS-1007 : Server pool ludarac already exists
PRCR-1086 : server pool ora.ludarac is already registered
Fix:
[grid@luda1 ~]$ crsctl delete serverpool ora.ludarac
————————

Check the resource status:
[root@luda2 grid]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.LISTENER.lsnr
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda1 Started
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda1
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.registry.acfs
ONLINE ONLINE luda1
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda1
ora.luda1.vip
1 ONLINE ONLINE luda1
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.ludarac.db
1 ONLINE ONLINE luda1 Open
2 ONLINE ONLINE luda2 Open
ora.cvu
1 ONLINE ONLINE luda2
ora.oc4j
1 ONLINE ONLINE luda2
ora.scan1.vip
1 ONLINE ONLINE luda1

At this point the cluster and database versions check out: everything matches what was in place before the deconfiguration, including the PSUs that had been applied.
[grid@luda1 ~]$ opatch lspatches
22502505;ACFS Patch Set Update : 11.2.0.4.160419 (22502505)
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
23054359;Database Patch Set Update : 11.2.0.4.160719 (23054359)
[oracle@luda1 ~]$ opatch lspatches
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
23054359;Database Patch Set Update : 11.2.0.4.160719 (23054359)
[grid@luda1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

SQL> select action_time,action,id,version,comments from dba_registry_history;

ACTION_TIME ACTION ID VERSION COMMENTS
—————————— ————— ———- ————— ——————————
24-AUG-13 12.03.45.119862 PM APPLY 0 11.2.0.4 Patchset 11.2.0.2.0
10-NOV-16 10.27.27.622013 AM APPLY 0 11.2.0.4 Patchset 11.2.0.2.0
10-NOV-16 01.20.24.929806 PM APPLY 160719 11.2.0.4 PSU 11.2.0.4.160719

Testing the Cluster Configuration Rebuild Process in an Oracle RAC Environment