

Symptom and the VNCR Feature

On a 12.2 RAC, only one node could register with the SCAN listener (that is, only the node the SCAN was currently running on could register; the remote node could not).
Investigation showed this was caused by the VNCR (Valid Node Checking for Registration) feature; the fix is simply to add the node names to the SCAN listener's invitednodes attribute:
srvctl modify scan_listener -update -invitednodes "test01,test02"

Per MOS note How to Enable VNCR on RAC Database to Register only Local Instances (Doc ID 1914282.1), the VNCR feature is described as follows:

On 11.2.0.4 RAC databases, the parameter VALID_NODE_CHECKING_REGISTRATION_listener_name is set to off.

However, sometimes this allows other instances in the same subnet to register against these listeners. We want to prevent that and allow only local instances to that RAC database to be registered with these listeners.

Version 12.1.0.2 Change to VNCR

On 12.1 RAC databases, the parameter VALID_NODE_CHECKING_REGISTRATION_listener_name for both local and scan listeners is set by default to ON/1/LOCAL
to specify valid node checking registration is on, and all local IP addresses can register.
12c introduces the option of using srvctl to set 'invitednodes' or 'invitedsubnets'.
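The 12c-style check-and-fix flow can be sketched as a dry run. The vncr_commands helper is illustrative; it only prints the srvctl commands rather than executing them, so it is safe to try anywhere:

```shell
#!/bin/sh
# Dry-run sketch of the VNCR fix: print (do not run) the srvctl
# commands that inspect and then update the invited-node list.
vncr_commands() {
    nodes="$1"
    echo "srvctl config scan_listener -a"
    echo "srvctl modify scan_listener -update -invitednodes \"$nodes\""
    # Restart the SCAN listener so the change takes effect.
    echo "srvctl stop scan_listener"
    echo "srvctl start scan_listener"
}

vncr_commands "test01,test02"
```

Drop the echo wrappers (run the commands directly as the grid user) to apply the change for real.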

Troubleshooting steps:

1. Listener status and configuration file
[grid@test01 admin]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 13-NOV-2021 13:42:08

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 13-NOV-2021 13:16:56
Uptime 0 days 0 hr. 25 min. 11 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /oracle/app/grid/diag/tnslsnr/test01/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.100.18.252)(PORT=1521)))
Services Summary...
Service "test" has 1 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...
Service "testXDB" has 1 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...
The command completed successfully
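A quick way to confirm the symptom is to count registered instances per service in the lsnrctl output. A minimal sketch follows; the count_instances helper is illustrative, and a short sample of the output above is embedded so the script is self-contained (in practice you would pipe lsnrctl status listener_scan1 into it):

```shell
#!/bin/sh
# Count registered instances per service in "lsnrctl status" output.
# When VNCR blocks the remote node, every service shows only the
# local instance (test1 here).
count_instances() {
    awk '/^Service "/  { svc = $2 }
         /^Instance "/ { n[svc]++ }
         END { for (s in n) print s, n[s] }'
}

# Embedded sample of the listener output above:
sample='Service "test" has 1 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...
Service "testXDB" has 1 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...'

printf '%s\n' "$sample" | count_instances
```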
The configuration file:
[grid@test01 admin]$ cat listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR)))) # line added by Agent
# listener.ora Network Configuration File: /oracle/app/12.2.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM = SUBNET

ASMNET1LSNR_ASM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = ASMNET1LSNR_ASM))
)
)

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM = ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@test01 admin]$

2. Check the SCAN listener resource configuration
[grid@test01 admin]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[grid@test01 admin]$ srvctl config scan_listener -a
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[grid@test01 admin]$ olsnodes
test01
test02
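The invitednodes value is just the olsnodes output joined with commas, so the exact srvctl command can be generated from it. A small illustrative helper (the olsnodes output is simulated here):

```shell
#!/bin/sh
# Build the -invitednodes argument from a newline-separated node
# list, as produced by "olsnodes"; paste joins lines with commas.
nodes_to_invited() {
    paste -sd, -
}

# Simulated "olsnodes" output from the cluster above:
invited=$(printf 'test01\ntest02\n' | nodes_to_invited)
echo "srvctl modify scan_listener -update -invitednodes \"$invited\""
```

On the cluster itself this would be `olsnodes | paste -sd, -`.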

3. Modify the SCAN listener resource to add the invited nodes
[grid@test01 admin]$ srvctl modify scan_listener -update -invitednodes "test01,test02"
[grid@test01 admin]$ srvctl config scan_listener -a
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes: test01,test02
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

After restarting the SCAN listener, both instances register correctly:
[grid@test01 admin]$ srvctl stop scan_listener
[grid@test01 admin]$ srvctl start scan_listener
[grid@test01 admin]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 13-NOV-2021 13:49:13

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 13-NOV-2021 13:48:56
Uptime 0 days 0 hr. 0 min. 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/12.2.0/grid/network/admin/listener.ora
Listener Log File /oracle/app/grid/diag/tnslsnr/test01/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.100.18.252)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "test" has 2 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...
Instance "test2", status READY, has 1 handler(s) for this service...
Service "testXDB" has 2 instance(s).
Instance "test1", status READY, has 1 handler(s) for this service...
Instance "test2", status READY, has 1 handler(s) for this service...
Service "d08316cafb076493e0531212640a3e9e" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "gimr_dscrep_10" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully

Analysis of Why Only One Node Could Register with the SCAN Listener in an Oracle 12.2 RAC Environment

When a RAC installation runs into problems and has to be redone, or a critical cluster component such as the OCR is corrupted, the cluster configuration must be rebuilt (re-configured). This article tests and documents that process.
Reference documents:
How to Configure or Re-configure Grid Infrastructure With config.sh/config.bat (Doc ID 1354258.1)
How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

The test environment is 64-bit Linux with a two-node RAC at 11.2.0.4.160719.
The OCR/voting disks share an ASM disk group (DATA) with the database data.
The deconfigure/reconfigure process is recorded below.

1. Check and record the current cluster configuration (output omitted)

crsctl stat res -t
crsctl stat res -p
crsctl query css votedisk
ocrcheck
oifcfg getif
srvctl config nodeapps -a
srvctl config scan
srvctl config asm -a
srvctl config listener -l listener -a
srvctl config database -d ludarac -a
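To make this record-keeping step repeatable, each inspection command's output can be captured to its own file for comparison after the rebuild. A minimal sketch; the /tmp/precheck_ naming and the dry-run stub are assumptions, and on a live cluster you would replace the echo with eval "$cmd" to capture real output:

```shell
#!/bin/sh
# Save each inspection command to its own file so the pre-deconfig
# state can be compared afterwards. File names are the command text
# with spaces, slashes, and hyphens turned into underscores.
cmd_to_file() {
    echo "$1" | tr ' /-' '___'
}

save_config() {
    cmd="$1"
    # Dry-run stub: writes the command text itself. On a real
    # cluster, use: eval "$cmd" > "/tmp/precheck_...".txt
    echo "$cmd" > "/tmp/precheck_$(cmd_to_file "$cmd").txt"
}

for c in "crsctl stat res -t" "ocrcheck" "srvctl config scan"; do
    save_config "$c"
done
ls /tmp/precheck_*.txt
```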

 

2. Deconfigure the entire cluster

If the OCR and voting disks are not on ASM, or are on ASM in a dedicated disk group holding no user data:
On every remote node, run as root:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once that has completed on all remote nodes, run as root on the local (last) node:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode

If the OCR or voting disks are on ASM and the disk group also holds user data,
and the GI version is 11.2.0.3 with the fixes for bug 13058611 and bug 13001955 installed, or is 11.2.0.3.2 GI PSU or later:
On every remote node, run as root:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once that has completed on all remote nodes, run as root on the local (last) node:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
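The ordering rule above (all remote nodes first, then the local node with -lastnode, plus -keepdg when the disk group also holds user data) can be sketched as a dry-run planner. deconfig_plan and the ssh wrapping are illustrative; it only prints the commands:

```shell
#!/bin/sh
# Print the deconfig commands in the required order: every remote
# node first, then the local node with the extra flags. Dry run only.
deconfig_plan() {
    local_node="$1"; keepdg="$2"; shift 2
    base='$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose'
    for n in "$@"; do
        [ "$n" = "$local_node" ] && continue
        echo "ssh root@$n $base"
    done
    echo "ssh root@$local_node $base $keepdg -lastnode"
}

# DATA also holds user data in this walkthrough, so pass -keepdg;
# luda2 is treated as the local (last) node, as in the logs below.
deconfig_plan luda2 -keepdg luda1 luda2
```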
The deconfigure process is shown below:
Node 1:
[root@luda1 ~]# /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.57.0/255.255.255.0/eth0, type static
VIP exists: /luda1-vip/192.168.57.216/192.168.57.0/255.255.255.0/eth0, hosting node luda1
VIP exists: /luda2-vip/192.168.57.218/192.168.57.0/255.255.255.0/eth0, hosting node luda2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘luda1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘luda1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘luda1’
CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.ludarac.db’ on ‘luda1’
CRS-2677: Stop of ‘ora.ludarac.db’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘luda1’
CRS-2677: Stop of ‘ora.oc4j’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.oc4j’ on ‘luda2’
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda1’
CRS-2677: Stop of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.oc4j’ on ‘luda2’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘luda1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘luda1’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘luda1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda1’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.crf’ on ‘luda1’
CRS-2677: Stop of ‘ora.crf’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘luda1’
CRS-2677: Stop of ‘ora.gipcd’ on ‘luda1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘luda1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘luda1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘luda1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

Node 2:
[root@luda2 ~]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda2
ora.LISTENER.lsnr
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda2
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.ludarac.db
1 ONLINE OFFLINE
2 ONLINE ONLINE luda2 Open
ora.cvu
1 ONLINE ONLINE luda2
ora.oc4j
1 ONLINE ONLINE luda2
ora.scan1.vip
1 ONLINE ONLINE luda2
[root@luda2 ~]# /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.57.0/255.255.255.0/eth0, type static
VIP exists: /luda2-vip/192.168.57.218/192.168.57.0/255.255.255.0/eth0, hosting node luda2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Stop failed, or completed with errors.
CRS-2613: Could not find resource ‘ora.registry.acfs’.
CRS-4000: Command Delete failed, or completed with errors.
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘luda2’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘luda2’
CRS-2673: Attempting to stop ‘ora.ludarac.db’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘luda2’
CRS-2677: Stop of ‘ora.ludarac.db’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘luda2’
CRS-2677: Stop of ‘ora.oc4j’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘luda2’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda2’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda2’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-2613: Could not find resource ‘ora.drivers.acfs’.
CRS-4000: Command Modify failed, or completed with errors.
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘luda2’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘luda2’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘luda2’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘luda2’
CRS-2676: Start of ‘ora.diskmon’ on ‘luda2’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-4611: Successful deletion of voting disk +DATA.
ASM de-configuration trace file location: /tmp/asmcadc_clean2017-03-18_02-47-04-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2017-03-18_02-47-04-PM.log for details.

CRS-2613: Could not find resource ‘ora.drivers.acfs’.
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘luda2’
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.asm’ on ‘luda2’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘luda2’
CRS-2677: Stop of ‘ora.ctssd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘luda2’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘luda2’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘luda2’
CRS-2677: Stop of ‘ora.cssd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.crf’ on ‘luda2’
CRS-2677: Stop of ‘ora.crf’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘luda2’
CRS-2677: Stop of ‘ora.gipcd’ on ‘luda2’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘luda2’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘luda2’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘luda2’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

At this point the cluster deconfiguration is complete.

3. Run the reconfiguration script on node 1

As the grid user, run $GRID_HOME/crs/config/config.sh from a graphical session on node 1.

Then run the root.sh script on each of the two nodes:
[root@luda1 ~]# sh /u01/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to oracle-ohasd.conf
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘luda1’
CRS-2676: Start of ‘ora.mdnsd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘luda1’
CRS-2676: Start of ‘ora.gpnpd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘luda1’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘luda1’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘luda1’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘luda1’
CRS-2676: Start of ‘ora.diskmon’ on ‘luda1’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘luda1’ succeeded

Disk Group DATA mounted successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 47b91b4201274f50bfd80f57ed0b29d7.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
— —– —————– ——— ———
1. ONLINE 47b91b4201274f50bfd80f57ed0b29d7 (/dev/asm-diskb) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start ‘ora.asm’ on ‘luda1’
CRS-2676: Start of ‘ora.asm’ on ‘luda1’ succeeded
CRS-2672: Attempting to start ‘ora.DATA.dg’ on ‘luda1’
CRS-2676: Start of ‘ora.DATA.dg’ on ‘luda1’ succeeded
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded

[root@luda2 ~]# cd /u01/11.2.0/grid/
[root@luda2 grid]# sh root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization – successful
Adding Clusterware entries to oracle-ohasd.conf
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node luda1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded
[root@luda2 grid]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda1 Started
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda1
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.registry.acfs
ONLINE ONLINE luda1
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda1
ora.luda1.vip
1 ONLINE ONLINE luda1
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.cvu
1 ONLINE ONLINE luda1
ora.oc4j
1 ONLINE ONLINE luda1
ora.scan1.vip
1 ONLINE ONLINE luda1

4. Add the database, listener, and other resources
Start the listener:
[grid@luda1 ~]$ srvctl start listener -l listener -n luda1
[grid@luda1 ~]$ srvctl start listener -l listener -n luda2
Add the database:
[oracle@luda1 ~]$ srvctl add database -d ludarac -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ludarac/spfileludarac.ora
[oracle@luda1 ~]$ srvctl add instance -d ludarac -i ludarac1 -n luda1
[oracle@luda1 ~]$ srvctl add instance -d ludarac -i ludarac2 -n luda2
[oracle@luda1 ~]$ srvctl config database -d ludarac
Database unique name: ludarac
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ludarac/spfileludarac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ludarac
Database instances: ludarac1,ludarac2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@luda1 ~]$ srvctl start database -d ludarac
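The re-registration commands above follow a fixed pattern: one srvctl add database, then one srvctl add instance per node. A dry-run helper (add_db_plan is hypothetical) can generate them for any node list:

```shell
#!/bin/sh
# Print the srvctl commands that re-register an administrator-managed
# database plus one instance per node. Dry run: prints, does not run.
add_db_plan() {
    db="$1"; ohome="$2"; spfile="$3"; shift 3
    echo "srvctl add database -d $db -o $ohome -p $spfile"
    i=1
    for n in "$@"; do
        echo "srvctl add instance -d $db -i ${db}${i} -n $n"
        i=$((i + 1))
    done
}

add_db_plan ludarac /u01/app/oracle/product/11.2.0/dbhome_1 \
    +DATA/ludarac/spfileludarac.ora luda1 luda2
```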
———————————
A minor issue when adding the database:
[oracle@luda1 ~]$ srvctl add database -d ludarac -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ludarac/spfileludarac.ora
PRCS-1007 : Server pool ludarac already exists
PRCR-1086 : server pool ora.ludarac is already registered
Fix:
[grid@luda1 ~]$ crsctl delete serverpool ora.ludarac
————————

Check resource status:
[root@luda2 grid]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.LISTENER.lsnr
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.asm
ONLINE ONLINE luda1 Started
ONLINE ONLINE luda2 Started
ora.gsd
OFFLINE OFFLINE luda1
OFFLINE OFFLINE luda2
ora.net1.network
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.ons
ONLINE ONLINE luda1
ONLINE ONLINE luda2
ora.registry.acfs
ONLINE ONLINE luda1
ONLINE ONLINE luda2
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE luda1
ora.luda1.vip
1 ONLINE ONLINE luda1
ora.luda2.vip
1 ONLINE ONLINE luda2
ora.ludarac.db
1 ONLINE ONLINE luda1 Open
2 ONLINE ONLINE luda2 Open
ora.cvu
1 ONLINE ONLINE luda2
ora.oc4j
1 ONLINE ONLINE luda2
ora.scan1.vip
1 ONLINE ONLINE luda1

At this point the cluster and database versions all check out, matching the pre-deconfiguration state, including the applied PSUs:
[grid@luda1 ~]$ opatch lspatches
22502505;ACFS Patch Set Update : 11.2.0.4.160419 (22502505)
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
23054359;Database Patch Set Update : 11.2.0.4.160719 (23054359)
[oracle@luda1 ~]$ opatch lspatches
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
23054359;Database Patch Set Update : 11.2.0.4.160719 (23054359)
[grid@luda1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

SQL> select action_time,action,id,version,comments from dba_registry_history;

ACTION_TIME ACTION ID VERSION COMMENTS
—————————— ————— ———- ————— ——————————
24-AUG-13 12.03.45.119862 PM APPLY 0 11.2.0.4 Patchset 11.2.0.2.0
10-NOV-16 10.27.27.622013 AM APPLY 0 11.2.0.4 Patchset 11.2.0.2.0
10-NOV-16 01.20.24.929806 PM APPLY 160719 11.2.0.4 PSU 11.2.0.4.160719

A Walkthrough of Rebuilding an Oracle RAC Cluster Configuration (Deconfigure/Reconfigure)

1. Problem Analysis

 

On an 11.2.0.4 RAC on HP-UX, a database restore was being performed on node 2. The database is about 11 TB; roughly 1.3 TB of archived logs had to be applied, i.e. 655 archive log files of close to 2 GB each.

The customer started the recover on the afternoon of the 8th with: Recover database until time "to_date('2015-12-19 15:40:00','yyyy-mm-dd hh24:mi:ss')"; coming in on the morning of the 9th, they found that after a whole night only 13 archive log files had been applied.

The problem was resolved on site and the root cause investigated.

Below is the analysis together with the related reproduction tests.

 

1.1 Review the backup and restore scripts

First, the backup and restore scripts were reviewed. The backup files reside in an HP Data Protector tape library, and the restore script is no different from a normal one. The recover script contains:

Recover database until time "to_date('2015-12-19 15:40:00','yyyy-mm-dd hh24:mi:ss')";

 

1.2 Check current system resource usage

 

Partial output follows; system resource usage looks fairly normal.

$>/usr/sbin/kctune|grep filecache
filecache_max                       6527386828  5%            Imm (auto disabled)
filecache_min                       6527386828  5%            Imm (auto disabled)

$>sar -u 2 5

HP-UX P2AAAAA2 B.11.31 U ia64    12/29/15

16:36:24    %usr    %sys    %wio   %idle
16:36:26      10       0       0      90
16:36:28      10       0       0      90
16:36:30      10       0       0      90
16:36:32      10       0       0      90
16:36:34      10       0       0      89

Average       10       0       0      90

 

1.3 Check session wait events

Querying v$session shows a large number of parallel recovery slave processes; most sessions are waiting on parallel recovery slave next change.

 

col EVENT for a45
col USERNAME for a10
set linesize 180
col PROGRAM for a30
set pagesize 100
select sid,USERNAME,PROGRAM,event,SECONDS_IN_WAIT,WAIT_TIME from v$session;

SID USERNAME                       PROGRAM                        EVENT

———- —————————— —————————— ————————————————–

1 SYS                            sqlplus@P2AAAAA2 (TNS V1-V3)   class slave wait

3 SYS                            oracle@P2AAAAA2 (PR0S)         parallel recovery slave next change

10 SYS                            oracle@P2AAAAA2 (PR0T)         parallel recovery slave next change

17                                oracle@P2AAAAA2 (PMON)         pmon timer

18 SYS                            oracle@P2AAAAA2 (PR00)         parallel recovery read buffer free

20 SYS                            oracle@P2AAAAA2 (PR0U)         parallel recovery slave next change

25                                oracle@P2AAAAA2 (PSP0)         rdbms ipc message

27 SYS                            oracle@P2AAAAA2 (PR0V)         parallel recovery slave next change

33                                oracle@P2AAAAA2 (VKTM)         VKTM Logical Idle Wait

34 SYS                            sqlplus@P2AAAAA2 (TNS V1-V3)   SQL*Net message from client

36 SYS                            oracle@P2AAAAA2 (PR0W)         parallel recovery slave next change

41                                oracle@P2AAAAA2 (GEN0)         rdbms ipc message

42 SYS                            oracle@P2AAAAA2 (PR01)         parallel recovery slave next change

49                                oracle@P2AAAAA2 (DIAG)         DIAG idle wait

50 SYS                            oracle@P2AAAAA2 (PR02)         parallel recovery slave next change

57                                oracle@P2AAAAA2 (DBRM)         rdbms ipc message

58 SYS                            oracle@P2AAAAA2 (PR03)         parallel recovery slave next change

65                                oracle@P2AAAAA2 (PING)         PING

66 SYS                            oracle@P2AAAAA2 (PR04)         parallel recovery slave next change

73                                oracle@P2AAAAA2 (ACMS)         rdbms ipc message

74 SYS                            oracle@P2AAAAA2 (PR05)         parallel recovery slave next change

81                                oracle@P2AAAAA2 (DIA0)         DIAG idle wait

82 SYS                            oracle@P2AAAAA2 (PR06)         parallel recovery slave next change

89                                oracle@P2AAAAA2 (LMON)         rdbms ipc message

90 SYS                            oracle@P2AAAAA2 (PR07)         parallel recovery slave next change

97                                oracle@P2AAAAA2 (LMD0)         ges remote message

98 SYS                            oracle@P2AAAAA2 (PR08)         parallel recovery slave next change

105                                oracle@P2AAAAA2 (RMS0)         rdbms ipc message

106 SYS                            oracle@P2AAAAA2 (PR09)         parallel recovery slave next change

113                                oracle@P2AAAAA2 (LMHB)         GCR sleep

114 SYS                            oracle@P2AAAAA2 (PR0A)         parallel recovery slave next change

121                                oracle@P2AAAAA2 (MMAN)         rdbms ipc message

122 SYS                            oracle@P2AAAAA2 (PR0B)         parallel recovery slave next change

129                                oracle@P2AAAAA2 (DBW0)         rdbms ipc message

130 SYS                            oracle@P2AAAAA2 (PR0C)         parallel recovery slave next change

137                                oracle@P2AAAAA2 (DBW1)         rdbms ipc message

138 SYS                            oracle@P2AAAAA2 (PR0D)         parallel recovery slave next change

145                                oracle@P2AAAAA2 (DBW2)         rdbms ipc message

146 SYS                            oracle@P2AAAAA2 (PR0E)         parallel recovery slave next change

153                                oracle@P2AAAAA2 (DBW3)         rdbms ipc message

154 SYS                            oracle@P2AAAAA2 (PR0F)         parallel recovery slave next change

161                                oracle@P2AAAAA2 (LGWR)         rdbms ipc message

162 SYS                            oracle@P2AAAAA2 (PR0G)         parallel recovery slave next change

169 SYS                            oracle@P2AAAAA2 (PR0H)         parallel recovery slave next change

170                                oracle@P2AAAAA2 (CKPT)         rdbms ipc message

177                                oracle@P2AAAAA2 (SMON)         smon timer

178 SYS                            oracle@P2AAAAA2 (PR0I)         parallel recovery slave next change

185                                oracle@P2AAAAA2 (RECO)         rdbms ipc message

186 SYS                            oracle@P2AAAAA2 (PR0J)         free buffer waits

193                                oracle@P2AAAAA2 (RBAL)         rdbms ipc message

194 SYS                            oracle@P2AAAAA2 (PR0K)         parallel recovery slave next change

201                                oracle@P2AAAAA2 (ASMB)         ASM background timer

202 SYS                            oracle@P2AAAAA2 (PR0L)         parallel recovery slave next change

209                                oracle@P2AAAAA2 (MMON)         rdbms ipc message

211 SYS                            oracle@P2AAAAA2 (PR0M)         parallel recovery slave next change

217                                oracle@P2AAAAA2 (MMNL)         rdbms ipc message

218 SYS                            oracle@P2AAAAA2 (PR0N)         parallel recovery slave next change

225 SYS                            oracle@P2AAAAA2 (PR0O)         parallel recovery slave next change

233                                oracle@P2AAAAA2 (MARK)         wait for unread message on broadcast channel

235 SYS                            oracle@P2AAAAA2 (PR0P)         parallel recovery slave next change

241 SYS                            oracle@P2AAAAA2 (PR0Q)         parallel recovery slave next change

249 SYS                            sqlplus@P2AAAAA2 (TNS V1-V3)   SQL*Net message to client

250 SYS                            oracle@P2AAAAA2 (PR0R)         parallel recovery slave next change

 

63 rows selected.

SQL> select sid,USERNAME,PROGRAM,event from v$session where PROGRAM like '%PR%';

 

SID USERNAME                       PROGRAM                        EVENT

———- —————————— —————————— ————————————————–

3 SYS                            oracle@P2AAAAA2 (PR0T)         parallel recovery slave next change

10 SYS                            oracle@P2AAAAA2 (PR0U)         parallel recovery slave next change

20 SYS                            oracle@P2AAAAA2 (PR0V)         parallel recovery slave next change

27 SYS                            oracle@P2AAAAA2 (PR0W)         parallel recovery slave next change

36 SYS                            oracle@P2AAAAA2 (PR01)         parallel recovery slave next change

42 SYS                            oracle@P2AAAAA2 (PR02)         parallel recovery slave next change

50 SYS                            oracle@P2AAAAA2 (PR03)         parallel recovery slave next change

58 SYS                            oracle@P2AAAAA2 (PR04)         parallel recovery slave next change

66 SYS                            oracle@P2AAAAA2 (PR05)         parallel recovery slave next change

74 SYS                            oracle@P2AAAAA2 (PR06)         parallel recovery slave next change

82 SYS                            oracle@P2AAAAA2 (PR07)         parallel recovery slave next change

90 SYS                            oracle@P2AAAAA2 (PR08)         parallel recovery slave next change

98 SYS                            oracle@P2AAAAA2 (PR09)         parallel recovery slave next change

106 SYS                            oracle@P2AAAAA2 (PR0A)         parallel recovery slave next change

114 SYS                            oracle@P2AAAAA2 (PR0B)         parallel recovery slave next change

122 SYS                            oracle@P2AAAAA2 (PR0C)         parallel recovery slave next change

130 SYS                            oracle@P2AAAAA2 (PR0D)         parallel recovery slave next change

138 SYS                            oracle@P2AAAAA2 (PR0E)         parallel recovery slave next change

146 SYS                            oracle@P2AAAAA2 (PR0F)         parallel recovery slave next change

154 SYS                            oracle@P2AAAAA2 (PR0G)         parallel recovery slave next change

162 SYS                            oracle@P2AAAAA2 (PR0H)         parallel recovery slave next change

169 SYS                            oracle@P2AAAAA2 (PR0I)         parallel recovery slave next change

178 SYS                            oracle@P2AAAAA2 (PR0J)         parallel recovery slave next change

186 SYS                            oracle@P2AAAAA2 (PR0K)         parallel recovery slave next change

194 SYS                            oracle@P2AAAAA2 (PR0L)         parallel recovery slave next change

202 SYS                            oracle@P2AAAAA2 (PR0M)         parallel recovery slave next change

211 SYS                            oracle@P2AAAAA2 (PR0N)         parallel recovery slave next change

218 SYS                            oracle@P2AAAAA2 (PR0O)         parallel recovery slave next change

225 SYS                            oracle@P2AAAAA2 (PR0P)         parallel recovery slave next change

235 SYS                            oracle@P2AAAAA2 (PR0Q)         parallel recovery slave next change

241 SYS                            oracle@P2AAAAA2 (PR0R)         parallel recovery slave next change

249 SYS                            oracle@P2AAAAA2 (PR0S)         parallel recovery slave next change

250 SYS                            oracle@P2AAAAA2 (PR00)         parallel recovery control message reply

 

33 rows selected.

$>ps -ef|grep ora_pro*

oracle 25652     1  3 16:18:12 ?         1:05 ora_pr0k_p1aaaaa2

oracle 25674     1 40 16:18:12 ?         0:57 ora_pr0u_p1aaaaa2

oracle 25623     1  9 16:18:11 ?         1:13 ora_pr06_p1aaaaa2

oracle 29093 27745  0 16:34:45 pts/5     0:00 grep ora_pro*

oracle 25658     1 28 16:18:12 ?         0:54 ora_pr0m_p1aaaaa2

oracle 25617     1 21 16:18:11 ?         1:17 ora_pr03_p1aaaaa2

oracle 25672     1 16 16:18:12 ?         1:21 ora_pr0t_p1aaaaa2

oracle 25662     1 33 16:18:12 ?         1:19 ora_pr0o_p1aaaaa2

oracle 25629     1 16 16:18:12 ?         1:17 ora_pr09_p1aaaaa2

oracle 25666     1 43 16:18:12 ?         1:03 ora_pr0q_p1aaaaa2

oracle 25645     1 31 16:18:12 ?         1:24 ora_pr0h_p1aaaaa2

oracle 25649     1 42 16:18:12 ?         0:52 ora_pr0j_p1aaaaa2

oracle 25678     1 22 16:18:12 ?         1:19 ora_pr0w_p1aaaaa2

oracle 25647     1 33 16:18:12 ?         1:21 ora_pr0i_p1aaaaa2

oracle 25670     1 21 16:18:12 ?         0:55 ora_pr0s_p1aaaaa2

oracle 25676     1 12 16:18:12 ?         1:13 ora_pr0v_p1aaaaa2

oracle 25627     1 16 16:18:11 ?         1:10 ora_pr08_p1aaaaa2

oracle 25668     1  4 16:18:12 ?         0:53 ora_pr0r_p1aaaaa2

oracle 25643     1 32 16:18:12 ?         1:06 ora_pr0g_p1aaaaa2

oracle 25613     1 16 16:18:11 ?         1:10 ora_pr01_p1aaaaa2

oracle 25641     1 19 16:18:12 ?         0:59 ora_pr0f_p1aaaaa2

oracle 25654     1  7 16:18:12 ?         0:57 ora_pr0l_p1aaaaa2

oracle 25639     1  0 16:18:12 ?         1:15 ora_pr0e_p1aaaaa2

oracle 25664     1 46 16:18:12 ?         1:14 ora_pr0p_p1aaaaa2

oracle 25609     1 254 16:18:10 ?         6:21 ora_pr00_p1aaaaa2

oracle 25625     1 22 16:18:11 ?         0:50 ora_pr07_p1aaaaa2

oracle 25621     1 61 16:18:11 ?         0:54 ora_pr05_p1aaaaa2

oracle 25619     1 14 16:18:11 ?         1:07 ora_pr04_p1aaaaa2

oracle 25637     1 17 16:18:12 ?         2:16 ora_pr0d_p1aaaaa2

oracle 25635     1 55 16:18:12 ?         0:50 ora_pr0c_p1aaaaa2

oracle 25660     1 33 16:18:12 ?         1:15 ora_pr0n_p1aaaaa2

oracle 25631     1 22 16:18:12 ?         1:02 ora_pr0a_p1aaaaa2

oracle 25633     1 13 16:18:12 ?         1:03 ora_pr0b_p1aaaaa2

oracle 25615     1 27 16:18:11 ?         1:05 ora_pr02_p1aaaaa2

 

 

1.4 Checking database memory and parallelism-related parameters

Checking the memory-related parameters shows that the instance is using AMM (Automatic Memory Management), with only 1 GB of memory allocated.

The host has 128 GB of RAM, so this value is clearly far too small.

The parameters related to parallel rollback, parallel recovery and so on were also checked; all are at their default values.

SQL> select status,instance_name from gv$instance;

 

STATUS                   INSTANCE_NAME

———————— ——————————–

MOUNTED                  p1aaaaa2

 

SQL> show parameter memo

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

hi_shared_memory_address             integer                0

memory_max_target                    big integer            1G

memory_target                        big integer            1G

shared_memory_address                integer                0

 

SQL> show parameter sga

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

lock_sga                             boolean                FALSE

pre_page_sga                         boolean                FALSE

sga_max_size                         big integer            1G

sga_target                           big integer            0

SQL> show parameter pga

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

pga_aggregate_target                 big integer            0

SQL> col name for a40

SQL> select * from v$sgainfo;

NAME                                          BYTES RESIZE

—————————————- ———- ——

Fixed SGA Size                              2212392 No

Redo Buffers                                9756672 No

Buffer Cache Size                         159383552 Yes

Shared Pool Size                          381681664 Yes

Large Pool Size                            79691776 Yes

Java Pool Size                              4194304 Yes

Streams Pool Size                                 0 Yes

Shared IO Pool Size                               0 Yes

Granule Size                                4194304 No

Maximum SGA Size                         1068937216 No

Startup overhead in Shared Pool           228519832 No

Free SGA Memory Available                 432013312

12 rows selected.

———————————————

SQL> show parameter PARALLEL_MAX_SERVERS

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

parallel_max_servers                 integer                120

SQL> show parameter fast_start_parallel

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

fast_start_parallel_rollback         string                 LOW

SQL> show parameter cpu

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

cpu_count                            integer                32

parallel_threads_per_cpu             integer                2

resource_manager_cpu_allocation      integer                32

SQL>  show parameter RECOVERY_PARALLELISM

NAME                                 TYPE                   VALUE

———————————— ———————- ——————————

recovery_parallelism                 integer                0

 

At this point the picture is becoming clearer: the contention is most likely caused by parallel recovery, and the small memory allocation also hurts parallel recovery.

Because this environment is the newly migrated production system and a data-recovery test was in progress, a 10046 trace was taken on the recovery session to confirm whether parallelism was indeed the cause.

 

 

 

 

1.5 Tracing the recover process with event 10046

 

SQL> alter session set tracefile_identifier='rman7_10046';

 

Session altered.

 

SQL> alter session set events '10046 trace name context forever,level 12';

 

Session altered.

 

SQL> recover database using backup controlfile until cancel;

ORA-00279: change 10286665324215 generated at 12/19/2015 04:02:42 needed for

thread 2

ORA-00289: suggestion : +DATA/p2aaaaa/arch/2_67571_790336456.dbf

ORA-00280: change 10286665324215 for thread 2 is in sequence #67571

 

 

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

auto

 

Trace file analysis:

*** 2015-12-29 11:31:01.992

WAIT #11529215044981989888: nam='class slave wait' ela= 1009897 slave id=0 p2=0 p3=0 obj#=-1 tim=341067374554

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 108015 p1=0 p2=0 p3=0 obj#=-1 tim=341067482664

 

*** 2015-12-29 11:31:03.110

WAIT #11529215044981989888: nam='class slave wait' ela= 1009993 slave id=0 p2=0 p3=0 obj#=-1 tim=341068492679

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 109933 p1=0 p2=0 p3=0 obj#=-1 tim=341068602680

--- ela is reported in microseconds: the 'class slave wait' above (ela= 1009993) is about 1 second, while this 'parallel recovery control message reply' (ela= 109933) is about 0.11 second

*** 2015-12-29 11:31:04.230

WAIT #11529215044981989888: nam='class slave wait' ela= 1009912 slave id=0 p2=0 p3=0 obj#=-1 tim=341069612682

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 109877 p1=0 p2=0 p3=0 obj#=-1 tim=341069722665

 

*** 2015-12-29 11:31:05.350

WAIT #11529215044981989888: nam='class slave wait' ela= 1009978 slave id=0 p2=0 p3=0 obj#=-1 tim=341070732669

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 109908 p1=0 p2=0 p3=0 obj#=-1 tim=341070842664

 

*** 2015-12-29 11:31:06.470

WAIT #11529215044981989888: nam='class slave wait' ela= 1009980 slave id=0 p2=0 p3=0 obj#=-1 tim=341071852670

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 109948 p1=0 p2=0 p3=0 obj#=-1 tim=341071962664

 

*** 2015-12-29 11:31:07.590

WAIT #11529215044981989888: nam='class slave wait' ela= 1009983 slave id=0 p2=0 p3=0 obj#=-1 tim=341072972669

WAIT #11529215044981989888: nam='parallel recovery control message reply' ela= 109950 p1=0 p2=0 p3=0 obj#=-1 tim=341073082665

 

---------- Output formatted with tkprof:

The output below confirms that there are indeed heavy waits on parallel recovery.

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

—————————————-   Waited  ———-  ————

KSV master wait                                 8        0.00          0.00

parallel recovery control message reply       568        0.11         60.67

class slave wait                              573        1.01        554.60

SQL*Net break/reset to client                   8        0.00          0.00

SQL*Net message to client                       6        0.00          0.00

SQL*Net message from client                     6        7.20          7.20

latch free                                      2        0.00          0.00

master exit                                    65        3.01         10.12

********************************************************************************

 

The analysis above shows that the time is mostly spent on the class slave wait and parallel recovery control message reply wait events.

 

1.6 Preliminary summary

The analysis shows that while applying archived logs, most of the time is spent on the class slave wait and parallel recovery control message reply wait events.

Checking the relevant database parameters confirmed that the current memory settings are too small.

At the same time, because CPU_COUNT is 32, 32 parallel recovery processes are spawned by default.

Querying v$session during recover, together with the 10046 trace of the recover command, confirms that contention among the parallel recovery slaves is what makes recover so slow.

 

2. Resolution and verification

The analysis above essentially confirms that the problem is caused by parallel-recovery contention, so the following measures were taken:

increase the memory-related parameters first, then limit the number of parallel recovery processes, verifying the effect of each step.

 

2.1 Adjusting the memory-related parameters

First, AMM was disabled, the SGA was set to 50 GB, and the PGA to 10 GB.

No other changes were made at this point. After restarting the database and resuming archive log application, the recovery speed had already improved considerably.

Measured throughput was roughly 8 GB of archived logs per minute.
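As a minimal sketch of those parameter changes (the 50 GB/10 GB values are from this case; the exact commands were not recorded in the original, so this is the usual way to disable AMM, run as SYSDBA and followed by a restart):

```sql
-- Disable AMM and size SGA/PGA explicitly; takes effect after restart
alter system set memory_target=0 scope=spfile;
alter system set memory_max_target=0 scope=spfile;
alter system set sga_max_size=50G scope=spfile;
alter system set sga_target=50G scope=spfile;
alter system set pga_aggregate_target=10G scope=spfile;
```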

 

2.2 Further increasing recovery speed by limiting the degree of parallel log apply

By default, the number of archive-log apply processes is derived from the CPU_COUNT initialization parameter: the database starts as many apply processes as there are CPUs.

There are therefore two ways to control the number of log-apply processes:

1. temporarily change the value of cpu_count during the recovery;

2. explicitly specify the degree of parallelism in the recover command.

Since cpu_count is derived from the host CPU count at startup, it was left untouched here; instead, the degree of parallelism was limited directly in the recover command.

Recovery speed was compared at parallel degrees of 16, 8 and 4. At degrees 16 and 8 the speed was essentially the same, reaching roughly 10 GB of archived logs per minute.
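As a sketch, per the SQL*Plus RECOVER syntax the degree is appended to the recover command (degree 8 here is one of the values tested above):

```sql
-- Apply archived logs with an explicit degree of 8 instead of the CPU_COUNT default
recover database using backup controlfile until cancel parallel 8;

-- Serial apply, for comparison
recover database using backup controlfile until cancel noparallel;
```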

 

3. Related knowledge and summary

3.1 Problem summary

In most environments, instance memory is sized sensibly from experience when a database is created or restored to another host, so a pathologically slow RECOVER like this one is rare.

As a result, whether parallel recovery is fast or slow rarely gets any attention.

In resolving this incident, checking the session events in v$session during recover and tracing the recovery with event 10046 were the steps that pinpointed the cause. Then, by adjusting memory and testing different recover parallel degrees step by step, a reasonable degree of parallelism was found and the best recovery performance obtained.

 

3.2 Parameters related to the degree of parallelism during recovery:

Whether large or dead transactions are rolled back in parallel is controlled by the parameter fast_start_parallel_rollback.

It takes three values:

FALSE  --> parallel rollback is disabled; SMON performs serial recovery;

LOW    --> limits the maximum degree of parallelism to 2 * CPU_COUNT (the default in 11gR2);

HIGH   --> limits the maximum degree of parallelism to 4 * CPU_COUNT.

The corresponding rollback slaves appear as background processes named like ora_p025_orcl and ora_p022_orcl.
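A minimal sketch of checking and changing this parameter (it is dynamic, so ALTER SYSTEM takes effect without a restart):

```sql
show parameter fast_start_parallel_rollback

-- Fall back to serial rollback by SMON
alter system set fast_start_parallel_rollback=false;
```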

 

When the parameter is LOW or HIGH (2x or 4x the CPU count), the actual degree of parallel rollback is additionally capped by other parameters, notably PARALLEL_MAX_SERVERS (which itself is derived mainly from CPU_COUNT, PARALLEL_THREADS_PER_CPU and PGA_AGGREGATE_TARGET).

RECOVERY_PARALLELISM applies only to instance or crash recovery; media recovery is not affected by it.

Testing showed, however, that the number of parallel processes during a recover has little to do with the parameter values above.

According to the documentation, if a degree of parallelism is specified in the recover command, that degree is used; otherwise the database spawns one parallel recovery process per CPU, based on cpu_count.

http://docs.oracle.com/cd/E11882_01/backup.112/e10643/rcmsynta2001.htm#RCMRF140

PARALLEL

Specifies parallel recovery (default).

By default, the database uses parallel media recovery to improve performance

of the roll forward phase of media recovery. To override the default behavior of

performing parallel recovery, use the RECOVER with the NOPARALLEL option,

or RECOVER PARALLEL 0.

In parallel media recovery, the database uses a “division of labor” approach to

allocate different processes to different data blocks while rolling forward,

thereby making the operation more efficient. The number of processes is

derived from the CPU_COUNT initialization parameter, which by default equals

the number of CPUs on the system. For example, if parallel recovery is

performed on a system where CPU_COUNT is 4, and only one datafile is

recovered, then four spawned processes read blocks from the datafile and apply

redo.

Typically, recovery is I/O-bound on reads from and writes to data blocks.

Parallelism at the block level may only help recovery performance if it increases

total I/Os, for example, by bypassing operating system restrictions on

asynchronous I/Os. Systems with efficient asynchronous I/O see little benefit

from parallel media recovery.

 

3.3 Tracing RMAN commands with event 10046

"Odd" behavior during RMAN backup and restore can likewise be traced by enabling event 10046 from within the RMAN session:

RMAN> sql "alter session set tracefile_identifier=''rman_10046''";
RMAN> sql "alter session set events ''10046 trace name context forever,level 12''";
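To stop tracing afterwards (using the same doubled-single-quote escaping inside the RMAN sql command):

```sql
RMAN> sql "alter session set events ''10046 trace name context off''";
```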

 

 

3.4 References

RMAN Backup Performance (Doc ID 360443.1)

RMAN Restore Performance (Doc ID 740911.1)

Advise On How To Improve Rman Performance (Doc ID 579158.1)

How to Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues (Doc ID 1523462.1)

Interpreting Raw SQL_TRACE output (Doc ID 39817.1)

TKProf Interpretation (9i and above) (Doc ID 760786.1)

Troubleshooting slow archived-log application by the Oracle RMAN recover command

When importing data with IMPDP, we usually set a parallel degree, expecting the database to use parallelism to speed up the long-running steps such as loading data and creating indexes. In version 12.2, however, even with IMPDP parallelism set, the trace shows that indexes are always created serially rather than in parallel.

SQL> conn test/test@PDB1
SQL> create table a(m number,n number) parallel 4;
SQL> create index a_ind on a(m) parallel 3;
SQL> !expdp test/test@PDB1 dumpfile=b.dmp directory=my_dir
SQL> !impdp test/test@PDB1 directory=my_dir dumpfile=b.dmp parallel=2 TRACE=480301

The DW trace shows the index being created with parallel=1, not parallel=2!

CDB2_dw00_6658.trc
=====================
PARSING IN CURSOR #140037561274968 len=170 dep=2 uid=79 oct=1 lid=79
tim=841576694 hv=1135291776 ad='61d1c0b0' sqlid='apjngud1uqbc0'
CREATE TABLE "TEST"."A" ("M" NUMBER, "N" NUMBER) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "USERS" PARALLEL 4
END OF STMT

=====================
PARSING IN CURSOR #140037558550112 len=114 dep=2 uid=79 oct=9 lid=79
tim=842113538 hv=68235534 ad='6374e7a8' sqlid='0u96wjh212c8f'
CREATE INDEX "TEST"."A_IND" ON "TEST"."A" ("M") PCTFREE 10 INITRANS 2
MAXTRANS 255 TABLESPACE "USERS" PARALLEL 1 <=======PARALLEL 1, even if parallel=3 was set during index creation phase
END OF STMT

In 12.1.0.2, however, this problem does not occur; the specified parallel degree is honored:

R1201_dw00_29326.trc :
=====================
PARSING IN CURSOR #140427279060200 len=115 dep=2 uid=111 oct=9 lid=111
tim=8385394705 hv=1693801083 ad='77900cf8' sqlid='3t4ktqdkgaqmv'
CREATE INDEX "TEST"."A_IND" ON "TEST"."A" ("M") PCTFREE 10 INITRANS 2
MAXTRANS 255 TABLESPACE "SYSTEM" PARALLEL 2 <========PARALLEL 2
END OF STMT

This is an Oracle bug, and Oracle Development's explanation is that it is expected behavior because "we found it is faster this way"!

BUG 26091146 – IMPDP CREATE INDEX WITH PARALLEL 1 IGNORING COMMAND LINE PARALLEL=2, Development explained that this is an expected behavior supplying the following explanation:

“General support for parallel import of most object type, including indexes, is a 12.2 feature, which led to study of parallel creation of individual indexes. What was found was that using parallel index creation was generally slower than non-parallel. That led to a decision to backport the change to not use parallel index creation.”

In other words: while developing the 12.2 feature set, Oracle studied index creation during impdp and found that serial index creation is "generally" faster than parallel, so it decided that impdp should always create indexes serially, and afterwards set the index's parallel degree with 'ALTER INDEX ... PARALLEL n' so that queries can still run in parallel.

Oracle Development therefore does not consider this a bug. Keep it in mind when importing data: if time is tight, consider other ways to build the indexes in parallel (for example, import without indexes via IMPDP, extract the index DDL to a SQL file, and run it manually with the desired parallelism).
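A sketch of that workaround using the demo dump from above (the sqlfile name is made up for illustration):

```sql
-- 1. Import everything except the indexes
SQL> !impdp test/test@PDB1 directory=my_dir dumpfile=b.dmp exclude=index

-- 2. Write the index DDL to a script instead of executing it
SQL> !impdp test/test@PDB1 directory=my_dir dumpfile=b.dmp include=index sqlfile=create_idx.sql

-- 3. Edit the PARALLEL clause in create_idx.sql to the desired degree, run it,
--    then reset the degree for normal query execution, e.g.:
SQL> alter index test.a_ind noparallel;
```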

Why IMPDP in Oracle 12.2 does not create indexes in parallel during a parallel import: an analysis

There is plenty of debate about whether rebuilding indexes is worthwhile. In general, B-tree indexes rarely need to be rebuilt, essentially because they are largely self-managing, or self-balancing.

The most common reasons given for rebuilding an index are:
- index fragmentation keeps growing
- the index keeps growing, and deleted space is not reused
- the index clustering factor is out of sync

In fact, most indexes remain balanced and compact, because freed leaf entries can be reused. Insert/update and delete activity does fragment the free space around index blocks, but in general those fragments are correctly reused.
The clustering factor reflects how the table data is ordered with respect to a given index key. Rebuilding the index has no effect on the clustering factor; it can only be changed by reorganizing the table's data.

Moreover, the impact of rebuilding an index is considerable; please read the notes below carefully:

1. Most scripts rely on the index_stats dynamic table, which is populated with:

analyze index ... validate structure;

Although this is a valid way to inspect an index, it takes an exclusive table lock while analyzing the index. For large indexes in particular the impact can be enormous, since no DML is permitted on the table in the meantime. The ONLINE variant avoids locking the table, but may take extra time.

2. A direct result of rebuilding an index is that REDO activity may increase and overall system performance may suffer.

Insert/update/delete operations make an index evolve through block splits and growth. After a rebuild the index is packed more tightly; however, as DML against the table continues, the index must split again until it reaches equilibrium. As a result, redo activity increases, and the extra block splits are more likely to affect performance directly, since more I/O, CPU and so on are spent on the index. After a while the index may run into the same "problems" again, get flagged for another rebuild, and the vicious cycle continues. It is therefore usually best to leave the index in its natural equilibrium, or at least to avoid rebuilding indexes on a regular schedule.

3. An index coalesce is usually preferred over an index rebuild. Coalescing has these advantages:

- it does not need nearly twice the index's disk space
- it can run online
- rather than rebuilding the index structure, it simply merges index leaf blocks as quickly as possible, avoiding the heavy overhead described in point 2 above.

Note: a rebuild is still required in some cases, for example to move an index to a different tablespace.
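A minimal example of coalescing instead of rebuilding (the index name is illustrative):

```sql
-- Merge adjacent, sparsely filled leaf blocks in place; runs online,
-- no extra disk space required
alter index scott.t_idx coalesce;
```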

In summary, rebuilding indexes on a schedule is strongly discouraged; use proper diagnostic tools instead. Below is an Oracle-recommended script for analyzing index structure, together with usage notes:

 

1). Create a user that will own the index statistics tables.

2). Grant the "dba" role to this user, and grant select on dba_tablespaces to <user>.

3). Run the code in the script section.

If the script is run as a user other than SYS, you may hit an ORA-942 error while creating the package body.

Roles are disabled when a definer-rights procedure is invoked, so the package body creation will fail unless the following SELECT privileges are granted explicitly:

grant select on dba_tablespaces to <user>;
grant select on dba_indexes to <user>;
grant select on dba_tables to <user>;
grant select on dba_ind_columns to <user>;
grant select on dba_tab_cols to <user>;
grant select on dba_objects to <user>;
grant select on v_$parameter to <user>;

Also gather statistics on the indexes before running the analysis script, for example:

 

SQL> exec dbms_stats.gather_schema_stats('SCOTT');

SQL> exec index_util.inspect_schema ('SCOTT');

 

Sample script:

 

CREATE TABLE index_log (
 owner          VARCHAR2(30),
 index_name     VARCHAR2(30),
 last_inspected DATE,
 leaf_blocks    NUMBER,    
 target_size    NUMBER,
 idx_layout     CLOB);

ALTER TABLE index_log ADD CONSTRAINT pk_index_log PRIMARY KEY (owner,index_name);

CREATE TABLE index_hist (
 owner          VARCHAR2(30),
 index_name     VARCHAR2(30),
 inspected_date DATE,
 leaf_blocks    NUMBER,    
 target_size    NUMBER,
 idx_layout     VARCHAR2(4000));

ALTER TABLE index_hist ADD CONSTRAINT pk_index_hist PRIMARY KEY  (owner,index_name,inspected_date);

--
-- Variables:
--  vMinBlks: Specifies the minimum number of leaf blocks for scanning the index
--            Indexes below this number will not be scanned/reported on
--  vScaleFactor: The scaling factor, defines the threshold of the estimated leaf block count
--                to be smaller than the supplied fraction of the current size.
--  vTargetUse : Supplied percentage utilisation. For example 90% equates to the default pctfree 10
--  vHistRet : Defines the number of records to keep in the INDEX_HIST table for each index entry
--

CREATE OR REPLACE PACKAGE index_util AUTHID CURRENT_USER IS
vMinBlks     CONSTANT POSITIVE := 1000;
vScaleFactor CONSTANT NUMBER := 0.6;
vTargetUse   CONSTANT POSITIVE := 90;  -- equates to pctfree 10  
vHistRet     CONSTANT POSITIVE := 10;  -- (#) records to keep in index_hist
 procedure inspect_schema (aSchemaName IN VARCHAR2);
 procedure inspect_index (aIndexOwner IN VARCHAR2, aIndexName IN VARCHAR2, aTableOwner IN VARCHAR2, aTableName IN VARCHAR2, aLeafBlocks IN NUMBER);
END index_util;
/

CREATE OR REPLACE PACKAGE BODY index_util IS
procedure inspect_schema (aSchemaName IN VARCHAR2) IS
 begin
 FOR r IN (select table_owner, table_name, owner index_owner, index_name, leaf_blocks
           from dba_indexes  
           where owner = upper(aSchemaname)
             and index_type in ('NORMAL','NORMAL/REV','FUNCTION-BASED NORMAL')
             and partitioned = 'NO'  
             and temporary = 'N'  
             and dropped = 'NO'  
             and status = 'VALID'  
             and last_analyzed is not null  
           order by owner, table_name, index_name) LOOP

   IF r.leaf_blocks > vMinBlks THEN
   inspect_index (r.index_owner, r.index_name, r.table_owner, r.table_name, r.leaf_blocks);
   END IF;
  END LOOP;
 commit;
end inspect_schema;
procedure inspect_index (aIndexOwner IN VARCHAR2, aIndexName IN VARCHAR2, aTableOwner IN VARCHAR2, aTableName IN VARCHAR2, aLeafBlocks IN NUMBER) IS
 vLeafEstimate number;  
 vBlockSize    number;
 vOverhead     number := 192; -- leaf block "lost" space in index_stats
 vIdxObjID     number;
 vSqlStr       VARCHAR2(4000);
 vIndxLyt      CLOB;
 vCnt          number := 0;
  TYPE IdxRec IS RECORD (rows_per_block number, cnt_blocks number);
  TYPE IdxTab IS TABLE OF IdxRec;
  l_data IdxTab;
begin  
  select a.block_size into vBlockSize from dba_tablespaces a,dba_indexes b where b.index_name=aIndexName and b.owner=aIndexOwner and a.tablespacE_name=b.tablespace_name;
 select round (100 / vTargetUse *       -- assumed packing efficiency
              (ind.num_rows * (tab.rowid_length + ind.uniq_ind + 4) + sum((tc.avg_col_len) * (tab.num_rows) )  -- column data bytes  
              ) / (vBlockSize - vOverhead)  
              ) index_leaf_estimate  
   into vLeafEstimate  
 from (select  /*+ no_merge */ table_name, num_rows, decode(partitioned,'YES',10,6) rowid_length  
       from dba_tables
       where table_name  = aTableName  
         and owner       = aTableOwner) tab,  
      (select  /*+ no_merge */ index_name, index_type, num_rows, decode(uniqueness,'UNIQUE',0,1) uniq_ind  
       from dba_indexes  
       where table_owner = aTableOwner  
         and table_name  = aTableName  
         and owner       = aIndexOwner  
         and index_name  = aIndexName) ind,  
      (select  /*+ no_merge */ column_name  
       from dba_ind_columns  
       where table_owner = aTableOwner  
         and table_name  = aTableName
         and index_owner = aIndexOwner   
         and index_name  = aIndexName) ic,  
      (select  /*+ no_merge */ column_name, avg_col_len  
       from dba_tab_cols  
       where owner = aTableOwner  
         and table_name  = aTableName) tc  
 where tc.column_name = ic.column_name  
 group by ind.num_rows, ind.uniq_ind, tab.rowid_length;

 IF vLeafEstimate < vScaleFactor * aLeafBlocks THEN
  select object_id into vIdxObjID
  from dba_objects  
  where owner = aIndexOwner
    and object_name = aIndexName;
   vSqlStr := 'SELECT rows_per_block, count(*) blocks FROM (SELECT /*+ cursor_sharing_exact ' ||
             'dynamic_sampling(0) no_monitoring no_expand index_ffs(' || aTableName ||
             ',' || aIndexName || ') noparallel_index(' || aTableName ||
             ',' || aIndexName || ') */ sys_op_lbid(' || vIdxObjID ||
             ', ''L'', ' || aTableName || '.rowid) block_id, ' ||
             'COUNT(*) rows_per_block FROM ' || aTableOwner || '.' || aTableName || ' GROUP BY sys_op_lbid(' ||
             vIdxObjID || ', ''L'', ' || aTableName || '.rowid)) group by rows_per_block order by rows_per_block';
   execute immediate vSqlStr BULK COLLECT INTO l_data;
  vIndxLyt := '';

   FOR i IN l_data.FIRST..l_data.LAST LOOP
    vIndxLyt := vIndxLyt || l_data(i).rows_per_block || ' - ' || l_data(i).cnt_blocks || chr(10);
   END LOOP;

   select count(*) into vCnt from index_log where owner = aIndexOwner and index_name = aIndexName;

   IF vCnt = 0   
    THEN insert into index_log values (aIndexOwner, aIndexName, sysdate, aLeafBlocks, round(vLeafEstimate,2), vIndxLyt);
    ELSE vCnt := 0;

         select count(*) into vCnt from index_hist where owner = aIndexOwner and index_name = aIndexName;

         IF vCnt >= vHistRet THEN
           delete from index_hist
           where owner = aIndexOwner
             and index_name = aIndexName
             and inspected_date = (select MIN(inspected_date)
                                   from index_hist
                                   where owner = aIndexOwner
                                     and index_name = aIndexName);
         END IF;

          insert into index_hist select * from index_log where owner = aIndexOwner and index_name = aIndexName;

         update index_log  
         set last_inspected = sysdate,
             leaf_blocks = aLeafBlocks,
             target_size = round(vLeafEstimate,2),
             idx_layout = vIndxLyt
        where owner = aIndexOwner and index_name = aIndexName;

   END IF;
  END IF;
 END inspect_index;
END index_util;
/

Sample output from running the script:

 

SQL> select owner, index_name, last_inspected, leaf_blocks, target_size 

  from index_log

OWNER INDEX_NAME LAST_INSP LEAF_BLOCKS TARGET_SIZE

------------------------------ ------------------------------ --------- ----------- -----------

SYS I_ARGUMENT1 17-JUN-10 432 303

SYS I_ARGUMENT2 17-JUN-10 282 186

SYS I_COL1 17-JUN-10 288 182

SYS I_DEPENDENCY1 17-JUN-10 109 103

SYS I_DEPENDENCY2 17-JUN-10 136 95

SYS I_H_OBJ#_COL# 17-JUN-10 258 104

SYS WRH$_SQL_PLAN_PK 17-JUN-10 118 59

SYS WRI$_ADV_PARAMETERS_PK 17-JUN-10 210 121

SYS I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST 17-JUN-10 2268 1313

SYS I_WRI$_OPTSTAT_H_ST 17-JUN-10 1025 963

SYS I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST 17-JUN-10 338 191

 

SQL> select idx_layout from index_log where owner='SCOTT' and index_name='T_IDX';



IDX_LAYOUT 

------------

104 - 1 

204 - 1 

213 - 1 

219 - 1 

221 - 2 

222 - 1 

223 - 2 

224 - 1 

225 - 1 

230 - 1 

231 - 3 

235 - 3 

236 - 1 

238 - 3 

239 - 2 

241 - 1 

242 - 2 

243 - 1 

245 - 3 

247 - 1 

249 - 1 

250 - 1 

252 - 3 

255 - 1 

257 - 2 

263 - 2 

264 - 1 

267 - 1 

268 - 1 

276 - 1 

283 - 1 

296 - 1 

345 - 1
SQL> select to_char(inspected_date,'DD-MON-YYYY HH24:MI:SS') inspected_date, 

leaf_blocks, target_size 

from index_hist 

where index_name='T_IDX';



INSPECTED_DATE LEAF_BLOCKS TARGET_SIZE

-------------------- ----------- -----------

10-MAR-2010 10:04:04 432 303

10-APR-2010 10:04:03 435 430

10-MAY-2010 10:04:02 431 301

 

 

 

 

 

 

Whether index rebuilds are necessary, their impact, and a script for analyzing index structure