
All posts by Guang Cai Li - page 7

In an Oracle database, a sudden drop in SQL efficiency is usually caused by a change in the target SQL's execution plan. (A large change in execution counts after an application upgrade can also cause it, but that is no longer a single-SQL problem.)
In that case, the usual fix is to re-gather statistics or modify the SQL (add hints). Sometimes, however, gathering statistics does not help and the SQL cannot be modified; other methods are then needed to fix the execution plan.

The common methods for fixing an execution plan are:

9i: stored outline
10g: SQL Profile (automatic/manual)
11g: SPM / SQL Profile (automatic/manual)

1. Using a SQL Profile to stabilize the execution plan
SQL Profiles are the evolution of 9i stored outlines; everything a stored outline can do, a SQL Profile can do as well.
Advantages of SQL Profiles:
easier to generate, change, and control
better support for SQL statements and a wider range of applicability

They make it easy to lock or stabilize an execution plan, forcing the target SQL to run with a specified plan even when its text cannot be modified.
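Under the hood, the COE scripts create such a profile by attaching a set of outline hints to the SQL text via DBMS_SQLTUNE.IMPORT_SQL_PROFILE. A minimal sketch of that call; the SQL text, hint, and profile name are illustrative placeholders, not taken from a real system:

```sql
DECLARE
  h SYS.SQLPROF_ATTR;
BEGIN
  -- hypothetical outline hint forcing an index access path
  h := SYS.SQLPROF_ATTR('INDEX(@"SEL$1" "T8"@"SEL$1" "IDX_T8")');
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => 'select object_id,object_name from t8 where object_id between 300 and 400',
    profile     => h,
    name        => 'PROFILE_T8_DEMO',
    force_match => TRUE);  -- match statements differing only in literals
END;
/
```

force_match=>TRUE is the same change recommended for the generated COE scripts: it lets the profile match statements that differ only in literal values.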

#############
The COE scripts shipped with SQLT - sqlt\utl directory:
1. coe_xfr_sql_profile.sql
Used when the same SQL already has multiple execution plans (in memory or in AWR data) in the current database, to bind one of them;
by swapping the hint data it can also transplant the better plan of another SQL (usually a hinted copy) onto the target SQL.

2. coe_load_sql_profile.sql
Given two SQLs in the same database (usually one of them hinted), replaces the target SQL's current plan with the better, hinted plan.
This plan replacement also works across databases.

3. coe_load_sql_baseline.sql
Same idea: given two SQLs in the same database (usually one of them hinted), attaches the better, hinted plan to the target SQL, here as a SQL plan baseline.
This also works across databases.
Script steps:
1. Get the original SQL text and the plan hash value (PHV) of the modified SQL
2. Manually create a baseline for modified_sql
3. Use DBMS_SPM.ALTER_SQL_PLAN_BASELINE with attribute_value to remap the baseline to the original SQL
4. Use CREATE_STGTAB_BASELINE to create a staging table and export the baseline data
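The remapping in step 3 is what lets a plan captured from a hinted statement serve the original text. The same effect can be had directly with the sql_text overload of DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (the technique of Doc ID 787692.1); a sketch, where the hinted statement's sql_id/plan_hash_value and the original text are placeholders:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- sql_id/plan_hash_value identify the HINTED cursor still in the shared pool;
  -- sql_text must match the ORIGINAL (unhinted) statement exactly.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => 'd3124k0m9ppwf',
               plan_hash_value => 4127820982,
               sql_text        => 'select object_id,object_name from t8 where object_id between 300 and 400');
  DBMS_OUTPUT.PUT_LINE('plans loaded: ' || l_plans);
END;
/
```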

Reference MOS documents:
Automatic SQL Tuning and SQL Profiles (Doc ID 271196.1)

Whitepaper: SQL Plan Management in Oracle Database 11g
Master Note: Plan Stability Features (Including SQL Plan Management (SPM)) (Doc ID 1359841.1)
FAQ: SQL Plan Management (SPM) Frequently Asked Questions (Doc ID 1524658.1)
How to Use SQL Plan Management (SPM) - Plan Stability Worked Example (Doc ID 456518.1)
How to Accept & Unaccept Plans in SQL Plan Baselines (Doc ID 1309799.1)
How to Drop Plans from the SQL Plan Management (SPM) Repository (Doc ID 790039.1)
Loading Hinted Execution Plans into SQL Plan Baseline. (Doc ID 787692.1)
–>Document 215187.1 SQLTXPLAIN offers a script ( ./utl/coe_load_sql_baseline.sql ) to automate the steps
Transporting SQL PLAN Baselines from One Database to Another. (Doc ID 880485.1)

Migrating SQL Plan Baselines:
How to Load SQL Plans into SQL Plan Management (SPM) from the Automatic Workload Repository (AWR) (Doc ID 789888.1)
How to Move a SQL Tuning Set from One Database to Another (Doc ID 751068.1)
Automatic SQL Tuning and SQL Profiles (Doc ID 271196.1)
How to Move SQL Profiles from One Database to Another (Including to Higher Versions) (Doc ID 457531.1)

 

1. Test of fixing an execution plan with the COE script coe_xfr_sql_profile.sql:

-- Note: change force_match=>FALSE in the generated script to force_match=>TRUE
-- Dropping a SQL Profile:
SQL> exec dbms_sqltune.drop_sql_profile(name => 'coe_2r2qpwfmc2fwp_190799060');
PL/SQL procedure successfully completed.

Case 1: memory or AWR already holds different plans for the same SQL, among them the correct one. Usage:
1. Find the SQL_ID from the SQL_TEXT, run the command below to generate a script, then run that script to fix the plan:
SQL> @coe_xfr_sql_profile.sql

Parameter 1:
SQL_ID (required)

Enter value for 1: d3124k0m9ppwf

PLAN_HASH_VALUE AVG_ET_SECS
————— ———–
4127820982 .035

Parameter 2: -- enter the correct one of the plan hash values listed for this SQL_ID: 4127820982
Execute coe_xfr_sql_profile_d3124k0m9ppwf_4127820982.sql
on TARGET system in order to create a custom SQL Profile
with plan 4127820982 linked to adjusted sql_text.

COE_XFR_SQL_PROFILE completed.
Run the generated coe_xfr_sql_profile_d3124k0m9ppwf_4127820982.sql in a SQL*Plus window, then verify.

Verify SQL execution:
select EXECUTIONS,PLAN_HASH_VALUE,ELAPSED_TIME/1000000,ELAPSED_TIME/1000000/EXECUTIONS,LAST_ACTIVE_TIME,ROWS_PROCESSED
from v$sql where EXECUTIONS>0 and sql_id='g87an0j5djjpm' order by last_active_time desc;

 

2. Fixing an execution plan with SPM

Oracle introduced SPM (SQL Plan Management) in 11g as a proactive way to stabilize execution plans: only verified plans are allowed to run.
When a new plan appears for the target SQL, it is not used until a user decides to accept it.
SPM therefore stabilizes plans proactively while still allowing a potentially more efficient new plan to be adopted after verification.
With SPM enabled, each SQL gets SQL plan baselines that store its plans; a SQL with multiple plans may have multiple rows in dba_sql_plan_baselines.
The ENABLED/ACCEPTED columns of dba_sql_plan_baselines describe whether the plan of a baseline can be used.
A plan is used only when ENABLED and ACCEPTED are both YES. When one SQL has more than one plan with ENABLED/ACCEPTED both YES, Oracle picks the plan with the lower cost.
In Oracle 11g and later there are two ways to generate SQL plan baselines for a target SQL:
1. Automatic capture, with manual switching between plans
2. Manual generation / bulk loading (suited to major version upgrades, to guarantee plans do not change after the upgrade)
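For route 2 (manual generation), the simplest path loads a plan straight from the cursor cache; a hedged sketch reusing identifiers that appear elsewhere in this post:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- loads the named plan of this sql_id into a baseline;
  -- omit plan_hash_value to load all cached plans of the statement
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => 'g87an0j5djjpm',
               plan_hash_value => 4127820982);
  DBMS_OUTPUT.PUT_LINE('plans loaded: ' || l_plans);
END;
/
```

Plans loaded this way arrive with ACCEPTED=YES, after which the evolve/alter commands in the summary manage them.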
########################################
Command summary:
show parameter use_sql_plan
alter session set optimizer_capture_sql_plan_baselines=TRUE;
alter session set optimizer_use_sql_plan_baselines=FALSE;
set linesize 300
set pagesize 1000
select SQL_HANDLE,SQL_TEXT,PLAN_NAME,ORIGIN,ENABLED,ACCEPTED from dba_sql_plan_baselines
where sql_text like 'select object_id,object_name from t8 where object_id between 300 and 400%';

### Set the ACCEPTED column of dba_sql_plan_baselines to YES. (It can only go from NO to YES, not from YES back to NO)
var temp varchar2(1000);
exec :temp := dbms_spm.evolve_sql_plan_baseline(sql_handle=>'SQL_4fa41df4e6247591',plan_name=>'SQL_PLAN_4z90xymm28xcjec50a0c0',verify=>'NO',commit=>'YES');

### Set the ENABLED column of dba_sql_plan_baselines to YES. -- sql_handle and plan_name are required
var temp varchar2(1000);
exec :temp := dbms_spm.alter_sql_plan_baseline(sql_handle=>'SQL_4fa41df4e6247591',plan_name=>'SQL_PLAN_4z90xymm28xcj9576b900',attribute_name=>'ENABLED',attribute_value=>'YES');
### Set the ENABLED column of dba_sql_plan_baselines to NO, disabling that plan. -- sql_handle and plan_name are required
var temp varchar2(1000);
exec :temp := dbms_spm.alter_sql_plan_baseline(sql_handle=>'SQL_4fa41df4e6247591',plan_name=>'SQL_PLAN_4z90xymm28xcjec50a0c0',attribute_name=>'ENABLED',attribute_value=>'NO');

exec DBMS_STATS.GATHER_TABLE_STATS ( ownname=>'BYS',tabname =>'T8',estimate_percent=>100,cascade=>TRUE);
exec DBMS_STATS.SET_INDEX_STATS( ownname=>'BYS',indname =>'IDX_T8',clstfct=>10000000,no_invalidate=>false);

Verify SQL execution:
select EXECUTIONS,PLAN_HASH_VALUE,ELAPSED_TIME/1000000,ELAPSED_TIME/1000000/EXECUTIONS,LAST_ACTIVE_TIME,ROWS_PROCESSED
from v$sql where EXECUTIONS>0 and sql_id='g87an0j5djjpm' order by last_active_time desc;

Introduction and testing of methods to fix Oracle execution plans

1. HANGANALYZE introduction and commands

This section draws mainly on the MOS document: Interpreting HANGANALYZE trace files to diagnose hanging and performance problems for 9i and 10g. (Doc ID 215858.1)

1.1 When to use hanganalyze:

When the database has a severe performance problem, the cause may be a hang. A trace file generated by HANGANALYZE quickly shows whether two or more processes are deadlocked and how many other processes may be affected. HANGANALYZE also tells you whether, instead of a hang, a performance analysis is needed to diagnose the real problem.

 

Often the database is not actually hung; it is just slow because of a performance problem and takes a long time to respond. The hanganalyze tool uses kernel calls to detect which resource each session is waiting for and reports the relationships between holders and waiters. It also dumps the state of the more "interesting" processes, depending on the analysis level used.

 

A "true" hang can be understood as an internal database deadlock. For deadlocks from ordinary DML, the Oracle server automatically detects the dependency and rolls one operation back, ending the mutual wait. But when the deadlock involves kernel-level resources (such as pins or latches), Oracle cannot detect and resolve it automatically.

 

The hanganalyze tool has been available since 8.1.6; 9i enhanced its RAC diagnostics so that running it on any one node reports the sessions of the whole cluster.

 

1.2 SYSTEMSTATE vs. HANGANALYZE for hang problems:

SYSTEMSTATE has the following shortcomings when handling hangs:

SYSTEMSTATE dump reads the SGA in a "dirty" manner, so it may be inconsistent when the time to dump all the processes is long.

(a long dump may read inconsistent data)

SYSTEMSTATE dump usually dumps a lot of information (most of which is not needed to determine the source of the hang), which makes it difficult to determine the dependencies between processes quickly.

(a lot of irrelevant information is dumped, which slows analysis)

SYSTEMSTATE dumps do not identify "interesting" processes on which to perform additional dumps (ERRORSTACK or PROCESS STATE).

1.3 How HANGANALYZE handles hang problems:

HANGANALYZE uses internal kernel calls to determine whether sessions are waiting for a resource and reports the relationships between blockers and waiters. In addition it dumps the "interesting" processes and, depending on the level used, performs automatic PROCESSSTATE dumps and ERRORSTACKS on them.

Note: HANGANALYZE is not a replacement for the SYSTEMSTATE dump; rather, it serves as a road map for interpreting a SYSTEMSTATE while diagnosing complex problems.
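In practice the two are often taken together in one SYSDBA session, hanganalyze first (cheap) and a systemstate dump second (expensive). A sketch for a single instance; the dump levels shown are the commonly recommended ones, not taken from this post:

```sql
-- connect with: sqlplus -prelim / as sysdba   (works even when the DB is hung)
oradebug setmypid
oradebug unlimit
oradebug hanganalyze 3
-- wait a minute, then repeat to see whether the wait chains persist
oradebug hanganalyze 3
-- systemstate level 266 = 256 (short stacks) + 10; 258 is a lighter alternative
oradebug dump systemstate 266
oradebug tracefile_name
```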

 

 

1.4 HANGANALYZE command examples and levels

Session level:
SQL> ALTER SESSION SET EVENTS 'immediate trace name HANGANALYZE level <level>';
Instance level (requires SYSDBA: sqlplus -prelim / as sysdba, or sqlplus / as sysdba):
SQL> ORADEBUG hanganalyze <level>

 

Single instance:

oradebug setmypid

oradebug unlimit

oradebug tracefile_name

oradebug hanganalyze 3

 

 

Cluster-wide:
SQL>ORADEBUG setmypid
SQL>ORADEBUG setinst all
SQL>ORADEBUG -g def hanganalyze <level>

Or:

For 11g:

sqlplus '/ as sysdba'

oradebug setorapname reco

oradebug  unlimit

oradebug -g all hanganalyze 3

exit

 

For 10g, run oradebug setmypid instead of oradebug setorapname reco:

sqlplus '/ as sysdba'

oradebug setmypid

oradebug unlimit

oradebug -g all hanganalyze 3

oradebug tracefile_name

exit

The meaning of each level:
1-2: hanganalyze output only; no process dumps
3: Level 2 + dumps of processes in the IN_HANG state
4: Level 3 + dumps of blockers in the wait chains (states LEAF/LEAF_NW/IGN_DMP)
5: Level 4 + dumps of all processes in wait chains (state NLEAF)

10: Dump all processes (IGN state)

Oracle recommends not exceeding level 3; level 3 usually suffices, and higher levels put a significant load on the system.

 

1.5 Analyzing a HANGANALYZE trace file - the State of ALL nodes section

Trace from a Linux x86-64 + 11.2.0.4 single instance, simulating a TX wait with an uncommitted delete:

State of ALL nodes  =====》》》state of all nodes; on a single instance identical to the State of LOCAL nodes section

([nodenum]/cnode/sid/sess_srno/session/ospid/state/[adjlist]):

[0]/1/1/7/0xc38b1360/1966/LEAF/  =====》》》 LEAF: the lock-holding session, i.e. it waits for no session or resource

[31]/1/32/31/0xc3852840/2211/NLEAF/[0][41] =====》》》NLEAF: waiting on sessions nodenum=0/41

[41]/1/42/11/0xc3833f80/2038/NLEAF/[0] =====》》》NLEAF: waiting on session nodenum=0

 

Meaning of each column in this section:

nodenum  =====》》》a sequential number HANGANALYZE assigns to identify each session

cnode =====》》》the instance number, starting from 1; in RAC it shows 1, 2, 3, and so on; available since Oracle 9i

sid         = Session ID

sess_srno   = Serial#

session: most likely the PADDR of the process

ospid        = OS Process Id

state       = State of the node; the detailed states are listed below.

[adjlist]: populated when state is NLEAF, holding the [nodenum] of the blocking session (there may be several blockers; in RAC the [nodenum] shown may belong to the other node, in which case it will not be found in that instance's State of LOCAL nodes section)

predecessor: the waiter node, i.e. the node waiting on this session  =====》》》this column was not observed in 11.2.0.3/11.2.0.4

 

The different values of the state column:

IN_HANG: the node is in a deadlock, and usually one or more other nodes (blockers) are in the same state

Lock-holding sessions:

LEAF: a lock-holding session, i.e. one that waits for no session or resource

LEAF_NW: also a lock-holding session

LEAF/LEAF_NW: such a node is usually a blocker; the "predecessor" column tells whether it really is one. LEAF means the node is waiting on no other resource, while LEAF_NW may be waiting on nothing or may be using CPU.

Blocked sessions:

NLEAF: a waiting (blocked) session; its adjlist column holds the [nodenum] of its blocker(s);

when many rows show state NLEAF, the database is more likely suffering a performance problem than a hang.

Other states:

SINGLE_NODE/SINGLE_NODE_NW: This can be described the same as LEAF and LEAF_NW, except that they don't have processes depending on them

IGN/IGN_DMP: such sessions are usually considered idle, unless their adjlist holds a node; a non-idle one means the node in its adjlist is waiting for some other node to release a resource.

 

 

 

 

2. HANGANALYZE analysis example 1

Example 1: Linux 32-bit + 10.2.0.1.0 single instance, trace file of a TX lock produced by an update:

State of nodes
([nodenum]/cnode/sid/sess_srno/session/ospid/state/start/finish/[adjlist]/predecessor):
[145]/0/146/5/0x786fc944/3858/LEAF/9/10//153

--the LEAF here means this SID blocks predecessor=153; nodenum 153 maps to SID=154, which is the blocked session
[148]/0/149/1/0x78700160/3806/SINGLE_NODE/11/12//none
[150]/0/151/1/0x787026c8/3804/SINGLE_NODE/13/14//none
[153]/0/154/5/0x78705ee4/3903/NLEAF/15/16/[145]/none

--the NLEAF here means this session is blocked; its adjlist nodenum=145 maps to SID=146, which is the blocker.

 

Example 2: HP-UX + 9.2.0.6.0 single instance, hang while splitting a partition

Step 1: after the split hung, hanganalyze was run and showed:

HANG ANALYSIS:
==============
Cycle 1 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/329/43816/0x4d6b5638/23487/row cache lock>
–<0/254/19761/0x4d687438/23307/library cache lock>
Cycle 2 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/295/57125/0x4d6b8978/19237/row cache lock>
Cycle 3 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/295/57125/0x4d6b8978/19237/row cache lock>
Open chains found:
Other chains found:
Chain 1 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/312/10459/0x4d69f9b8/25247/No Wait>
Extra information that will be dumped at higher levels:
[level  3] :   4 node dumps — [IN_HANG]
[level  5] :   1 node dumps — [SINGLE_NODE] [SINGLE_NODE_NW] [IGN_DMP]
[level 10] : 223 node dumps — [IGN]

State of nodes
([nodenum]/cnode/sid/sess_srno/session/ospid/state/start/finish/[adjlist]/predecessor):
[0]/0/1/1/0x4d7146c0/5132/IGN/1/2//none
……………………………………………………
[238]/0/239/57618/0x4d7b18a0/13476/IN_HANG/395/402/[294][238][328][253]/none
……………………………………………………
[253]/0/254/19761/0x4d7bb710/23307/IN_HANG/397/400/[328][238][294]/294
………………………………………………………………
[294]/0/295/57125/0x4d7d6820/19237/IN_HANG/396/401/[294][238][253]/238
[328]/0/329/43816/0x4d7ecf40/23487/IN_HANG/398/399/[253]/253
………………………………………………………………

cycle marks a deadlock that Oracle has determined internally. Our manually executed split, process 295, is in it. Let's see what the other processes are doing, e.g. 329:
SQL> select machine,status,program,sql_text from v$session a,v$sqlarea b
2  where a.sql_address=b.address and a.sid=329;
MACHINE   STATUS    PROGRAM                              SQL_TEXT
--------- -------  ---------         -------------------------------------------------------------------
hl_rdb01  ACTIVE   sqlplus@hl_rdb01(TNSV1-V3)   ALTER TABLE A_PDA_SP_STATS SPLIT PARTITION P_MAX AT(20090609) INTO(PARTITION P_20090608  TABLESPACE TS_DATA_A  ,PARTITION P_MAX TABLESPACE TS_DATA_A)
SQL> select event from v$session_wait where sid=329;
EVENT
--------------------------------------------
row cache lock
So session 329 is also running a split statement, yet a colleague said he had already killed the earlier failed script completely. Presumably its process died inside the database without fully releasing its resources.
After killing process 329 things were still stuck, so we ran hanganalyze again:

 

==============
HANG ANALYSIS:
==============
Cycle 1 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/295/57125/0x4d6b8978/19237/row cache lock>
Cycle 2 :<cnode/sid/sess_srno/proc_ptr/ospid/wait_event>:
<0/254/19761/0x4d687438/23307/library cache lock>
–<0/239/57618/0x4d6b74f8/13476/row cache lock>

([nodenum]/cnode/sid/sess_srno/session/ospid/state/start/finish/[adjlist]/predecessor):

[238]/0/239/57618/0x4d7b18a0/13476/IN_HANG/395/402/[294][238][328][253]/none
[253]/0/254/19761/0x4d7bb710/23307/IN_HANG/397/400/[328][238][294]/294
[294]/0/295/57125/0x4d7d6820/19237/IN_HANG/396/401/[294][238][253]/238
[328]/0/329/43816/0x4d7ecf40/23487/IN_HANG/398/399/[253]/253

 

Analyzing the STATE OF NODES section, we find:

nodenum=238 is waiting on [294][238][328][253] and blocks no other session

nodenum=294, the process running the partition operation, blocks nodenum=238 while itself waiting for nodenum=[294][238][253] to release resources

nodenum=253 blocks nodenum=294 (the session doing the split) while waiting on nodenum=[328][238][294]

nodenum=328 blocks nodenum=253 and is also blocked by 253

In short: nodenum=328 blocks nodenum=253, and nodenum=253 blocks nodenum=294.

Try killing nodenum=328 first; if that does not resolve it, kill nodenum=253, and failing that kill [238].

 

We went on killing the other processes, and the split in session 295 finally succeeded.
SQL> ALTER TABLE A_PDA_SP_STATS SPLIT PARTITION P_MAX AT(20090609)
INTO(PARTITION P_20090608 TABLESPACE TS_DATA_A ,PARTITION P_MAX TABLESPACE TS_DATA_A);
Table altered.
Elapsed: 00:31:03.21
Splitting the next partition afterwards completed quickly as well.
SQL> ALTER TABLE A_PDA_SP_STATS SPLIT PARTITION P_MAX AT(20090610)
2   INTO(PARTITION P_20090609 TABLESPACE TS_DATA_A
3   ,PARTITION P_MAX TABLESPACE TS_DATA_A);
Table altered.
Elapsed: 00:00:00.02
Problem solved.

 

Example 3: 11.2.0.3 two-node RAC, sessions hanging on both nodes while simulating a TX lock

 

============================================================================

Extra information that will be dumped at higher levels:

[level  4] :   1 node dumps — [LEAF] [LEAF_NW]

[level  5] :   3 node dumps — [NO_WAIT] [INVOL_WT] [SINGLE_NODE] [NLEAF] [SINGLE_NODE_NW]

 

State of ALL nodes

([nodenum]/cnode/sid/sess_srno/session/ospid/state/[adjlist]):

[9]/1/10/193/0x95231978/24694/NLEAF/[136]

[136]/1/137/2227/0x95376400/18342/LEAF/

[481]/2/10/3041/0x95231978/24814/NLEAF/[136]

[726]/2/255/3581/0x94d0a760/1112/NLEAF/[136]

 

Analysis:

---->the session with nodenum=9 (i.e. sid 10 on instance 1) has the nodenum=136 session as its blocker

---->the nodenum=136 session is the blocker, SID=137 on node 1

->the session with nodenum=481 (i.e. sid 10 on instance 2) has the nodenum=136 session as its blocker

->the session with nodenum=726 (i.e. sid 255 on instance 2) has the nodenum=136 session as its blocker

 

  •  *** 2015-11-13 19:20:12.061

=============================================================================

  •  END OF HANG ANALYSIS

============================================================================

  •  *** 2015-11-13 19:20:12.064
  •  =============================================================================
  •  HANG ANALYSIS DUMPS:
  •    oradebug_node_dump_level: 3
  •  =============================================================================
  •  State of LOCAL nodes
  •  ([nodenum]/cnode/sid/sess_srno/session/ospid/state/[adjlist]):
  •  [9]/1/10/193/0x95231978/24694/NLEAF/[136]
  •  [136]/1/137/2227/0x95376400/18342/LEAF/
  •     ·  No processes qualify for dumping.
  •     ·  =============================================================================
  •  HANG ANALYSIS DUMPS: END
  •  =============================================================================
  •  session (kjzddbx) switches to a new action
  •  PORADATA SERIAL:1 RINST:2 RDIAG:31036
  •  PORADATA SERIAL:1 RINST:2 REMOTE EXEC OSPID:31036 =====》》execution information on the remote node
  •  PORADATA SERIAL:1 RINST:2 TRACEFILE:/u02/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_diag_31036.trc  =====》》trace file on the remote node
  •  PORADATA SERIAL:1 RINST:2  DONE

 

In the trace file on node 2 we find:

  •  State of LOCAL nodes
  •  ([nodenum]/cnode/sid/sess_srno/session/ospid/state/[adjlist]):
  •  [481]/2/10/3041/0x95231978/24814/NLEAF/[136]
  •  [726]/2/255/3581/0x94d0a760/1112/NLEAF/[136]

---->sid 10's blocker is nodenum=136, and 136 does not appear in State of LOCAL nodes, meaning nodenum=136 lives on the other instance.

---->likewise, sid 255's blocker is nodenum=136, also absent from State of LOCAL nodes, again meaning nodenum=136 lives on the other instance.

 

Introduction to Oracle's hanganalyze tool, with tested examples and analysis

Introduction to automatic PGA management

PGA: Process Global Area, also commonly called the Program Global Area.
In Oracle 8i, the DBA tuned the PGA by adjusting parameters such as:
SORT_AREA_SIZE, HASH_AREA_SIZE,
BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE, etc.
From Oracle 9i on, automatic PGA management is used and only the PGA_AGGREGATE_TARGET parameter needs to be set.
PGA_AGGREGATE_TARGET can be changed dynamically while the instance runs.

Automatic SQL execution memory management is enabled by setting WORKAREA_SIZE_POLICY to AUTO and specifying a value for PGA_AGGREGATE_TARGET. Both parameters can be set dynamically.
From 9i R2 on, the PGA Advisory can be used to choose a more reasonable PGA size.

PGA_AGGREGATE_TARGET does not set a hard limit on PGA size; areas such as PL/SQL working memory are beyond its reach.
The PGA splits into the work areas and everything outside them.
A work area is memory allocated inside the database while a session executes SQL, to hold intermediate results.
Examples: the sort area (memory for sorting) and the hash area (memory for hash joins). This part is directly tied to SQL execution efficiency; a larger hash area, say, makes a hash join faster.
pga_aggregate_target really only limits the work areas; when a work area hits the pga_aggregate_target limit, an ORA-4030 is raised.
The hidden parameter _pga_max_size merely limits the work area of a single process, a smaller cap nested inside pga_aggregate_target.
Memory outside the work areas is limited by neither pga_aggregate_target nor _pga_max_size.
That is why the PGA is often seen exceeding pga_aggregate_target, usually because huge amounts of data were loaded into PL/SQL variables and collections.
The typical culprit is bulk collect; tools such as imp/exp and SQL*Loader can cause the same problem.
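This kind of PL/SQL growth can be bounded with the LIMIT clause of BULK COLLECT, fetching in fixed-size batches instead of loading the whole result set into one collection. A minimal sketch; the table and batch size are illustrative:

```sql
DECLARE
  CURSOR c IS SELECT object_name FROM dba_objects;  -- stand-in for a large source
  TYPE t_names IS TABLE OF dba_objects.object_name%TYPE;
  l_names t_names;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 500;  -- caps PGA use per fetch
    EXIT WHEN l_names.COUNT = 0;
    NULL;  -- process the batch here
  END LOOP;
  CLOSE c;
END;
/
```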

In 9iR2, PGA_AGGREGATE_TARGET only takes effect for dedicated servers.
In 10g, PGA_AGGREGATE_TARGET works for both dedicated and shared servers.
In 11g with AMM, the PGA_AGGREGATE_TARGET setting keeps AMM from sizing the PGA below that value.
The size of a process is not limited by any database parameter, only by OS kernel limits or user shell limits.

Reference Oracle MOS documents:

Automatic PGA Memory Management (Doc ID 223730.1)

FAQ: ORA-4030 [Video] (Doc ID 399497.1)
Keep in mind that the PGA_AGGREGATE_TARGET does not limit the amount of PGA memory usage. It is only a target and is used to dynamically size the process work areas. It does not affect other areas of the PGA that are allowed to grow beyond this limit.

There are certain areas of PGA that cannot be controlled by initialization parameters. Such areas include PL/SQL memory collections such as PL/SQL tables and VARRAYs, and local PL/SQL variables. Depending on the programming code and amount of data being handled these areas can grow very large (up to 20G internal limit on 10) and can consume large amounts of memory. This memory growth can be controlled by good programming practices. As an example, use LIMIT clause with BULK COLLECT.

PGA-related query scripts

## PGA growth between AWR snapshots
SQL> set linesize 180
SQL> SELECT SNAP_ID,INSTANCE_NUMBER,name,value FROM DBA_HIST_PGASTAT where name in('max processes count','total PGA allocated') and snap_id between 24943 and 24946;

SNAP_ID INSTANCE_NUMBER NAME VALUE
———- ————— —————————— ————————
24943 1 max processes count 126
24943 1 total PGA allocated 608,856,064
24944 1 max processes count 126
24944 1 total PGA allocated 630,258,688
24945 1 max processes count 126
24945 1 total PGA allocated 678,053,888
24946 1 max processes count 126
24946 1 total PGA allocated 618,394,624

8 rows selected.

SELECT a.SNAP_ID,a.INSTANCE_NUMBER,a.name,a.value,b.BEGIN_INTERVAL_TIME,b.END_INTERVAL_TIME FROM DBA_HIST_PGASTAT a,DBA_HIST_SNAPSHOT b
where a.name in('max processes count','total PGA allocated') and a.snap_id=b.snap_id and b.BEGIN_INTERVAL_TIME>sysdate-1;

#####################################################
2. Sizing and tuning the PGA
1. Rules of thumb for PGA sizing:
- For OLTP systems: PGA_AGGREGATE_TARGET = (physical memory * 80%) * 20%, i.e. physical memory * 0.16
- For DSS systems: PGA_AGGREGATE_TARGET = (physical memory * 80%) * 50%, i.e. physical memory * 0.4

2. Checking PGA efficiency:
select
trunc (
(sum(case when name like 'workarea executions - optimal'
then value else 0 end) *100) /
(
sum(case when name like 'workarea executions - optimal'
then value else 0 end) +
sum(case when name like 'workarea executions - onepass'
then value else 0 end) +
sum(case when name like 'workarea executions - multipass'
then value else 0 end)
)
) optimal_percent
from v$sysstat
where name like 'workarea executions - %'
/

workarea memory allocated: total PGA in KB
workarea executions - optimal: optimal executions
workarea executions - onepass: a single, unavoidable pass of sort data to disk
workarea executions - multipass: several passes of sort data to disk (should be avoided)

3. Instance-level PGA usage statistics
SELECT * FROM V$PGASTAT;
For Oracle 9iR2, 10g, and 11g, the PGA hit-ratio formula is:
total bytes processed * 100
PGA Cache Hit Ratio = ------------------------------------------------------
(total bytes processed + total extra bytes read/written)
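The ratio can be computed from V$PGASTAT directly (the view also exposes a precomputed 'cache hit percentage' row); a sketch using the two underlying statistics:

```sql
-- PGA cache hit ratio from the raw V$PGASTAT statistics
SELECT round(p.value * 100 / (p.value + e.value), 2) AS pga_cache_hit_pct
FROM (SELECT value FROM v$pgastat WHERE name = 'total bytes processed') p,
     (SELECT value FROM v$pgastat WHERE name = 'extra bytes read/written') e;
```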

4. Bucket distribution and usage in the PGA (fragmentation check)
SELECT LOW_OPTIMAL_SIZE/1024 low_kb,(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
optimal_executions, onepass_executions, multipasses_executions
FROM v$sql_workarea_histogram
WHERE total_executions != 0;

SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
onepass_count, round(onepass_count*100/total, 2) onepass_perc,
multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
(SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
sum(OPTIMAL_EXECUTIONS) optimal_count,
sum(ONEPASS_EXECUTIONS) onepass_count,
sum(MULTIPASSES_EXECUTIONS) multipass_count
FROM v$sql_workarea_histogram
WHERE low_optimal_size > 64*1024);  -- for 64 K optimal size

5. Operations currently in the PGA (and whether they spill to temp segments)
SELECT to_number(decode(SID, 65535, NULL, SID)) sid,
operation_type OPERATION,trunc(EXPECTED_SIZE/1024) ESIZE,
trunc(ACTUAL_MEM_USED/1024) MEM, trunc(MAX_MEM_USED/1024) "MAX MEM",
NUMBER_PASSES PASS, trunc(TEMPSEG_SIZE/1024) TSIZE
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;

6. Using the v$pga_target_advice view; it depends on:
- PGA_AGGREGATE_TARGET
- STATISTICS_LEVEL. Set this to TYPICAL (the default) or ALL;

SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM v$pga_target_advice;

#######################################
1. Top 10 work areas
Finding the top ten work areas requiring the most cache memory:

select *
from
(select workarea_address, operation_type, policy, estimated_optimal_size
from v$sql_workarea
order by estimated_optimal_size DESC)
where ROWNUM <=10;

2. Work areas using maximum memory
Finding the percentage of work areas using maximum memory:

select operation_type, total_executions * 100 / optimal_executions "%cache"
From v$sql_workarea
Where policy='AUTO'
And optimal_executions > 0
Order By operation_type;

3. The biggest work areas currently allocated in the system
Finding the top ten biggest work areas currently allocated in the system:

select c.sql_text, w.operation_type, top_ten.wasize
From (Select *
From (Select workarea_address, actual_mem_used wasize
from v$sql_workarea_active
Order by actual_mem_used)
Where ROWNUM <=10) top_ten,
v$sql_workarea w,
v$sql c
Where w.workarea_address=top_ten.workarea_address
And c.address=w.address
And c.child_number = w.child_number
And c.hash_value=w.hash_value;

4. Finding the percentage of memory that is over and under allocated:

select total_used,
under*100/(total_used+1) percent_under_use,
over*100/(total_used+1) percent_over_used
From
( Select
sum(case when expected_size > actual_mem_used
then actual_mem_used else 0 end) under,
sum(case when expected_size < actual_mem_used
then actual_mem_used else 0 end) over,
sum(actual_mem_used) total_used
From v$sql_workarea_active
Where policy='AUTO') usage;

5. List the largest process:
/* Do NOT eliminate all background process because certain background processes do need to be monitored at times */

SELECT pid, spid, substr(username,1,20) “USER”, program, pga_used_mem, pga_alloc_mem, pga_freeable_mem, pga_max_mem
FROM v$process
WHERE pga_alloc_mem = (SELECT max(pga_alloc_mem)
FROM v$process
WHERE program NOT LIKE '%LGWR%');

Introduction to Oracle automatic PGA management, with related query methods

1. The dead-transaction concept

Dead transactions arise when the database is shut down abnormally or a transaction's process is terminated abnormally, e.g. kill -9 or shutdown abort.
A dead transaction can no longer be observed through v$transaction, so an internal table must be used instead.
That internal table is x$ktuxe, which records the recovery progress of dead transactions:

Whether dead-transaction rollback runs in parallel is controlled by the fast_start_parallel_rollback parameter.
It has three values:
FALSE --> Parallel rollback is disabled; SMON performs serial recovery;
LOW --> Limits the maximum degree of parallelism to 2 * CPU_COUNT (the 11gR2 default);
HIGH --> Limits the maximum degree of parallelism to 4 * CPU_COUNT.
The corresponding rollback slaves look like ora_p025_orcl, ora_p022_orcl: background processes started for parallel rollback.
The parameter value can be modified dynamically.

2. About parallel rollback:

Parallel rollback is an Oracle feature meant to speed up the rollback of failed transactions and shorten rollback time, and most of the time it achieves that.
In some situations, however, the parallel rollback slaves contend with one another for resources,
e.g. buffer busy waits / wait for a undo record; in severe cases they block one another and recovery becomes extremely slow.
This behavior can hang the database, or part of the business workload.
From experience, two situations around parallel rollback are the ones to fear:
1. bugs related to parallel rollback, of which there are quite a few
2. rollback of a large transaction
The relevant MOS document on transaction recovery: Master Note: Troubleshooting Database Transaction Recovery (Doc ID 1494886.1).

Steps for analyzing this class of parallel-rollback problems:
1. analyze alert.log
2. check whether a blocking (block=1) process exists and whether there is a dead transaction
3. analyze the AWR and ASH from the incident window
4. analyze I/O and CPU during the incident
5. analyze the waits of the operations involved

With HIGH or LOW, the degree of parallelism (4x or 2x the CPU count) is also capped by other parameters such as PARALLEL_MAX_SERVERS, which is the real upper bound on the degree of parallelism.
The default value of PARALLEL_MAX_SERVERS is
PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5.
PARALLEL_THREADS_PER_CPU and CPU_COUNT are both initialization parameters.
concurrent_parallel_users depends on memory management: 1 if automatic memory management is disabled, 2 with automatic PGA management, and 4 with automatic memory management enabled.
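As a worked example of that default (numbers illustrative): a 4-CPU host with PARALLEL_THREADS_PER_CPU=2 and automatic PGA management (concurrent_parallel_users=2) gets 2 * 4 * 2 * 5 = 80. The effective values on a real system can be checked with:

```sql
-- inputs to, and result of, the PARALLEL_MAX_SERVERS default formula
select name, value from v$parameter
where name in ('parallel_max_servers','parallel_threads_per_cpu','cpu_count');
```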

_cleanup_rollback_entries --> number of undo entries to apply per transaction cleanup; raising it speeds up rollback, but it is a static parameter and requires a restart to set.

When parallel rollback hangs, it can be turned off:
Per Master Note: Troubleshooting Database Transaction Recovery (Doc ID 1494886.1):
If you change the value of this parameter, then transaction recovery will be stopped and restarted with the new implied degree of parallelism.
alter system set fast_start_parallel_rollback=false;

How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active (Doc ID 238507.1) first disables SMON recovery (this may be skipped in favor of killing the parallel recovery slaves directly), kills the slave processes at OS level, then sets fast_start_parallel_rollback=false and re-enables SMON recovery.
The procedure:
SQL> select pid, program from v$process where program like '%SMON%';
PID PROGRAM
---------- ------------------------------------------------
6 oracle@stsun7 (SMON)

SQL> oradebug setorapid <SMON's Oracle PID>
SQL> oradebug event 10513 trace name context forever, level 2

select SPID from V$PROCESS where PID in (select PID from V$FAST_START_SERVERS);
then kill these processes from the OS level with: kill -9 n

alter system set fast_start_parallel_rollback=false;

SQL> oradebug setorapid <SMON's Oracle PID>
SQL> oradebug event 10513 trace name context off
To trace the SMON process:
event = '10500 trace name context forever, level 1'

3. About disabling transaction recovery:

-- This is not recommended; it is generally used only when undo segments are corrupted and the database cannot open.
Disable Transaction Recovery
Disabling transaction recovery can temporarily open up the database.
SMON will stop performing all transaction recovery.
There is no way to stop only one transaction recovery / a few transaction recoveries.
When transaction recovery is disabled, any data modified by a dead transaction but required by other transactions is recovered on an "as needed" basis.
We always suggest disabling transaction recovery only with the help of Oracle Support.
NOTE: Oracle does not recommend disabling transaction recovery and opening the database for business use.

 

4. Useful scripts for large-transaction rollback

1. Query dead transactions and their sizes:
select ktuxeusn USN, ktuxeslt Slot, ktuxesqn Seq, ktuxesta State, ktuxesiz Undo,to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Current_Time"
from x$ktuxe
where ktuxesta <> 'INACTIVE'
and ktuxecfl like '%DEAD%'
order by ktuxesiz asc;

select distinct KTUXECFL,count(*) from x$ktuxe group by KTUXECFL;

KTUXECFL COUNT(*)
———————— ———-
DEAD 1
NONE 2393
SCO|COL 8

===========
ktuxeusn – Undo Segment Number
ktuxeslt – Slot number
ktuxesqn – Sequence
ktuxesta – State
ktuxesiz – Undo Blocks Remaining
ktuxecfl – Flag
===========
USN SLOT SEQ STATE UNDO
——- ———- ——- ———- ———-

9 9 335 ACTIVE 10337

2. Query the undo segments that contain the dead transaction:
select useg.segment_name, useg.segment_id, useg.tablespace_name, useg.status
from dba_rollback_segs useg
where useg.segment_id in (select unique ktuxeusn
from x$ktuxe
where ktuxesta <> 'INACTIVE'
and ktuxecfl like '%DEAD%');

SEGMENT_NAME SEGMENT_ID TABLESPACE_NAME STATUS
-------------------- ----------------- --------------------------- ------------
_SYSSMU9$ 9 UNDOTBS1 ONLINE

select usn,name
from v$rollname
where usn in (select unique ktuxeusn
from x$ktuxe
where ktuxesta <> 'INACTIVE'
and ktuxecfl like '%DEAD%');

3. Time needed for large-transaction recovery
alter session set nls_date_format='yyyymmdd hh24:mi:ss';
select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo",
decode(cputime,0,'unknown',sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400)) "Estimated time to complete"
from v$fast_start_transactions;
USN STATE Total Done ToDo Estimated time to Complete
——– —————- ——– ——– ——– ———————————
5 RECOVERING 16207 14693 1514 11-may-2012 08:05:40

4. Estimating the time a dead-transaction rollback will take:
select distinct KTUXECFL,count(*) from x$ktuxe group by KTUXECFL;
KTUXECFL COUNT(*)
———————— ———-
DEAD 1
NONE 2393
SCO|COL 8

select ADDR,KTUXEUSN,KTUXESLT,KTUXESQN,KTUXESIZ from x$ktuxe where KTUXECFL='DEAD';
ADDR KTUXEUSN KTUXESLT KTUXESQN KTUXESIZ
—————- ———- ———- ———- ———-
00002B92FF5D5F68 15 12 314961 43611
--KTUXESIZ is the number of undo blocks still to roll back.

declare
l_start number;
l_end number;
begin
select ktuxesiz
into l_start
from x$ktuxe
where KTUXEUSN = 10
and KTUXESLT = 39; --------- fill in the actual numbers here
dbms_lock.sleep(60); --------- in seconds; a somewhat larger value gives a more accurate estimate.
select ktuxesiz
into l_end
from x$ktuxe
where KTUXEUSN = 10
and KTUXESLT = 39; --------- fill in the actual numbers here
dbms_output.put_line('time cost Day:' ||
round(l_end / (l_start - l_end) / 60, 2));
end;
/
(l_start - l_end) --> blocks recovered during the 60 seconds of dbms_lock.sleep(60)
l_end --> blocks still to recover
l_end / (l_start - l_end) therefore gives the number of minutes still needed;


5. Query the number of slave processes during parallel recovery
select * from v$fast_start_servers;
STATE UNDOBLOCKSDONE PID XID
----------- -------------------------- ---------- ----------------
RECOVERING 54 18 001F005C00001BD6
RECOVERING 0 20 001F005C00001BD6

If only one process has STATE RECOVERING and the rest are IDLE, disabling parallel rollback is advisable --> FAST_START_PARALLEL_ROLLBACK=FALSE.

Introduction to Oracle large-transaction rollback, with case analysis

1. The kfod (Kernel Files OSM Disk) tool

KFOD (Kernel Files OSM Disk) simulates disk discovery from the operating-system level.

The tool is found in $GRID_HOME/bin (or, for 10gR2, ASM_HOME/bin).

Notes:

1) KFOD is also used during installation (by OUI, DBCA, or ASMCA) to perform disk discovery

2) after a failed installation (e.g. $GRID_HOME/bin does not exist), KFOD can be found in the stage directory /grid/stage/ext/bin/, with LD_LIBRARY_PATH set to /grid/stage/ext/lib

Does it need GI to be running? Without CSS it fails:

[grid@luda5 ~]$ kfod  asm_diskstring='/dev/asmdisk*'

KFOD-00301: Unable to contact Cluster Synchronization Services (CSS). Return code 2 from kgxgncin.

 

Reference document:

ASM tools used by Support : KFOD, KFED, AMDU (文档 ID 1485597.1)

 

 

[grid@luda5 ~]$  kfod di=all group=diskgroup ds=true

——————————————————————————–

 Disk          Size Path                                     Disk Group   User     Group   

================================================================================

   1:       1024 Mb /dev/asm-diskb                           OCR          grid     asmadmin

   2:       4096 Mb /dev/asm-diskc                           DATA         grid     asmadmin

KFOD returned no data

[grid@luda5 ~]$ kfod di=all op=groups

——————————————————————————–

Group          Size          Free Redundancy Name           

================================================================================

   1:       1024 Mb        628 Mb     EXTERN OCR            

   2:       4096 Mb       3376 Mb     EXTERN DATA

[grid@luda5 ~]$ kfod disks=all status=true

——————————————————————————–

 Disk          Size Header    Path                                     User     Group   

================================================================================

   1:       1024 Mb MEMBER    /dev/asm-diskb                           grid     asmadmin

   2:       4096 Mb MEMBER    /dev/asm-diskc                           grid     asmadmin

——————————————————————————–

ORACLE_SID ORACLE_HOME                                                          

================================================================================

     +ASM1 /u01/11.2.0/grid

 

kfod status=TRUE asm_diskstring='/dev/asmdisk*' disk=all dscvgroup=TRUE

 

2. The kfed (Kernel Files metadata EDitor) tool

KFED can be used to analyze or repair ASM disk header information when a disk group cannot be mounted.

KFED is installed starting with 11.1; for older versions it must be built manually:

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk ikfed

It is, however, not part of the GI installation media, so it is unavailable until the GI installation completes.

 

2.1 asm磁盘头的自动备份:

从版本10.2.0.5开始ASM 会对disk header做一个额外的备份。即aun=1 的最后第二个block中备份了一份KFBTYP_DISKHEAD 这个ASM Disk header的作用是当真的KFBTYP_DISKHEAD被意外覆盖或损坏时可以使用Oracle 工具 KFED使用repair选项来修复Disk header

ASM由于各种故障,非常容易导致disk header block损坏,对它的备份和回复显的尤为重要

 

 

ASM磁盘头自动备份的存放的块位置:可用kfed read /dev/asm-diskb blkn=510 验证

始终保存在第2AU的倒数第2个块上(PST)

对于默认的磁盘组,AU Size1M,每个AU中可以存储256个块,块号为0-255

1AU存储256个块,第2AU最后1个块号为255,倒数第2个块号是第2AU的第254个块,也就是整体的第510个块(从第1AU的第1个块往后算起)。

For a disk group with an 8M AU size, each AU holds 2048 blocks, numbered 0-2047. The second-to-last block of the second AU is relative block 2046, i.e. absolute block 4094 (counting from block 0 of the first AU).
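The block-position arithmetic above can be sketched in shell (assuming the 4K metadata block size seen in the kfed output later in this post):

```shell
# For each AU size, compute where the header backup copy lives:
# the second-to-last block of AU 1, with 4K metadata blocks.
blk_size=4096
for au_mb in 1 4 8; do
  blocks_per_au=$(( au_mb * 1024 * 1024 / blk_size ))
  rel=$(( blocks_per_au - 2 ))       # blkn relative to the start of AU 1
  abs=$(( 2 * blocks_per_au - 2 ))   # blkn counted from block 0 of AU 0
  echo "${au_mb}M AU: relative blkn=${rel}, absolute blkn=${abs}"
done
```

For the default 1M AU this prints relative blkn=254 and absolute blkn=510, matching the kfed read blkn=510 verification used below.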

 

2.2 Manually backing up the ASM disk header:

Of the manual RAC ASM disk-header backup/restore methods below, methods 1 and 2 are the most useful.

 

1. Use dd to back up the first 4K of the disk directly; when the header information is lost, dd the copy back:

Backup:  dd if=/dev/asm-diskb    of=/home/oracle/asmheader-b.dd   bs=4096 count=1

Restore: dd if=/home/oracle/asmheader-b.dd    of=/dev/asm-diskb
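Method 1 can be rehearsed safely end to end on a scratch file instead of a real ASM disk; the /tmp paths below are illustrative stand-ins for /dev/asm-diskb:

```shell
# Rehearsal of dd header backup/restore on a scratch file.
disk=/tmp/fake-asm-disk
backup=/tmp/asmheader.dd
dd if=/dev/urandom of="$disk" bs=4096 count=4 2>/dev/null              # fake 16K "disk"
dd if="$disk" of="$backup" bs=4096 count=1 2>/dev/null                 # back up the 4K header
dd if=/dev/zero of="$disk" bs=4096 count=1 conv=notrunc 2>/dev/null    # corrupt the header
dd if="$backup" of="$disk" bs=4096 count=1 conv=notrunc 2>/dev/null    # restore it
restored=$(head -c 4096 "$disk" | cmp -s - "$backup" && echo yes)
echo "header restored: ${restored}"
```

Note that conv=notrunc matters when the target is a regular file (dd would otherwise truncate it); a block device is never truncated, so the plain restore command above is safe on a real disk.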

 

2. Use Oracle's kfed tool to dump the header information to a readable text file; to restore, merge the text back in with kfed:

Backup:  kfed read  /dev/asm-diskb      aunum=0 blknum=0 text=raw1.txt

Restore: kfed write /dev/asm-diskb      aunum=0 blknum=0 text=raw1.txt

 

3. In 11g, asmcmd added the md_backup and md_restore commands, but these back up only the disk group's metadata (structure) information; to restore, the disk group must already be mounted, and the structure is then imported, similar to exp/imp. This method does not help when the disk group itself is damaged.

Backup:  asmcmd md_backup -b /home/oracle/asmheader-b.dd.txt

Restore: asmcmd md_restore -t full -g ocr -i -b /home/oracle/asmheader-b.dd.txt

3. Repairing a corrupted ASM disk header with kfed and the automatic backup

Simulation on Linux x86-64 + 11.2.0.4 RAC: a corrupted ASM disk header prevents the cluster from starting.

Commands used:

[grid@luda5 ~]$ kfed read /dev/asm-diskb

[grid@luda5 oracle]$ kfed read /dev/asm-diskb blkn=510

[grid@luda5 ~]$ kfed repair /dev/asm-diskb

 

[grid@luda5 ~]$ dd if=/dev/asm-diskb of=/dev/asm-diskb bs=4096 skip=510 count=1
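The dd alternative above (copying the backup at block 510 back over block 0) can also be rehearsed on a scratch file; /tmp/fake-asm-disk-510 and the HEADERCOPY marker are illustrative:

```shell
# Plant a marker in block 510 of a scratch file, then copy block 510 over block 0,
# mimicking the restore-from-backup-block dd command.
d510=/tmp/fake-asm-disk-510
dd if=/dev/zero of="$d510" bs=4096 count=511 2>/dev/null
printf 'HEADERCOPY' | dd of="$d510" bs=4096 seek=510 conv=notrunc 2>/dev/null
dd if="$d510" of="$d510" bs=4096 skip=510 count=1 conv=notrunc 2>/dev/null
head -c 10 "$d510"    # prints HEADERCOPY
```

skip= offsets the input and seek= offsets the output; the restore reads at block 510 and writes at block 0 (seek defaults to 0).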

1. Simulate corruption of the header of the OCR/voting disk

 

/dev/asm-diskb is the disk that stores the OCR/voting files.

[root@luda5 ~]# dd if=/dev/zero of=/dev/asm-diskb bs=4096 count=1   (simulate disk header corruption)

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.006142 s, 667 kB/s

[root@luda5 ~]# crsctl start has

CRS-4123: Oracle High Availability Services has been started.

 

[root@luda5 ~]# ps -ef|grep css

root      5642     1  0 09:57 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor

root      5666     1  0 09:57 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent

grid      5677     1  0 09:57 ?        00:00:00 /u01/11.2.0/grid/bin/ocssd.bin

root      5949  2285  0 09:58 pts/0    00:00:00 grep css

[root@luda5 ~]# su - grid

[grid@luda5 ~]$ cd /u01/11.2.0/grid/log/luda5/

 

[grid@luda5 luda5]$ tail alertluda5.log

2017-04-13 09:57:24.948:

[cssd(5677)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/11.2.0/grid/log/luda5/cssd/ocssd.log

2017-04-13 09:57:40.000:

[cssd(5677)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/11.2.0/grid/log/luda5/cssd/ocssd.log

2017-04-13 09:57:55.052:

[

 

[grid@luda5 cssd]$ gpnptool get

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/11.2.0/grid/bin/gpnptool.bin get -o-

 

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="4" ClusterUId="0a9dcd2199ec4faaff5f633c29682816" ClusterName="luda-cluster1" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.57.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="192.168.56.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm-disk*" SPFile="+OCR/luda-cluster1/asmparameterfile/registry.253.939414867"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>nrC6yv5F/a9GJYQH/+LQMk0vN88=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>gad/sRBw75TF/RMoZfUG7owOxxSndeUjdm4gPjBir42T0RR8qsQowSKPeJEp+ouBjAeVIrI+RyCb8roy8/28hrsQg/xJsbNoZGR6EFK8ZBZsos85kaIufwoTakiWCBPerDwSH6J2vZE9XeZqhYGwffSyF2WDKX7aafJhK+5qh/Y=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>

Success.

 

########################################################

########################################################

2. Troubleshooting why the cluster fails to start:

Reading the /dev/asm-diskb disk header with kfed shows it can no longer be read correctly:

[grid@luda5 ~]$ kfed read /dev/asm-diskb

kfbh.endian:                          0 ; 0x000: 0x00

kfbh.hard:                            0 ; 0x001: 0x00

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

kfbh.datfmt:                          0 ; 0x003: 0x00

kfbh.block.blk:                       0 ; 0x004: blk=0

kfbh.block.obj:                       0 ; 0x008: file=0

kfbh.check:                           0 ; 0x00c: 0x00000000

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

7F3310D66400 00000000 00000000 00000000 00000000  [................]

  Repeat 255 times

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

 

----->> Stop HAS

[root@luda5 ~]# crsctl stop has

CRS-2796: The command may not proceed when Cluster Ready Services is not running

CRS-4687: Shutdown command has completed with errors.

CRS-4000: Command Stop failed, or completed with errors.

[root@luda5 ~]# ps -ef|grep has

root      1165     1  0 09:49 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run

root      5455     1  0 09:56 ?        00:00:03 /u01/11.2.0/grid/bin/ohasd.bin reboot

root      7020  2285  0 10:02 pts/0    00:00:00 grep has

[root@luda5 ~]# kill -9 5455

[root@luda5 ~]# ps -ef|grep css

root      5642     1  0 09:57 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor

root      5666     1  0 09:57 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent

grid      5677     1  0 09:57 ?        00:00:01 /u01/11.2.0/grid/bin/ocssd.bin

root      7094  2285  0 10:02 pts/0    00:00:00 grep css

[root@luda5 ~]# kill -9 5677

 

[root@luda5 ~]# ps -ef|grep has

root      1165     1  0 09:49 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run

root      7160  2285  0 10:02 pts/0    00:00:00 grep has

 

 

3. Repairing the corrupted ASM disk header from the automatic backup:

First verify that the automatic backup of the ASM disk header is intact. With a 1M AU size it sits at absolute block 510:

The ASM disk header copy (KFBTYP_DISKHEAD) is stored in the second-to-last block of aun=1, the PST AU; relative to that AU this is blkn=254 for AUSIZE=1M, or blkn=1022 for AUSIZE=4M. Note that ASM generally counts from 0.

[grid@luda5 oracle]$ kfed read /dev/asm-diskb blkn=510

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                     254 ; 0x004: blk=254

kfbh.block.obj:              2147483648 ; 0x008: disk=0

kfbh.check:                  2208422217 ; 0x00c: 0x83a1d949

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8

kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000

kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000

kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000

kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000

kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000

kfdhdb.compat:                186646528 ; 0x020: 0x0b200000

kfdhdb.dsknum:                        0 ; 0x024: 0x0000

kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL

kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER

kfdhdb.dskname:                OCR_0000 ; 0x028: length=8

kfdhdb.grpname:                     OCR ; 0x048: length=3

kfdhdb.fgname:                 OCR_0000 ; 0x068: length=8

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.crestmp.hi:             33050356 ; 0x0a8: HOUR=0x14 DAYS=0x17 MNTH=0x3 YEAR=0x7e1

kfdhdb.crestmp.lo:           2295894016 ; 0x0ac: USEC=0x0 MSEC=0x224 SECS=0xd MINS=0x22

kfdhdb.mntstmp.hi:             33051050 ; 0x0b0: HOUR=0xa DAYS=0xd MNTH=0x4 YEAR=0x7e1

kfdhdb.mntstmp.lo:            372738048 ; 0x0b4: USEC=0x0 MSEC=0x1e2 SECS=0x23 MINS=0x5

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000       ------------->> block size

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000   ------------->> AU size

kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80

kfdhdb.dsksize:                    1024 ; 0x0c4: 0x00000400

kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002

kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001

kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002

kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002

kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000

kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000

kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000

kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000

kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000

kfdhdb.grpstmp.hi:             33050356 ; 0x0e4: HOUR=0x14 DAYS=0x17 MNTH=0x3 YEAR=0x7e1

kfdhdb.grpstmp.lo:           2295721984 ; 0x0e8: USEC=0x0 MSEC=0x17c SECS=0xd MINS=0x22

kfdhdb.vfstart:                     352 ; 0x0ec: 0x00000160

kfdhdb.vfend:                       384 ; 0x0f0: 0x00000180

kfdhdb.spfile:                       58 ; 0x0f4: 0x0000003a

kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000

kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000

kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000

kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000

kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000

kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000

kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000

kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000

kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000

kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000

kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000

kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000

kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000

kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000

kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000

kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000

kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000

kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000

kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000

kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000

kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000

kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000

kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000

kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000

kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000

kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000

kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000

kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000

kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000

kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000

kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000

kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000

kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000

kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000

kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000

kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000

kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000

kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000

kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000

kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000

kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000

kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000

kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000

kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000

kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000

kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000

kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000

kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000

kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000

kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000

kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000

kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000

kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000

kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000

kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000

kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000

kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000

kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

[root@luda5 ~]# su - grid

[grid@luda5 ~]$ kfed repair -help

as/mlib         ASM Library [asmlib='lib']

aun/um          AU number to examine or update [AUNUM=number]

aus/z           Allocation Unit size in bytes [AUSZ=number]

blkn/um         Block number to examine or update [BLKNUM=number]

blks/z          Metadata block size in bytes [BLKSZ=number]

ch/ksum         Update checksum before each write [CHKSUM=YES/NO]

cn/t            Count of AUs to process [CNT=number]

de/v            ASM device to examine or update [DEV=string]

dm/pall         Don't suppress repeated lines when dumping corrupt blocks [DMPALL=YES/NO]

o/p             KFED operation type [OP=READ/WRITE/MERGE/REPAIR/NEW/FORM/FIND/STRUCT]

p/rovnm         Name for provisioning purposes [PROVNM=string]

s/eek           AU number to seek to [SEEK=number]

te/xt           File name for translated block text [TEXT=string]

ty/pe           ASM metadata block type number [TYPE=number]

 

kfed command to repair the corrupted disk header:

[grid@luda5 ~]$ kfed repair /dev/asm-diskb

[grid@luda5 ~]$ kfed read /dev/asm-diskb

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                       0 ; 0x004: blk=0

kfbh.block.obj:              2147483648 ; 0x008: disk=0

kfbh.check:                  1504642484 ; 0x00c: 0x59af05b4

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8

kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000

kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000

kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000

kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000

kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000

kfdhdb.compat:                186646528 ; 0x020: 0x0b200000

kfdhdb.dsknum:                        0 ; 0x024: 0x0000

kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL

kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER

kfdhdb.dskname:                OCR_0000 ; 0x028: length=8

kfdhdb.grpname:                     OCR ; 0x048: length=3

kfdhdb.fgname:                 OCR_0000 ; 0x068: length=8

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.crestmp.hi:             33050356 ; 0x0a8: HOUR=0x14 DAYS=0x17 MNTH=0x3 YEAR=0x7e1

kfdhdb.crestmp.lo:           2295894016 ; 0x0ac: USEC=0x0 MSEC=0x224 SECS=0xd MINS=0x22

kfdhdb.mntstmp.hi:             33051049 ; 0x0b0: HOUR=0x9 DAYS=0xd MNTH=0x4 YEAR=0x7e1

kfdhdb.mntstmp.lo:           3426309120 ; 0x0b4: USEC=0x0 MSEC=0x255 SECS=0x3 MINS=0x33

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80

kfdhdb.dsksize:                    1024 ; 0x0c4: 0x00000400

kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002

kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001

kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002

kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002

kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000

kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000

kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000

kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000

kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000

kfdhdb.grpstmp.hi:             33050356 ; 0x0e4: HOUR=0x14 DAYS=0x17 MNTH=0x3 YEAR=0x7e1

kfdhdb.grpstmp.lo:           2295721984 ; 0x0e8: USEC=0x0 MSEC=0x17c SECS=0xd MINS=0x22

kfdhdb.vfstart:                     352 ; 0x0ec: 0x00000160

kfdhdb.vfend:                       384 ; 0x0f0: 0x00000180

kfdhdb.spfile:                       58 ; 0x0f4: 0x0000003a

kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000

kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000

kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000

kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000

kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000

kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000

kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000

kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000

kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000

kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000

kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000

kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000

kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000

kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000

kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000

kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000

kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000

kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000

kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000

kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000

kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000

kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000

kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000

kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000

kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000

kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000

kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000

kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000

kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000

kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000

kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000

kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000

kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000

kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000

kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000

kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000

kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000

kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000

kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000

kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000

kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000

kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000

kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000

kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000

kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000

kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000

kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000

kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000

kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000

kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000

kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000

kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000

kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000

kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000

kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000

kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000

kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000

kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

 

[grid@luda5 ~]$ exit

logout

[root@luda5 ~]# crsctl start has

CRS-4123: Oracle High Availability Services has been started.

Verify that the cluster can start normally:

 [grid@luda5 luda5]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS       

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA.dg

               ONLINE  ONLINE       luda5                                         

ora.LISTENER.lsnr

               ONLINE  ONLINE       luda5                                         

ora.OCR.dg

               ONLINE  ONLINE       luda5                                         

ora.asm

               ONLINE  ONLINE       luda5                     Started             

ora.gsd

               OFFLINE OFFLINE      luda5                                         

ora.net1.network

               ONLINE  ONLINE       luda5                                         

ora.ons

               ONLINE  ONLINE       luda5                                         

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       luda5                                         

ora.luda5.vip

      1        ONLINE  ONLINE       luda5                                         

ora.luda6.vip

      1        ONLINE  INTERMEDIATE luda5                     FAILED OVER         

ora.ludarac.db

      1        ONLINE  ONLINE       luda5                     Open                

      2        ONLINE  OFFLINE                                                   

ora.cvu

      1        ONLINE  ONLINE       luda5                                         

ora.oc4j

      1        ONLINE  ONLINE       luda5                                         

ora.scan1.vip

      1        ONLINE  ONLINE       luda5          

Introduction to Oracle kfod and kfed, and repairing a corrupted ASM disk header