
Wait Events

The root cause of the enq: TX - allocate ITL entry wait event in Oracle 10g and later, and how to approach it

Yesterday a customer's core system had sessions hanging, and the top wait event was enq: TX - allocate ITL entry, which points to ITL contention. I had run quite a few tests on this in the past but never organised them, so I am writing them up here as a diagnostic reference. The material is divided into three main parts:

1. What ITL is and how it works
2. Testing ITL contention caused by INITRANS being unable to extend and allocate slots on a data block
3. Identifying contention on the ITL slot reserved for recursive transactions in index blocks

 

1. How ITL works

ITL (Interested Transaction List) is a structure inside every Oracle data block that records the transactions which have touched that block. An ITL entry can be thought of as a record: at any one time it describes one transaction, committed or not. Once that transaction has committed, the entry can be reused, which is why ITL entries are also called ITL slots. Every data block contains one or more transaction slots, and every transaction that concurrently modifies the block occupies one of them. The default INITRANS is 1 for tables (2 for indexes) and MAXTRANS is 255; from Oracle 10g on, MAXTRANS can no longer be changed because the parameter is ignored. If a transaction never commits, it keeps holding its ITL slot; the slot records the transaction id, the undo (rollback segment) entry address, the transaction flags and so on. Once the transaction commits, the slot retains the commit SCN. Dumping a block shows the ITL information:

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0006.002.0000158e  0x0080104d.00a1.6e  --U-  734  fsc 0x0000.6c9deff0
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
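
For reference, a block dump like the one above is produced with the same command used later in this article; the trace file is written to user_dump_dest, and the file/block numbers here are only examples:

alter system dump datafile 1 block 61074;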

On a system with very heavy concurrency it is best to allocate enough ITL entries up front (in releases before 10g), since they do not waste much space, or to leave enough PCTFREE so the ITL list can grow. PCTFREE, however, can be consumed by row data, for example an UPDATE may fill up the block, so a block can also run out of internal space and cause ITL waits. In practice, from 10g onwards ITL waits on data blocks are usually caused by a shortage of block space rather than by an insufficient number of transaction slots. Under normal conditions a 2 KB block can hold at most 41 ITL entries, a 4 KB block at most 83 and an 8 KB block at most 169 (each ITL entry takes 24 bytes). The INITRANS shortage problem does not occur on index blocks: when there is not enough space to allocate an ITL slot, the block, whether a branch or a leaf, simply splits (an index block split).
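
As a quick reference (not part of the original test), the current INITRANS and PCTFREE of a segment can be checked in the dictionary and raised if necessary; a new INITRANS only applies to newly formatted blocks, so existing blocks need a reorganisation. The owner and object names below are assumptions:

-- check the current settings of the demo table
select table_name, ini_trans, max_trans, pct_free
  from dba_tables
 where owner = 'LUDA' and table_name = 'LUDA';

-- raise INITRANS / PCTFREE for blocks formatted from now on
alter table luda pctfree 20 initrans 10;

-- existing blocks only pick up the new values after a reorganisation;
-- rebuild any indexes left UNUSABLE by the move afterwards
alter table luda move;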

There is also a special case that causes ITL waits: contention for the recursive-transaction ITL slot in an index. An index branch block has exactly one ITL slot, reserved for the recursive transaction that performs a node split. In a leaf block, the first ITL slot is likewise reserved for split recursive transactions. Within one user transaction, each split is controlled by its own recursive transaction, except that when a child split forces its parent to split, both splits are controlled by the same recursive transaction. When two transactions need to split the same branch or leaf block at the same time, or two children of one branch block are split by two different transactions, this type of ITL wait occurs.
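
To get a rough feel for how often such index splits occur on an instance, the split statistics can be watched over time (a sketch, not part of the original test):

select name, value
  from v$sysstat
 where name in ('leaf node splits', 'leaf node 90-10 splits', 'branch node splits');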

 

2. Testing ITL contention caused by INITRANS being unable to extend and allocate slots on a data block

Let's test what happens when a data block has no space left to extend its ITL slots. We create a table luda with PCTFREE 0 and INITRANS 1, fill its blocks completely, and then update rows in a full block to reproduce the ITL wait.
Create the table luda with PCTFREE 0 and INITRANS 1:

SQL> create table luda(a int) pctfree 0 initrans 1;

Table created.

 

Insert a batch of rows:

SQL> begin
2 for i in 1..20000 loop
3 insert into luda values(i);
4 end loop;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL> commit ;

Commit complete.

 

SQL> select f,b,count(*) from (select dbms_rowid.rowid_relative_fno(rowid) f,dbms_rowid.rowid_block_number(rowid) b
2 from luda) group by f,b order by 3;

F B COUNT(*)
---------- ---------- ----------
1 61101 200
1 61093 734
1 61089 734
1 61095 734
1 61085 734
1 61099 734
1 61074 734
1 61077 734
1 61080 734
1 61092 734
1 61100 734
1 61083 734
1 61091 734
1 61097 734
1 61098 734
1 61075 734
1 61076 734
1 61078 734
1 61081 734
1 61084 734
1 61087 734
1 61096 734
1 61079 734
1 61094 734
1 61088 734
1 61090 734
1 61082 734
1 61086 734

 

The table now holds 20,018 rows spread over 28 data blocks, and every block except 61101 is full.
Next, dump the already full block 61074.

SQL> alter system dump datafile 1 block 61074;

System altered.

 

Block header dump: 0x0040ee92
Object id on Block? Y
seg/obj: 0xcb0a csc: 0x00.bb6a1 itc: 2 flg: - typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0005.020.0000013b 0x0080078c.013c.3a --U- 734 fsc 0x0000.000bb765
0x02 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
-- Note that even with INITRANS 1 the block gets 2 ITL slots by default (itc=2).
data_block_dump,data header at 0xd7fe45c
===============
tsiz: 0x1fa0
hsiz: 0x5ce
pbl: 0x0d7fe45c
bdba: 0x0040ee92
76543210
flag=--------
ntab=1
nrow=734
frre=-1
fsbo=0x5ce
fseo=0xbf8

avsp=0x4
tosp=0x4
0xe:pti[0] nrow=734 offs=0

 

Now test slot allocation when the block is full. From the query above (and from nrow=734 in the dump) we know a full block holds 734 rows, so in this step three sessions update rows 100, 200 and 300 of block 61074.

Session 1 updates row 100:

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61074
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=100;

1 row updated.

 

Session 2 updates row 200:

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61074
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=200;

1 row updated.

 

Session 3 (SID 158) updates row 300 and hangs while executing the statement:

SQL> select sid from v$mystat where rownum=1;

SID
----------
158

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61074
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=300;
-- the session hangs at this point
 

alter system dump datafile 1 block 61074;

 

Block header dump: 0x0040ee92
Object id on Block? Y
seg/obj: 0xcb0a csc: 0x00.bb97e itc: 2 flg: - typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.014.00000159 0x008006bb.01f1.12 ---- 1 fsc 0x0000.00000000
0x02 0x0005.00e.0000013c 0x0080083c.014b.20 ---- 1 fsc 0x0000.00000000
-- This dump shows that the two previously free ITL slots are now both occupied by uncommitted transactions.
data_block_dump,data header at 0xd77645c
===============
tsiz: 0x1fa0
hsiz: 0x5ce
pbl: 0x0d77645c
bdba: 0x0040ee92
76543210
flag=--------
ntab=1
nrow=734
frre=-1
fsbo=0x5ce
fseo=0xbf8
avsp=0x4
tosp=0x4
0xe:pti[0] nrow=734 offs=0

 

 
-- The wait event of session 158 is the ITL-related wait enq: TX - allocate ITL entry
SQL> select sid,event from v$session where sid=158;

SID EVENT
---------- ----------------------------------------------------------------
158 enq: TX - allocate ITL entry

This confirms that when the block runs out of space, no new ITL entry can be allocated and the enq: TX - allocate ITL entry wait event appears.
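
On a live system the same situation can usually be spotted without dumping the block: both the waiter and its blocker are visible straight from v$session (a minimal sketch, assuming 10g or later where the blocking_session column is available):

select sid, serial#, event, blocking_session, row_wait_obj#, row_wait_block#
  from v$session
 where event = 'enq: TX - allocate ITL entry';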

Next, test a block that is not full. Of the 28 blocks that currently hold data in luda, block 61101 contains only 200 rows and uses roughly 30% of its space. For this test, four sessions each update one row of block 61101: rows 10, 20, 30 and 40:

session 1:

 

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61101
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=10;

1 row updated.

 

 

session 2:

 

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61101
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=20;

1 row updated.

 

session 3:

 

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61101
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=30;

1 row updated.

 

session 4:

SQL> update luda set a=a
2 where dbms_rowid.ROWID_BLOCK_NUMBER(rowid)=61101
3 and dbms_rowid.ROWID_ROW_NUMBER(rowid)=40;

1 row updated.

 

None of the four sessions was blocked. Dumping block 61101 shows that it now has 4 ITL slots, i.e. 2 slots were added automatically, confirming that ITL slots are extended on demand as long as there is enough free space in the block.

SQL> alter system dump datafile 1 block 61101;

System altered.

 


Block header dump: 0x0040eead
Object id on Block? Y
seg/obj: 0xcb0a csc: 0x00.bc003 itc: 4 flg: O typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.014.00000159 0x008006bb.01f1.13 ---- 1 fsc 0x0000.00000000
0x02 0x0009.016.0000013b 0x0080075c.0143.37 ---- 1 fsc 0x0000.00000000
0x03 0x0005.00e.0000013c 0x0080083c.014b.21 ---- 1 fsc 0x0000.00000000
0x04 0x0002.026.00000139 0x00800130.00fc.09 ---- 1 fsc 0x0000.00000000
-- itc=4: there are 4 uncommitted transactions on block 61101, one for each of sessions 1-4.

data_block_dump,data header at 0xe17048c
===============
tsiz: 0x1f70
hsiz: 0x1a2
pbl: 0x0e17048c
bdba: 0x0040eead
76543210
flag=--------
ntab=1
nrow=200
frre=-1
fsbo=0x1a2
fseo=0x1957
avsp=0x16c6
tosp=0x16c6
0xe:pti[0] nrow=200 offs=0

 

These tests show that since Oracle 10g, ITL contention on data blocks is mainly driven by block space: when many transactions hit the same block but the block cannot allocate additional transaction slots, ITL contention appears.
The next article focuses on telling apart contention on index blocks versus table data blocks: http://www.ludatou.com/?p=1920
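
As a head start on that distinction, the segment-level statistics show which segments, tables or indexes, have accumulated ITL waits instance-wide (a sketch; requires STATISTICS_LEVEL at TYPICAL or higher):

select owner, object_name, object_type, value as itl_waits
  from v$segment_statistics
 where statistic_name = 'ITL waits'
   and value > 0
 order by value desc;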

latch: cache buffers chains

Description from the Oracle support documentation:

Latch: cache buffers chains

Description:
Blocks in the buffer cache are placed on linked lists
(cache buffer chains) which hang off a hash table.
The hash chain that a block is placed on is based on the DBA
and CLASS of the block. Each hash chain is protected by a
single child latch. Processes need to get the relevant latch
to allow them to scan a hash chain for a buffer so that the
linked list does not change underneath them.

Contention: Contention for these latches can be caused by:

– Very long buffer chains.
There is a known problem that can result in long
buffer chains –
– very very heavy access to a single block.
This would require the application to be reviewed.

To identify the heavily accessed buffer chain look at
the latch stats for this latch under V$LATCH_CHILDREN
and match this to X$BH.HLADDR.

*** IMPORTANT: As of Oracle8i there are many hash buckets
to each latch and so there will be lots
of buffers under each latch.
In 8i the steps below will not help much.

Eg: Given ADDR from V$LATCH_CHILDREN for a heavily contended
child latch:
select dbafil, dbablk, class, state
from X$BH where HLADDR='address of latch';

One of these is 'potentially' a hot block in the database.

**Please see Note 163424.1 How To Identify a Hot Block Within The Database
to correctly identify this issue

Once the object/table is found you can reduce the number of blocks requested
on the particular object/table by redesigning the application or by
spreading the hits in the buffer cache over different hash chains.
You can achieve this by implementing PARTITIONING and storing segments of
the same table/object in different files.

*NOTE* IF YOU ARE RUNNING 8.1.7:

Please see Note 176129.1 ALERT: LATCH FREE And FREE_BUFFER_WAITS
Cause Performance Degradation/Hang

How to find the object that owns the hot block:

1. Run this SQL:

select CHILD#  "cCHILD"
,      ADDR    "sADDR"
,      GETS    "sGETS"
,      MISSES  "sMISSES"
,      SLEEPS  "sSLEEPS"
from v$latch_children
where name = 'cache buffers chains'
order by 5, 1, 2, 3;

Find the addresses (ADDR) with a high sleep count, then run the statement below:

column segment_name format a35
select /*+ RULE */
  e.owner ||'.'|| e.segment_name  segment_name,
  e.extent_id  extent#,
  x.dbablk - e.block_id + 1  block#,
  x.tch,
  l.child#
from
  sys.v$latch_children  l,
  sys.x$bh  x,
  sys.dba_extents  e
where
  x.hladdr  = 'ADDR' and
  e.file_id = x.file# and
  x.hladdr = l.addr and
  x.dbablk between e.block_id and e.block_id + e.blocks -1
  order by x.tch desc ;

2. Because the query above can be slow to run, and the investigation is often driven by one particular session's wait, the following method can be used instead:

Find the P1RAW address in v$session_wait, then run:

SELECT FILE# , dbablk, class, state ,tch
FROM x$bh WHERE hladdr='' order by tch;

Identify the file and block with the highest touch count (tch), then finally run:

select * from dba_extents where file_id = <file#> and <dbablk> between block_id and block_id + blocks - 1;
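
For completeness, the latch address that goes into hladdr above is simply the P1RAW of the waiting session, which can be read straight from v$session_wait (a sketch):

select sid, event, p1raw
  from v$session_wait
 where event = 'latch: cache buffers chains';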

About the log file parallel write wait

This wait event occurs only in the LGWR process: LGWR waits on it while writing redo entries from the log buffer to the redo log files. It behaves much like db file parallel write. Just as db file parallel write waits increase when the I/O subsystem is not overloaded but there are too many dirty buffers to write, log file parallel write waits can increase when I/O is fine but the redo volume is too high. The problem is bounded on one side by LGWR's throughput and on the other by the rate at which redo entries are generated, which ultimately comes down to the application.
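
One quick way to judge whether LGWR I/O itself is slow is the system-wide average wait for this event (a sketch; in 10g the average_wait column is in centiseconds):

select event, total_waits, time_waited, average_wait
  from v$system_event
 where event = 'log file parallel write';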

To reduce log file parallel write waits that are not caused by I/O problems, the usual angles to consider are:

1. Commit less frequently
2. Generate as few redo entries as possible
3. Avoid hot backups
4. Improve the I/O performance of the file system holding the redo logs

In day-to-day operation this event leaves us relatively little room for tuning, and with disaster recovery now standard for OLTP systems, running in NOLOGGING mode is usually not an option.

A brief look at the log buffer space wait event

What is log buffer space?

This wait event means a process is waiting for space in the log buffer: redo entries are being written into the log buffer faster than LGWR can flush them to the online redo logs, so the buffer fills up and log buffer space waits appear. In more detail, before writing a redo entry into the log buffer a process must first acquire the redo allocation latch and then find free space in the buffer; if no free space is available, the process waits on log buffer space. It may also wait on log file switch completion at the same time: if the online redo log is full a log switch takes place, the log buffer can fill up instantly during the switch, LGWR waits for the switch to finish (hence the switch-related waits), and once the switch completes many processes write redo entries into the log buffer at once and fill it again, producing more log buffer space waits.
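
The behaviour described above can be cross-checked with a few redo statistics and the current buffer size (a sketch):

-- how often sessions had to retry an allocation or wait for redo log space
select name, value
  from v$sysstat
 where name in ('redo buffer allocation retries',
                'redo log space requests',
                'redo log space wait time');

-- current size of the log buffer in bytes
select value from v$parameter where name = 'log_buffer';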

What information should be collected to analyse the 'log buffer space' wait event?

1. The log switch frequency at the time the event occurs
2. The memory settings of the instance plus an AWR or Statspack report
3. The instance's log switch frequency when the event occurs
4. The overall system I/O load when the event occurs
5. In a Data Guard environment, the protection mode and the redo transport lag

What triggers the 'log buffer space' wait event?

1. Insufficient I/O capacity
2. Log switches that are too frequent
3. A log buffer that is too small
4. Poor network transport efficiency in Data Guard at maximum availability or a higher protection mode

How to deal with log buffer space waits?

The causes of log buffer space were analysed above; for each of them, the rough approaches are:
1. Insufficient I/O capacity: handle it as discussed for log file sync, improve I/O capability as far as possible and put the redo logs on fast storage.
2. Overly frequent log switches: this usually produces both log buffer space and log file switch completion waits; if the redo logs are too small, frequent switches trigger both events, so the usual advice is to make the redo logs larger and place them on a high-performance file system.
3. A log buffer that is too small: this overlaps with case 2, since undersized redo log files and an undersized log buffer may coexist. Making the log buffer larger may in turn increase log file sync waits, so below we test how different log buffer sizes affect the log buffer space event (changing the parameter is sketched right after this list).
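
For reference, LOG_BUFFER is a static parameter, so switching between the sizes used in the tests below means editing the spfile and restarting the instance (a minimal sketch; 10485760 is the ~10 MB value of test 3):

-- LOG_BUFFER cannot be changed on the fly; set it in the spfile and restart
alter system set log_buffer = 10485760 scope = spfile;
-- then: shutdown immediate; startup;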

Test:

1. Build table T1 (roughly 300,000 rows after the doubling insert shown below)
2. Update T1 with a log buffer of roughly 2 MB
3. Update T1 with a log buffer of roughly 10 MB
Then compare the log buffer space waits observed in steps 2 and 3.

1. Build table T1

SQL> select count(*) from t1;

  COUNT(*)
----------
     150068

SQL> insert into t1 select * from t1;

150068 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from t1;

  COUNT(*)
----------
    300136

SQL> alter table t1 add num number;

Table altered.

SQL> alter table t1 modify num varchar2(200);

Table altered.

SQL> select num from t1 where rownum < 2;

NUM
--------------------------------------------------------------------------------
AAAMr8AAEAAAAA8AAA

2. Update T1 with a log buffer of roughly 2 MB

Update the roughly 300,000 rows of t1 while watching the waits:
SQL> show sga

Total System Global Area  318767104 bytes
Fixed Size                  1267260 bytes
Variable Size             100665796 bytes
Database Buffers          213909504 bytes
Redo Buffers                2924544 bytes
SQL> conn luda/luda
Connected.
SQL> @sess_stat
SP2-0158: unknown SET option "header"

EVENT                                                            TOTAL_WAITS TIME_WAITED
---------------------------------------------------------------- ----------- -----------
SQL*Net message from client                                               11        4439
db file sequential read                                                   55           0
SQL*Net message to client                                                 12           0

SQL> update t1 set object_name='LUDATOU',num=rowid;

300408 rows updated.

SQL> @sess_stat
SP2-0158: unknown SET option "header"

EVENT                                                            TOTAL_WAITS TIME_WAITED
---------------------------------------------------------------- ----------- -----------
SQL*Net message from client                                               14        8837
log buffer space                                                          11          37
log file switch completion                                                 8           7
db file scattered read                                                   544           6
events in waitclass Other                                                 31           1
SQL*Net message to client                                                 15           0
db file sequential read                                                   70           0
Observe the overall waits:

SQL> @snaper all,out 5 3 214

-- Session Snapper v2.02 by Tanel Poder ( http://www.tanelpoder.com )

----------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                               ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------
    214, LUDA      , STAT, opened cursors cumulative               ,           198,       39.6,
    214, LUDA      , STAT, user calls                              ,             2,         .4,
    214, LUDA      , STAT, recursive calls                         ,          2007,      401.4,
    214, LUDA      , STAT, recursive cpu usage                     ,            12,        2.4,
    214, LUDA      , STAT, session logical reads                   ,        390254,     78.05k,
    214, LUDA      , STAT, CPU used by this session                ,           198,       39.6,
    214, LUDA      , STAT, DB time                                 ,           261,       52.2,
    214, LUDA      , STAT, user I/O wait time                      ,             6,        1.2,
    214, LUDA      , STAT, session uga memory                      ,         65464,     13.09k,
    214, LUDA      , STAT, session uga memory max                  ,         65464,     13.09k,
    214, LUDA      , STAT, messages sent                           ,           365,         73,
    214, LUDA      , STAT, session pga memory                      ,         65536,     13.11k,
    214, LUDA      , STAT, session pga memory max                  ,         65536,     13.11k,
    214, LUDA      , STAT, enqueue requests                        ,           836,      167.2,
    214, LUDA      , STAT, enqueue conversions                     ,           113,       22.6,
    214, LUDA      , STAT, enqueue releases                        ,           834,      166.8,
    214, LUDA      , STAT, physical read total IO requests         ,           559,      111.8,
    214, LUDA      , STAT, physical read total multi block requests,           544,      108.8,
    214, LUDA      , STAT, physical read total bytes               ,      35151872,      7.03M,
    214, LUDA      , STAT, db block gets                           ,        383374,     76.67k,
    214, LUDA      , STAT, db block gets from cache                ,        383374,     76.67k,
    214, LUDA      , STAT, consistent gets                         ,          6880,      1.38k,
    214, LUDA      , STAT, consistent gets from cache              ,          6880,      1.38k,
    214, LUDA      , STAT, consistent gets - examination           ,          1695,        339,
    214, LUDA      , STAT, physical reads                          ,          4291,      858.2,
    214, LUDA      , STAT, physical reads cache                    ,          4291,      858.2,
    214, LUDA      , STAT, physical read IO requests               ,           559,      111.8,
    214, LUDA      , STAT, physical read bytes                     ,      35151872,      7.03M,
    214, LUDA      , STAT, db block changes                        ,        655501,     131.1k,
    214, LUDA      , STAT, change write time                       ,           105,         21,
    214, LUDA      , STAT, redo synch writes                       ,             1,         .2,
    214, LUDA      , STAT, free buffer requested                   ,         13313,      2.66k,
    214, LUDA      , STAT, switch current to new buffer            ,          4241,      848.2,
    214, LUDA      , STAT, write clones created in foreground      ,             3,         .6,
    214, LUDA      , STAT, physical reads cache prefetch           ,          3732,      746.4,
    214, LUDA      , STAT, shared hash latch upgrades - no wait    ,             5,          1,
    214, LUDA      , STAT, calls to kcmgcs                         ,           132,       26.4,
    214, LUDA      , STAT, calls to kcmgas                         ,          9027,      1.81k,
    214, LUDA      , STAT, calls to get snapshot scn: kcmgss       ,           786,      157.2,
    214, LUDA      , STAT, redo entries                            ,        326611,     65.32k,
    214, LUDA      , STAT, redo size                               ,      96942748,     19.39M,
    214, LUDA      , STAT, redo buffer allocation retries          ,            15,          3,
    214, LUDA      , STAT, redo log space requests                 ,             8,        1.6,
    214, LUDA      , STAT, redo log space wait time                ,             7,        1.4,
    214, LUDA      , STAT, redo ordering marks                     ,          4744,      948.8,
    214, LUDA      , STAT, redo subscn max counts                  ,          6443,      1.29k,
    214, LUDA      , STAT, undo change vector size                 ,      37734184,      7.55M,
    214, LUDA      , STAT, no work - consistent read gets          ,          2974,      594.8,
    214, LUDA      , STAT, cleanouts only - consistent read gets   ,          1661,      332.2,
    214, LUDA      , STAT, immediate (CR) block cleanout applicatio,          1661,      332.2,
    214, LUDA      , STAT, deferred (CURRENT) block cleanout applic,          2582,      516.4,
    214, LUDA      , STAT, commit txn count during cleanout        ,          1661,      332.2,
    214, LUDA      , STAT, cleanout - number of ktugct calls       ,          1661,      332.2,
    214, LUDA      , STAT, IMU Flushes                             ,             1,         .2,
    214, LUDA      , STAT, table scans (short tables)              ,           173,       34.6,
    214, LUDA      , STAT, table scans (long tables)               ,             1,         .2,
    214, LUDA      , STAT, table scan rows gotten                  ,        325791,     65.16k,
    214, LUDA      , STAT, table scan blocks gotten                ,          4607,      921.4,
    214, LUDA      , STAT, table fetch by rowid                    ,             9,        1.8,
    214, LUDA      , STAT, cluster key scans                       ,             8,        1.6,
    214, LUDA      , STAT, cluster key scan block gets             ,             8,        1.6,
    214, LUDA      , STAT, rows fetched via callback               ,             1,         .2,
    214, LUDA      , STAT, index crx upgrade (positioned)          ,             2,         .4,
    214, LUDA      , STAT, index fetch by key                      ,            11,        2.2,
    214, LUDA      , STAT, index scans kdiixs1                     ,            14,        2.8,
    214, LUDA      , STAT, session cursor cache hits               ,           177,       35.4,
    214, LUDA      , STAT, session cursor cache count              ,             1,         .2,
    214, LUDA      , STAT, cursor authentications                  ,             2,         .4,
    214, LUDA      , STAT, buffer is pinned count                  ,        296391,     59.28k,
    214, LUDA      , STAT, buffer is not pinned count              ,            26,        5.2,
    214, LUDA      , STAT, workarea executions - optimal           ,             5,          1,
    214, LUDA      , STAT, parse time cpu                          ,             1,         .2,
    214, LUDA      , STAT, parse time elapsed                      ,             1,         .2,
    214, LUDA      , STAT, parse count (total)                     ,           188,       37.6,
    214, LUDA      , STAT, parse count (hard)                      ,             6,        1.2,
    214, LUDA      , STAT, execute count                           ,           198,       39.6,
    214, LUDA      , STAT, bytes sent via SQL*Net to client        ,           147,       29.4,
    214, LUDA      , STAT, bytes received via SQL*Net from client  ,           208,       41.6,
    214, LUDA      , STAT, SQL*Net roundtrips to/from client       ,             1,         .2,
    214, LUDA      , STAT, sorts (memory)                          ,             9,        1.8,
    214, LUDA      , STAT, sorts (rows)                            ,            33,        6.6,
    214, LUDA      , TIME, hard parse elapsed time                 ,          7721,     1.54ms,      .2%, |          |
    214, LUDA      , TIME, repeated bind elapsed time              ,            23,      4.6us,      .0%, |          |
    214, LUDA      , TIME, parse time elapsed                      ,         17186,     3.44ms,      .3%, |          |
    214, LUDA      , TIME, DB CPU                                  ,       1980114,   396.02ms,    39.6%, |@@@@      |
    214, LUDA      , TIME, sql execute elapsed time                ,       2544856,   508.97ms,    50.9%, |@@@@@@    |
    214, LUDA      , TIME, DB time                                 ,       2547798,   509.56ms,    51.0%, |@@@@@@    |
    214, LUDA      , WAIT, log buffer space                        ,        373101,    74.62ms,     7.5%, |@         |
    214, LUDA      , WAIT, log file switch completion              ,         73175,    14.64ms,     1.5%, |@         |
    214, LUDA      , WAIT, db file sequential read                 ,          1750,      350us,      .0%, |          |
    214, LUDA      , WAIT, db file scattered read                  ,         59996,       12ms,     1.2%, |@         |
    214, LUDA      , WAIT, SQL*Net message to client               ,             4,       .8us,      .0%, |          |
    214, LUDA      , WAIT, SQL*Net message from client             ,       1448235,   289.65ms,    29.0%, |@@@       |
--  End of snap 1, end=2014-07-16 16:34:54, seconds=5

This test shows that with a log buffer of roughly 2 MB, the update of t1 produced log buffer space waits of about 74.62 ms per second of the snap (7.5% of the time), and the session statistics recorded 11 log buffer space waits.
On to the second part of the test.

3. Update T1 with a log buffer of roughly 10 MB

SQL> conn luda/luda
Connected.
SQL>  update t1 set object_name='LUDATOU',num=rowid;

300408 rows updated.

SQL> @sess_stat
SP2-0158: unknown SET option "header"

EVENT                                                            TOTAL_WAITS TIME_WAITED
---------------------------------------------------------------- ----------- -----------
SQL*Net message from client                                               12        2951
log file switch (checkpoint incomplete)                                    4         112
log file switch completion                                                 5          45
events in waitclass Other                                                  4          11
db file scattered read                                                   544           5
Data file init write                                                      21           4
log buffer space                                                           2           2
control file parallel write                                                9           1
SQL*Net message to client                                                 13           0
db file parallel read                                                      1           0
db file single write                                                       3           0
control file sequential read                                              60           0
db file sequential read                                                   71           0

13 rows selected.

At the same time, observe the overall waits:
SQL>  @snaper all,out 5 3 214


-- Session Snapper v2.02 by Tanel Poder ( http://www.tanelpoder.com )

----------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                               ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------
    214, LUDA      , STAT, opened cursors cumulative               ,            80,         16,
    214, LUDA      , STAT, opened cursors current                  ,             1,         .2,
    214, LUDA      , STAT, user calls                              ,             1,         .2,
    214, LUDA      , STAT, recursive calls                         ,           955,        191,
    214, LUDA      , STAT, recursive cpu usage                     ,             9,        1.8,
    214, LUDA      , STAT, session logical reads                   ,        344227,     68.85k,
    214, LUDA      , STAT, CPU used by this session                ,           196,       39.2,
    214, LUDA      , STAT, user I/O wait time                      ,             7,        1.4,
    214, LUDA      , STAT, messages sent                           ,           107,       21.4,
    214, LUDA      , STAT, session pga memory                      ,       4587520,     917.5k,
    214, LUDA      , STAT, session pga memory max                  ,       4587520,     917.5k,
    214, LUDA      , STAT, enqueue waits                           ,             1,         .2,
    214, LUDA      , STAT, enqueue requests                        ,           220,         44,
    214, LUDA      , STAT, enqueue conversions                     ,            26,        5.2,
    214, LUDA      , STAT, enqueue releases                        ,           218,       43.6,
    214, LUDA      , STAT, physical read total IO requests         ,           575,        115,
    214, LUDA      , STAT, physical read total multi block requests,           513,      102.6,
    214, LUDA      , STAT, physical read total bytes               ,      33865728,      6.77M,
    214, LUDA      , STAT, physical write total IO requests        ,            32,        6.4,
    214, LUDA      , STAT, physical write total multi block request,            10,          2,
    214, LUDA      , STAT, physical write total bytes              ,      10797056,      2.16M,
    214, LUDA      , STAT, db block gets                           ,        335817,     67.16k,
    214, LUDA      , STAT, db block gets from cache                ,        335817,     67.16k,
    214, LUDA      , STAT, consistent gets                         ,          8410,      1.68k,
    214, LUDA      , STAT, consistent gets from cache              ,          8410,      1.68k,
    214, LUDA      , STAT, consistent gets - examination           ,          4064,      812.8,
    214, LUDA      , STAT, physical reads                          ,          4050,        810,
    214, LUDA      , STAT, physical reads cache                    ,          4050,        810,
    214, LUDA      , STAT, physical read IO requests               ,           531,      106.2,
    214, LUDA      , STAT, physical read bytes                     ,      33177600,      6.64M,
    214, LUDA      , STAT, db block changes                        ,        611828,    122.37k,
    214, LUDA      , STAT, prefetch clients - default              ,             1,         .2,
    214, LUDA      , STAT, change write time                       ,            49,        9.8,
    214, LUDA      , STAT, free buffer requested                   ,         12576,      2.52k,
    214, LUDA      , STAT, switch current to new buffer            ,          4030,        806,
    214, LUDA      , STAT, physical reads cache prefetch           ,          3522,      704.4,
    214, LUDA      , STAT, shared hash latch upgrades - no wait    ,             3,         .6,
    214, LUDA      , STAT, calls to kcmgcs                         ,             5,          1,
    214, LUDA      , STAT, calls to kcmgas                         ,          8532,      1.71k,
    214, LUDA      , STAT, calls to get snapshot scn: kcmgss       ,           273,       54.6,
    214, LUDA      , STAT, redo entries                            ,        305759,     61.15k,
    214, LUDA      , STAT, redo size                               ,      91112824,     18.22M,
    214, LUDA      , STAT, redo buffer allocation retries          ,             7,        1.4,
    214, LUDA      , STAT, redo log space requests                 ,             6,        1.2,
    214, LUDA      , STAT, redo log space wait time                ,            59,       11.8,
    214, LUDA      , STAT, redo ordering marks                     ,          4489,      897.8,
    214, LUDA      , STAT, redo subscn max counts                  ,          8528,      1.71k,
    214, LUDA      , STAT, undo change vector size                 ,      35556100,      7.11M,
    214, LUDA      , STAT, no work - consistent read gets          ,           133,       26.6,
    214, LUDA      , STAT, cleanouts only - consistent read gets   ,          4030,        806,
    214, LUDA      , STAT, immediate (CR) block cleanout applicatio,          4030,        806,
    214, LUDA      , STAT, commit txn count during cleanout        ,          4030,        806,
    214, LUDA      , STAT, cleanout - number of ktugct calls       ,          4030,        806,
    214, LUDA      , STAT, auto extends on undo tablespace         ,             2,         .4,
    214, LUDA      , STAT, IMU pool not allocated                  ,             1,         .2,
    214, LUDA      , STAT, IMU- failed to get a private strand     ,             1,         .2,
    214, LUDA      , STAT, table scans (short tables)              ,            55,         11,
    214, LUDA      , STAT, table scans (long tables)               ,             1,         .2,
    214, LUDA      , STAT, table scan rows gotten                  ,        313370,     62.67k,
    214, LUDA      , STAT, table scan blocks gotten                ,          4135,        827,
    214, LUDA      , STAT, table fetch by rowid                    ,             9,        1.8,
    214, LUDA      , STAT, cluster key scans                       ,             8,        1.6,
    214, LUDA      , STAT, cluster key scan block gets             ,             8,        1.6,
    214, LUDA      , STAT, rows fetched via callback               ,             1,         .2,
    214, LUDA      , STAT, index crx upgrade (positioned)          ,             2,         .4,
    214, LUDA      , STAT, index fetch by key                      ,            11,        2.2,
    214, LUDA      , STAT, index scans kdiixs1                     ,            14,        2.8,
    214, LUDA      , STAT, session cursor cache hits               ,            59,       11.8,
    214, LUDA      , STAT, cursor authentications                  ,             2,         .4,
    214, LUDA      , STAT, buffer is pinned count                  ,        285420,     57.08k,
    214, LUDA      , STAT, buffer is not pinned count              ,            26,        5.2,
    214, LUDA      , STAT, workarea executions - optimal           ,             5,          1,
    214, LUDA      , STAT, parse time cpu                          ,             1,         .2,
    214, LUDA      , STAT, parse count (total)                     ,            70,         14,
    214, LUDA      , STAT, parse count (hard)                      ,             6,        1.2,
    214, LUDA      , STAT, execute count                           ,            80,         16,
    214, LUDA      , STAT, bytes received via SQL*Net from client  ,           194,       38.8,
    214, LUDA      , STAT, sorts (memory)                          ,             9,        1.8,
    214, LUDA      , STAT, sorts (rows)                            ,            33,        6.6,
    214, LUDA      , TIME, hard parse elapsed time                 ,          7737,     1.55ms,      .2%, |          |
    214, LUDA      , TIME, repeated bind elapsed time              ,            46,      9.2us,      .0%, |          |
    214, LUDA      , TIME, parse time elapsed                      ,         11032,     2.21ms,      .2%, |          |
    214, LUDA      , TIME, DB CPU                                  ,       1620000,      324ms,    32.4%, |@@@@      |
    214, LUDA      , TIME, sql execute elapsed time                ,       2440182,   488.04ms,    48.8%, |@@@@@     |
    214, LUDA      , TIME, DB time                                 ,       2442540,   488.51ms,    48.9%, |@@@@@     |
    214, LUDA      , WAIT, Data file init write                    ,         25778,     5.16ms,      .5%, |@         |
    214, LUDA      , WAIT, control file sequential read            ,           447,     89.4us,      .0%, |          |
    214, LUDA      , WAIT, control file parallel write             ,          6142,     1.23ms,      .1%, |          |
    214, LUDA      , WAIT, log buffer space                        ,         19503,      3.9ms,      .4%, |          |
    214, LUDA      , WAIT, log file switch (checkpoint incomplete) ,        127979,     25.6ms,     2.6%, |@         |
    214, LUDA      , WAIT, log file switch completion              ,        446061,    89.21ms,     8.9%, |@         |
    214, LUDA      , WAIT, db file sequential read                 ,           419,     83.8us,      .0%, |          |
    214, LUDA      , WAIT, db file scattered read                  ,         49967,     9.99ms,     1.0%, |@         |
    214, LUDA      , WAIT, db file single write                    ,           998,    199.6us,      .0%, |          |
    214, LUDA      , WAIT, SQL*Net message from client             ,       2462840,   492.57ms,    49.3%, |@@@@@     |

Test 3 shows that with a log buffer of roughly 10 MB, log buffer space dropped to about 3.9 ms per second of the snap (0.4% of the time), with only 2 waits in total.

Comparing tests 2 and 3: with a larger log buffer the number of log buffer space waits drops, but the log file switch completion waits grow, so judged against the overall wait profile, simply enlarging the log buffer is not an absolute win.
Another way to reduce log buffer space waits is to generate less redo in the first place, for example with NOLOGGING operations or direct-path writes.
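
A minimal sketch of the NOLOGGING / direct-path idea above. The staging table t1_stage is hypothetical; this only reduces redo for the table data itself (it does not help ordinary UPDATEs), and it is overridden by FORCE LOGGING, which most Data Guard environments enable:

alter table t1 nologging;
-- a direct-path insert into a NOLOGGING table generates only minimal redo
insert /*+ append */ into t1 select * from t1_stage;
commit;
alter table t1 logging;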

4. Impact of redo transport lag in Data Guard

In this case, refer to the network tuning discussed earlier for gc block lost to keep redo shipping efficient.
Another option is to compress the redo that is shipped.
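
If the environment is 11g or later with the Advanced Compression option, redo transport compression can be enabled per destination (a sketch; the service name is a placeholder):

alter system set log_archive_dest_2 =
  'SERVICE=standby_db ASYNC NOAFFIRM COMPRESSION=ENABLE' scope = both;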