
Using strace to Hang Oracle: A Test

There are many reasons a database can hang, and I have seen plenty of cases over the years. Most come down to processes going comatose after memory thrashing, or to bugs, or to things like a full archive destination or a network stall under Data Guard's maximum protection mode. The case described here is an obscure one that you will almost never run into. The test uses version 11.2.0.2 on a Linux 2.6.18 x64 kernel; the same test has no effect on 10g or 11gR1, so this appears to be a bug specific to 11.2.0.2.

Linux has a command called strace, commonly used to trace the system calls a process makes and the signals it receives.
Its options are as follows:

-c Count the time, calls, and errors for each system call and report a summary.
-d Print strace debugging output to standard error.
-f Trace child processes created by fork.
-ff If -o filename is given, write each traced process's output to filename.pid, where pid is that process's ID.
-F Attempt to trace vfork calls as well; with -f alone, vfork is not traced.
-h Print brief help and exit.
-i Print the instruction pointer at the time of each system call.
-q Suppress the messages about attaching and detaching.
-r Print a relative timestamp for each system call.
-t Prefix each line of the trace with the time of day.
-tt Prefix each line with the time of day including microseconds.
-ttt Prefix each line with the time as seconds since the epoch, with microseconds.
-T Show the time spent in each system call.
-v Print all system calls verbosely; calls involving environment, status, and I/O arguments are abbreviated by default because they occur so frequently.
-V Print the version of strace.
-x Print non-ASCII strings in hexadecimal.
-xx Print all strings in hexadecimal.
-a column
Align the return values in the given output column. The default is 40.
-e expr
A qualifying expression that controls what is traced. The format is:
[qualifier=][!]value1[,value2]...
qualifier is one of trace, abbrev, verbose, raw, signal, read, or write; value is a symbol or number that qualifies it. The default qualifier is trace, and the exclamation mark negates the set. For example:
-eopen is equivalent to -e trace=open and traces only open calls, while -e trace=!open traces every call except open. The two special values all and none are also accepted.
Note that some shells use ! for history expansion, so you may need to escape it with a backslash.
-e trace=set
Trace only the listed system calls. For example, -e trace=open,close,read,write traces only those four calls. The default is trace=all.
-e trace=file
Trace only system calls that take a file name as an argument.
-e trace=process
Trace only process-management system calls.
-e trace=network
Trace all network-related system calls.
-e trace=signal
Trace all signal-related system calls.
-e trace=ipc
Trace all system calls related to inter-process communication.
-e abbrev=set
Abbreviate the output for the listed system calls. -v is equivalent to abbrev=none; the default is abbrev=all.
-e raw=set
Print the arguments of the listed system calls in raw, hexadecimal form.
-e signal=set
Trace only the listed signals. The default is all. For example, signal=!SIGIO (or signal=!io) means SIGIO is not traced.
-e read=set
Dump the data read from the listed file descriptors. For example:
-e read=3,5
-e write=set
Dump the data written to the listed file descriptors.
-o filename
Write the strace output to filename.
-p pid
Attach to and trace the process with process ID pid.
-s strsize
Maximum length of strings to print; the default is 32. File names are always printed in full.
-u username
Run the traced command with the UID and GID of username.
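
As an illustration of how these options combine, here is a typical invocation (a sketch; the pid and output path are placeholders):

# attach to pid 12345 and its children, timestamp each call, show call
# durations, record only file-related system calls, and log to a file
strace -f -tt -T -e trace=file -o /tmp/trace.out -p 12345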

If you have done any development on Linux you will know this command well. But enough chit-chat; back to the topic. Today we mainly use the -p and -o options to trace an Oracle process's system calls.
First, I attach strace to the lgwr process:

[root@ludatou ~]# ps -ef | grep lgwr
root     10136  9578  0 06:30 pts/1    00:00:00 grep lgwr
ora10g   32496     1  0 Apr15 ?        00:00:04 ora_lgwr_lu11r2
[root@ludatou ~]# export ORACLE_SID=lu11r2
[root@ludatou ~]# strace -p $(pgrep -fx ora_lgwr_$ORACLE_SID) -o /tmp/l.out -T &
[1] 10257
[root@ludatou ~]# Process 32496 attached - interrupt to quit

At this point the trace log l.out looks roughly like this (my database is idle):

gettimeofday({1397602111, 401986}, NULL) = 0 <0.000032>
gettimeofday({1397602111, 402111}, NULL) = 0 <0.000032>
gettimeofday({1397602111, 402233}, NULL) = 0 <0.000032>
gettimeofday({1397602111, 402356}, NULL) = 0 <0.000031>
gettimeofday({1397602111, 402510}, NULL) = 0 <0.000032>
gettimeofday({1397602111, 402537}, NULL) = 0 <0.000011>
gettimeofday({1397602111, 402551}, NULL) = 0 <0.000007>
gettimeofday({1397602111, 402560}, NULL) = 0 <0.000005>
times(NULL)                             = 436324232 <0.000002>
gettimeofday({1397602111, 402570}, NULL) = 0 <0.000003>
gettimeofday({1397602111, 402573}, NULL) = 0 <0.000002>
semtimedop(131073, 0xbfa12088, 1, {3, 0}

The trailing semtimedop call is lgwr sleeping on its post/wait semaphore with a 3-second timeout, its usual idle wait. At this point the database is working normally; reads, writes, and commits all go through:

SQL> conn luda/luda
Connected.
SQL> create table t1 as select * from dba_objects;

Table created.

SQL> alter table t1 add ludatou varchar2(200);

Table altered.

SQL> update t1 set ludatou=250;

50068 rows updated.

SQL> commit;

Commit complete.
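
While the update and commit run, the trace file should fill with lgwr's I/O activity. One way to verify this, sketched below (the exact calls seen depend on the platform and on whether asynchronous I/O is enabled):

# look for write-family system calls issued by lgwr during the commit
grep -E 'pwrite|io_submit|io_getevents' /tmp/l.out | tail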

Next I kill the strace process that is tracing lgwr, then check whether the database still runs normally:

1. Kill the tracing process

[root@ludatou ~]# kill %1
[root@ludatou ~]# Process 32496 detached
-- strace reports that it has detached from process 32496 (lgwr)

2. Run the same statements at the database level:
SQL> update t1 set ludatou=250;

50068 rows updated.

SQL> commit;
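
This commit never returns. From a second session that was connected before lgwr stalled, a query along the following lines can confirm what the instance is waiting on (a minimal sketch; the columns chosen are illustrative):

-- show non-idle waits across the instance
SELECT event, state, seconds_in_wait
FROM   v$session
WHERE  wait_class <> 'Idle';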

Sure enough, the commit hangs, and the database-wide wait event is log file sync. Checking the state of the lgwr process at this point shows the following:

[ora11g@ludatou ~]$ ps $(pgrep -fx ora_lgwr_$ORACLE_SID)
  PID TTY      STAT   TIME COMMAND
32496 ?        Ts     0:00 ora_lgwr_lu11r2

Process 32496 is in state Ts: T means the process is stopped (or under the control of another, tracing process), and s marks it as a session leader. Yet I have already stopped strace. Let's wake the process up:

[ora11g@ludatou ~]$ kill -SIGCONT 32496
[ora11g@ludatou ~]$ ps $(pgrep -fx ora_lgwr_$ORACLE_SID)
  PID TTY      STAT   TIME COMMAND
32496 ?        Ss     0:00 ora_lgwr_lu11r2

The process is now back in state Ss, and the database recovers immediately. I am not sure whether to call this a database bug or a Linux bug; personally I lean toward Linux, since I had already quit strace and yet the tracer had in effect left the lgwr process stuck in the stopped state. I have only run this test on 11.2.0.2, 11.1.0.7, and 10.2.0.4, and so far only 11.2.0.2 shows this behavior.
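
The state transition itself is ordinary job control and can be reproduced on any process; a minimal sketch with a throwaway process (nothing Oracle-specific assumed):

# put a test process into the stopped state and bring it back
sleep 600 &                    # throwaway background process
kill -SIGSTOP $!               # STAT now shows T (stopped), like lgwr above
ps -o pid,stat,comm -p $!
kill -SIGCONT $!               # resume; STAT returns to S (sleeping)
ps -o pid,stat,comm -p $!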

Troubleshooting 'latch: cache buffers chains' Wait Contention

Recently I have hit LCBC (latch: cache buffers chains) contention on several projects, and in almost every case it came down to abnormal CPU load. I am sharing Oracle's official diagnostic note on the topic first; later posts will describe some extreme-case examples.

If you have high contention, you need to look at the statements that perform the most buffer gets and then look at their access paths to determine whether these are performing as efficiently as you would like.

Typical solutions are:

  • Look for SQL that accesses the blocks in question and determine if the repeated reads are necessary. This may be within a single session or across multiple sessions.
  • Check for suboptimal SQL (this is the most common cause of the event) – look at the execution plan for the SQL being run and try to reduce the gets per execution, which will minimize the number of blocks being accessed and therefore reduce the chances of multiple sessions contending for the same block. A query sketch for finding the top buffer-getters follows this list.
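
A starting point for both steps is to rank the cursors in the shared pool by buffer gets; a minimal sketch (11g-style syntax, run from a DBA account):

-- top 10 statements by buffer gets, with gets per execution
SELECT *
FROM  (SELECT sql_id, executions, buffer_gets,
              ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE  rownum <= 10;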

Further information can be found in:

Note 390374.1 Oracle Performance Diagnostic Guide (OPDG)
Note 163424.1 How To Identify a Hot Block Within The Database Buffer Cache
Note 62172.1 Understanding and Tuning Buffer Cache and DBWR
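
When it is not obvious which SQL is responsible, the problem can also be approached from the latch side, along the lines of Note 163424.1: find the busiest child latch, then look at the buffers it protects. A rough sketch (x$bh is only visible to SYS, and the touch-count column tch is version-dependent):

-- hottest 'cache buffers chains' child latches
SELECT *
FROM  (SELECT addr, sleeps
       FROM   v$latch_children
       WHERE  name = 'cache buffers chains'
       ORDER  BY sleeps DESC)
WHERE  rownum <= 5;

-- buffers protected by one such child latch address
SELECT hladdr, obj, dbarfil, dbablk, tch
FROM   x$bh
WHERE  hladdr = '&child_latch_addr'
ORDER  BY tch DESC;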

 

Worked example:

Problem: Database is slow and ‘latch: cache buffers chains’ is high in the waits in AWR.

Start with Top 5 Waits:

Top 5 Timed Events                                          Avg    %Total
~~~~~~~~~~~~~~~~~~                                         wait      Call
Event                              Waits    Time (s)       (ms)      Time Wait Class
------------------------------ --------- ----------- ---------- --------- -----------
latch: cache buffers chains        74,642      35,421        475       6.1 Concurrency
CPU time                                       11,422                  2.0
log file sync                      34,890       1,748         50       0.3 Commit
latch free                          2,279         774        340       0.1 Other
db file parallel write             18,818         768         41       0.1 System I/O
-------------------------------------------------------------

High cache buffers chains latch waits indicate that something is likely reading a lot of buffers. Typically the SQL with the most gets is the one contending:

SQL ordered by Gets         DB/Inst:  Snaps: 1-2
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total Buffer Gets:   265,126,882
-> Captured SQL account for   99.8% of Total

                                   Gets                CPU    Elapsed
  Buffer Gets   Executions     per Exec %Total    Time (s)   Time (s) SQL Id
------------- ------------ ------------ ------ ----------- ---------- -------------
  256,763,367       19,052     13,477.0   96.8    ########  ######### a9nchgksux6x2
Module: JDBC Thin Client
SELECT * FROM SALES ....
    1,974,516      987,056          2.0    0.7       80.31     110.94 ct6xwvwg3w0bv
SELECT COUNT(*) FROM ORDERS ....

The query with SQL_ID a9nchgksux6x2 reads 100x more buffers than the second most ‘hungry’ statement, and its CPU and Elapsed times are off the ‘scale’ of the report (the ######## columns have overflowed). This is the prime candidate for the cause of the CBC latch issues.
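
The next step is to look at its access path. If the cursor is still in the shared pool, something like the following shows the plan (a sketch; DBMS_XPLAN.DISPLAY_CURSOR takes the sql_id, child number, and a format string):

-- display the execution plan of the suspect cursor
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('a9nchgksux6x2', NULL, 'TYPICAL'));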

You can also link this information to the Top Segments by Logical Reads:

Segments by Logical Reads
-> Total Logical Reads:     265,126,882
-> Captured Segments account for   98.5% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
DMSUSER    USERS      SALES                           TABLE  212,206,208   80.04
DMSUSER    USERS      SALES_PK                        INDEX   44,369,264   16.74
DMSUSER    USERS      SYS_C0012345                    INDEX    1,982,592     .75
DMSUSER    USERS      ORDERS_PK                       INDEX      842,304     .32
DMSUSER    USERS      INVOICES                        TABLE      147,488     .06
           -------------------------------------------------------------

The top segment by logical reads is SALES, and the top SQL is a select from SALES, which corroborates the suspicion that this is the problem statement.

This SQL should be investigated to see whether its Gets per Exec or its Executions per hour has changed in any way (a comparison against previous reports would show this), and if so, the reason for that change should be investigated and resolved.
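
AWR keeps this history, so the comparison can also be done directly from the repository; a sketch (assumes the Diagnostics Pack and that the statement was captured in the snapshots):

-- per-snapshot executions and gets per execution for the suspect SQL
SELECT snap_id,
       executions_delta,
       buffer_gets_delta,
       ROUND(buffer_gets_delta / NULLIF(executions_delta, 0)) AS gets_per_exec
FROM   dba_hist_sqlstat
WHERE  sql_id = 'a9nchgksux6x2'
ORDER  BY snap_id;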

In this case the statement is reading > 10,000 buffers per execution and executing > 15,000 times, so both of these may need to be adjusted to get better performance.

Note: this is a simple example in which there is a high likelihood that the ‘biggest’ query is the culprit, but it is not always the top SQL that causes the problem. For example, contention may occur on a statement with a smaller total if it is executed only a small number of times, so that it does not appear as the top SQL. It may still make millions of buffer gets, but it will appear lower in the list because other statements execute many more times without contending.

So, if the first SQL is not the culprit then look at the others.