Troubleshooting 'latch: cache buffers chains' Wait Contention

Recently I have run into LCBC (latch: cache buffers chains) contention on several projects, and in almost every case it turned out to be driven by abnormal CPU load. I am sharing the official diagnostic article on this topic first; some extreme-scenario cases will be described later.

If you have high contention, you need to look at the statements that perform the most buffer gets and then look at their access paths to determine whether these are performing as efficiently as you would like.

Typical solutions are:

  • Look for SQL that accesses the blocks in question and determine if the repeated reads are necessary. This may be within a single session or across multiple sessions.
  • Check for suboptimal SQL (this is the most common cause of these waits) – look at the execution plan for the SQL being run and try to reduce the gets per execution. This minimizes the number of blocks being accessed and therefore reduces the chance of multiple sessions contending for the same block. A quick way to list candidate statements is sketched after this list.
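For example, the shared pool can be ranked by buffer gets to surface candidates. A minimal sketch against v$sqlarea (on a busy system you may want to restrict it further, e.g. by MODULE or PARSING_SCHEMA_NAME):

-- Top 10 statements by total buffer gets, with gets per execution
SELECT *
  FROM (SELECT sql_id,
               executions,
               buffer_gets,
               ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
          FROM v$sqlarea
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;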

Further information can be found in:

Note 390374.1 - Oracle Performance Diagnostic Guide (OPDG)
Note 163424.1 - How To Identify a Hot Block Within The Database Buffer Cache
Note 62172.1 - Understanding and Tuning Buffer Cache and DBWR

 

Worked example:

Problem: The database is slow and ‘latch: cache buffers chains’ ranks high among the top waits in the AWR report.

Start with Top 5 Waits:

Top 5 Timed Events                                      Avg    %Total
~~~~~~~~~~~~~~~~~~                                      wait   Call
Event                          Waits        Time (s)    (ms)   Time   Wait Class
------------------------------ ------------ ----------- ------ ------ -----------
latch: cache buffers chains          74,642      35,421    475    6.1 Concurrency
CPU time                                         11,422           2.0
log file sync                        34,890       1,748     50    0.3 Commit
latch free                            2,279         774    340    0.1 Other
db file parallel write               18,818         768     41    0.1 System I/O
          -------------------------------------------------------------
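Between AWR snapshots, the same contention can be confirmed live. A simple sketch against v$session that counts what non-idle sessions are waiting on right now:

-- Current non-idle waits, busiest events first
SELECT event, COUNT(*) AS sessions
  FROM v$session
 WHERE state = 'WAITING'
   AND wait_class <> 'Idle'
 GROUP BY event
 ORDER BY sessions DESC;

If ‘latch: cache buffers chains’ dominates this output as well, the AWR picture reflects the current state of the system.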

A high number of ‘latch: cache buffers chains’ waits indicates that something is likely reading a lot of buffers. Typically, the SQL performing the most buffer gets is the one contending:

SQL ordered by Gets         DB/Inst:  Snaps: 1-2
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets:   265,126,882
-> Captured SQL account for   99.8% of Total
                            Gets                CPU      Elapsed
Buffer Gets    Executions   per Exec     %Total Time (s) Time (s)  SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
   256,763,367       19,052     13,477.0   96.8 ######## ######### a9nchgksux6x2
Module: JDBC Thin Client
SELECT * FROM SALES ….
     1,974,516      987,056          2.0    0.7    80.31    110.94 ct6xwvwg3w0bv
SELECT COUNT(*) FROM ORDERS ….

The query with SQL_ID a9nchgksux6x2 is reading over 100x more buffers than the second most ‘hungry’ statement, and its CPU and Elapsed times are off the scale of the report (shown as ########). This is a prime candidate for the cause of the CBC latch issues.
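Once a suspect sql_id is identified, the plan(s) captured in AWR can be displayed with DBMS_XPLAN.DISPLAY_AWR (the sql_id below is the one from this report):

-- Execution plans captured in AWR for the suspect statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('a9nchgksux6x2'));

From the plan you can check whether the high gets per execution come from an expected access path (e.g. a large full table scan) or from a plan change.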

You can also link this information to the Top Segments by Logical Reads:

Segments by Logical Reads
-> Total Logical Reads:     265,126,882
-> Captured Segments account for   98.5% of Total
           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
DMSUSER    USERS      SALES                           TABLE  212,206,208   80.04
DMSUSER    USERS      SALES_PK                        INDEX   44,369,264   16.74
DMSUSER    USERS      SYS_C0012345                    INDEX    1,982,592     .75
DMSUSER    USERS      ORDERS_PK                       INDEX      842,304     .32
DMSUSER    USERS      INVOICES                        TABLE      147,488     .06
          -------------------------------------------------------------

The top object read is SALES, and the top SQL is a SELECT from SALES, which correlates with this statement being the likely cause of the problem.

This SQL should be investigated to see whether the Gets per Exec figure or the number of executions per hour has changed in any way (comparison with previous reports would show this). If so, the reason for the change should be investigated and resolved.
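One way to make that comparison without pulling individual reports is to trend the statement across AWR snapshots. A sketch against dba_hist_sqlstat (the AWR history views require the Diagnostics Pack license; the sql_id is again the one from this report):

-- Per-snapshot history: has gets-per-exec or the execution rate changed?
SELECT sn.snap_id,
       sn.end_interval_time,
       ss.executions_delta AS execs,
       ROUND(ss.buffer_gets_delta /
             NULLIF(ss.executions_delta, 0)) AS gets_per_exec
  FROM dba_hist_sqlstat  ss
  JOIN dba_hist_snapshot sn
    ON sn.snap_id         = ss.snap_id
   AND sn.dbid            = ss.dbid
   AND sn.instance_number = ss.instance_number
 WHERE ss.sql_id = 'a9nchgksux6x2'
 ORDER BY sn.snap_id;

A sudden jump in gets_per_exec points to a plan change; a jump in execs points to a workload change.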

In this case the statement is reading more than 10,000 buffers per execution and executing more than 15,000 times, so both figures may need to be addressed to get better performance.

Note: This is a simple example where there is a high likelihood that the ‘biggest’ query is the culprit, but it is not always the ‘Top’ SQL that causes the problem. For example, contention may occur on a statement with a smaller total number of gets if it is executed only a few times, so it may not appear as the top SQL. It may still make millions of buffer gets per execution, but it will appear lower in the list because other SQL statements, executed many more times, accumulate more total gets without contending.

So, if the first SQL is not the culprit, look at the others.
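If no single statement stands out, you can also work from the latch side, as described in Note 163424.1: find the ‘cache buffers chains’ child latch with the most sleeps, then map the buffers it protects back to segments. A sketch of that approach (x$bh requires SYS access, and &latch_addr is the ADDR of the hot child latch returned by the first query):

-- Step 1: the busiest 'cache buffers chains' child latches
SELECT *
  FROM (SELECT addr, child#, gets, sleeps
          FROM v$latch_children
         WHERE name = 'cache buffers chains'
         ORDER BY sleeps DESC)
 WHERE ROWNUM <= 10;

-- Step 2: segments with buffers on that child latch's chain,
-- hottest blocks (highest touch count) first
SELECT e.owner || '.' || e.segment_name AS segment_name,
       b.dbablk,
       b.tch
  FROM x$bh b, dba_extents e
 WHERE b.hladdr = '&latch_addr'
   AND e.file_id = b.file#
   AND b.dbablk BETWEEN e.block_id AND e.block_id + e.blocks - 1
 ORDER BY b.tch DESC;

The segments that surface here should tie back to the Segments by Logical Reads section of the AWR report, as SALES and SALES_PK did above.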