poor performance in db file sequential read from buffer cache when enabling foreign key constraint


Question and Answer

Tom Kyte

Thanks for the question, Chris.

Asked: February 12, 2013 - 6:27 pm UTC

Last updated: February 13, 2013 - 9:01 am UTC

Version: 11.2.0

Viewed 10K+ times

You Asked

Tom,

I'm seeing unreasonably bad performance when enabling a foreign key constraint (it takes more than 24 hours for a table with 153 million rows) and can't determine the reason why. I'm importing a number of tables via sqlldr. After one of the larger tables is loaded, a foreign key constraint is enabled. This constraint references the unique primary key of a "parent" table. While enabling the foreign key constraint, the index is accessed via a 'db file sequential read'. Oracle is spending the majority of its I/O reading from the Buffer Cache. ADDM reports that 'Wait class "User I/O" was consuming significant database time.', but also notes that 'The throughput of the I/O subsystem was not significantly lower than expected.' The AVERAGE_WAIT reported in v$system_event seems much higher than I would expect for reading an index from the Buffer Cache.

I reliably encounter the same issue when importing this file. With other tables using a similar structure I haven't noticed any problems. Can you point me toward other questions I should be asking in order to determine the reason for this behavior? I appreciate any insight you can provide.

SQL> select * from v$session_event where event='db file sequential read' order by time_waited;

SID                132
EVENT              db file sequential read
TOTAL_WAITS        15383685
TOTAL_TIMEOUTS     0
TIME_WAITED        8048682
AVERAGE_WAIT       .52
MAX_WAIT           66
TIME_WAITED_MICRO  8.0487E+10
EVENT_ID           2652584166
WAIT_CLASS_ID      1740759767
WAIT_CLASS#        8
WAIT_CLASS         User I/O


SQL> select segment_name, partition_name, segment_type, tablespace_name from dba_extents a, v$session_wait b where b.p2 between a.block_id and (a.block_id + a.blocks -1) and a.file_id = b.p1 and b.event='db file sequential read';

SEGMENT_NAME    PARTITION_NAME  SEGMENT_TYPE  TABLESPACE_NAME
--------------  --------------  ------------  ---------------
SYS_C00180174                   INDEX         USERS
I_FILE#_BLOCK#                  INDEX         SYSTEM



Thanks!
-Chris

and Tom said...

a db file sequential read is a physical IO - not a read from the buffer cache. it will be *followed* by a read from the buffer cache - but the wait for a db file sequential read is a wait for a physical IO.

ops$tkyte%ORA11GR2> select 80486 / 15383685 from dual;

80486/15383685
--------------
    .005231906



shows your IO wait time is on the order of 5ms per IO (80486 is just the TIME_WAITED of 8,048,682 centiseconds expressed in seconds) - which is pretty close to the expected ~3-5ms per random IO on spinning magnetic disk.


so, basically, this is saying you are not reading this index from the buffer cache; rather, the buffer cache is devoid of these blocks and we have to perform 15,383,685 physical IOs at about 5ms apiece to get them into the cache.
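One way to see that for yourself (just a sketch - it assumes SYS_C00180174 from your dba_extents output really is the parent key index, and that you can query v$bh) is to count how many of that index's blocks are actually sitting in the buffer cache right before you enable the constraint:

-- how many blocks of the parent key index are currently cached?
-- SYS_C00180174 is taken from the dba_extents output above; add an owner
-- predicate if that name is not unique in your database
select count(*)
  from v$bh b, dba_objects o
 where b.objd = o.data_object_id
   and o.object_name = 'SYS_C00180174'
   and b.status != 'free';

If that count is near zero right after the big load, the cache has been flushed of the parent blocks and every probe during constraint validation turns into a physical read.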


Are you using a conventional path load with sqlldr? Perhaps you are blowing out your buffer cache during the load - this table inserts so much data that it basically takes over your buffer cache, voiding it of the blocks that represent the index/rows of your parent table. Then, when you enable the constraint, we have to write your blocks to disk (take the dirty blocks you just created and flush them to disk) and then read back in the index/table blocks of the parent table.

That could be avoided by using a direct path load, which will also bypass undo generation for the table data, reducing the amount of redo being generated, and which - if applicable - could be done in nologging mode, bypassing virtually all redo generation.
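As a rough sketch (the connect string, control file and table names here are placeholders, not your actual ones), the direct path is just a command line option, and nologging is a table attribute:

sqlldr userid=scott/tiger control=child_table.ctl log=child_table.log direct=true

-- optional, only if you can afford to skip redo for this load (hypothetical table name)
alter table child_table nologging;

Remember that a direct path load locks the table and loads above the high water mark, so it suits a bulk initial load like this rather than ongoing trickle loads.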


Alternatively, you can validate your data yourself using a big bulk query:

select *
from child_table left outer join parent_table on (...)
where parent_table.key is null;

(full scans, hash outer join - no indexes!!!) and then put your constraint on with


rely NOVALIDATE enable


that will not check the existing data (you already did) - novalidate
that will make it so future data is protected by the constraint - enable
and the optimizer will be able to use the constraint for optimization if you set query_rewrite_integrity to trusted - rely
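Put together it looks something like this (a sketch only - child_table, parent_table, the join column and the constraint name are made up for illustration):

-- 1) validate the existing data yourself: full scans and a hash anti-join, no index probes
select child_table.*
  from child_table left outer join parent_table
       on (child_table.parent_id = parent_table.parent_id)
 where parent_table.parent_id is null
   and child_table.parent_id is not null;  -- null child keys are permitted by a foreign key

-- 2) if that returns no rows, put the constraint on without re-checking the data
alter table child_table
  modify constraint child_parent_fk rely enable novalidate;

-- 3) and, if you want the optimizer to trust the RELY constraint for query rewrite
alter session set query_rewrite_integrity = trusted;

(If the constraint does not exist yet, the same RELY ENABLE NOVALIDATE state can be specified on the ALTER TABLE ... ADD CONSTRAINT instead.)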

see http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6072457600346531968
