
Question and Answer

Tom Kyte

Thanks for the question, sudhakar.

Asked: November 03, 2002 - 11:11 am UTC

Last updated: May 22, 2012 - 8:04 pm UTC

Version: 9.2.0.1.0

Viewed 50K+ times

You Asked

I am using a database link to connect from one 9i database to another 9i database.

I am frequently getting the "db file sequential read" wait event at the linked server. Because of that, all my queries have slow response times.

I tried to find the table that is causing the sequential reads; it has 560,780 rows in it. Unfortunately, I am not able to make use of any primary key on it.

I have created a separate tablespace for this table alone, with a 16K block size. Still it is very slow. I have 1GB of RAM; the Oracle SGA is 400MB.

How can I make all my queries faster? Can you give me any suggestions on this?





and Tom said...

buy faster disks?

stripe the data across multiple disks?

db file sequential read is due to INDEXED reads, by the way -- it is single-block IO, not the result of a full scan. So you may well be looking in the wrong place. The p1, p2, p3 info in v$session_wait can be used to determine the actual object being waited on. A sequential read is usually a single-block read, although it is possible to see sequential reads of more than one block (see P3). This wait may also be seen for reads from datafile headers (P2=1 indicates a file header read).
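To turn the P1/P2 values into an object name, the standard dictionary views can be joined up. A sketch (the bind names are placeholders; plug in the file# and block# you observed):

```sql
-- 1. see what a session is currently waiting on
select sid, p1 file#, p2 block#, p3 blocks
  from v$session_wait
 where event = 'db file sequential read';

-- 2. map that file#/block# back to the segment being read
select owner, segment_name, segment_type
  from dba_extents
 where file_id = :file#
   and :block# between block_id and block_id + blocks - 1;
```

Note the dba_extents lookup can be slow on databases with many extents, so run it for a handful of representative waits rather than for every one.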


Block reads are fairly inevitable, so the aim should be to minimise unnecessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield orders-of-magnitude changes in performance; tweaking at the system level usually achieves only percentage gains. The following points may help:

* Check for SQL using unselective index scans

* A larger buffer cache can (not will, "might") help

* A less obvious issue that can affect IO rates is how well the data is clustered physically. E.g., assume you frequently fetch rows from a table where a column is between two values, via an index scan. If there are 100 rows in each index block then the two extremes are:
1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block)
2. The table rows are all located in the few adjacent blocks (a handful of blocks need to be read for each index block)
Pre-sorting or re-organising data can help to tackle this in severe situations.

* See if partitioning can be used to reduce the amount of data you need to look at.
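The clustering effect described above is visible in the index's CLUSTERING_FACTOR statistic. A rough sketch of how to gauge it ('YOUR_TABLE' is a placeholder; statistics must be current for the numbers to mean anything):

```sql
-- CLUSTERING_FACTOR near t.blocks   => table rows are well clustered
--   with respect to this index (extreme 2 above)
-- CLUSTERING_FACTOR near t.num_rows => each index entry points to a
--   different table block (extreme 1 above)
select i.index_name, i.clustering_factor, t.blocks, t.num_rows
  from user_indexes i, user_tables t
 where i.table_name = t.table_name
   and t.table_name = 'YOUR_TABLE';
```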





Comments

WAIT on db file sequential read

Vinnie, January 22, 2004 - 10:22 am UTC

Tom,

Related to the snippet below:

PARSING IN CURSOR #9 len=784 dep=0 uid=36 oct=2 lid=40 tim=2014330226848 hv=3624461969 ad='bba33118'
INSERT INTO JSITORLA_EQUIP_GROUP_0088 (TYPE,DEFENSIVE_POSITION,COMMAND_ORG,UIC_O,SYMBOL_CODE_O,INITIALIZATION_FO_ID,IR_EXPOSURE,MOTION_
RANGE,HEALTH_STATUS,CONTAMINATION_CHEMICAL,MOUNT_STATUS,CONTAMINATION_NUCLEAR,TIMELINE_ID,MOBILITY_STATUS,VISUAL_EXPOSURE,ORG_NAME,ELAP
SED_SIM_TIME,MINEFIELD_VULNERABLE,MOVE_SEGMENTS_0017,FORMATION,SERVICE_TYPE,PARENT_ORG,CREW_COUNT,LOCATION_TYPE,SPACING,ACTIVE_SENSOR_M
ODE,FOLLOW_ROLE,CONTAMINATION_BIOLOGICAL,ROW_SEQUENCE_ID,PARENT_ID,ID_U,HOST_ID_O,NUMBER_OF_PLATFORMS,DISPLAY_LEVEL,ENTITY_TYPE,COMMS_S
TATUS,FACTION_ID,TYPE_NAME,GND_COMBAT_POWER,ECHELON,AIR_COMBAT_POWER) VALUES ( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:
17,:18,empty_blob(),:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33,:34,:35,:36,:37,:38,:39,:40)
END OF STMT
PARSE #9:c=10000,e=4932,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=0,tim=2014330226832
BINDS #9:
bind 0: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5a5b0 bln=4000 avl=02 flg=05
value="cp"
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac708b0 bln=22 avl=02 flg=05
value=100
bind 2: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac595d0 bln=4000 avl=27 flg=05
value="PELOWER_ORG.LAND.UNIT.14106"
bind 3: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac585f0 bln=4000 avl=06 flg=05
value="WG0HD0"
bind 4: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac57610 bln=4000 avl=15 flg=05
value="SFGPUUM---AE--G"
bind 5: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac56630 bln=4000 avl=47 flg=05
value="PELOWER_INITIALIZATION.LAND.EQUIP_GROUP_P.19102"
bind 6: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac70870 bln=22 avl=02 flg=05
value=100
bind 7: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac55650 bln=4000 avl=10 flg=05
value="not_moving"
bind 8: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac726a8 bln=22 avl=02 flg=05
value=100
bind 9: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac54670 bln=4000 avl=01 flg=05
value="F"
bind 10: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac53690 bln=4000 avl=08 flg=05
value="no_mount"
bind 11: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac526b0 bln=4000 avl=01 flg=05
value="F"
bind 12: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=72 offset=0
bfp=ffffffff7ac5e5a8 bln=22 avl=02 flg=05
value=33
bind 13: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5e5c0 bln=22 avl=02 flg=01
value=100
bind 14: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=48
bfp=ffffffff7ac5e5d8 bln=22 avl=02 flg=01
value=65
bind 15: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac516d0 bln=4000 avl=09 flg=05
value="HQ/D310MI"
bind 16: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e578 bln=22 avl=03 flg=05
value=7007
bind 17: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac506f0 bln=4000 avl=01 flg=05
value="T"
bind 18: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7f060 bln=4000 avl=37 flg=05
value="PELOWER_ABSTRACT.LAND_FORMATION.20786"
bind 19: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7e080 bln=4000 avl=07 flg=05
value="us_army"
bind 20: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7d0a0 bln=4000 avl=27 flg=05
value="PELOWER_ORG.LAND.UNIT.14106"
bind 21: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e548 bln=22 avl=02 flg=05
value=6
bind 22: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7c0c0 bln=4000 avl=06 flg=05
value="normal"
bind 23: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e518 bln=22 avl=02 flg=05
value=5
bind 24: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7b0e0 bln=4000 avl=18 flg=05
value="active_sensing_off"
bind 25: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac7a100 bln=4000 avl=10 flg=05
value="uninvolved"
bind 26: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac79120 bln=4000 avl=01 flg=05
value="F"
bind 27: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e4e8 bln=22 avl=04 flg=05
value=385494
bind 28: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac78140 bln=4000 avl=04 flg=05
value="NULL"
bind 29: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac77160 bln=4000 avl=34 flg=05
value="PELOWER_ORG.LAND.EQUIP_GROUP.15333"
bind 30: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac76180 bln=4000 avl=01 flg=05
value="0"
bind 31: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=48 offset=0
bfp=ffffffff7ac5e4a0 bln=22 avl=02 flg=05
value=4
bind 32: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5e4b8 bln=22 avl=01 flg=01
value=0
bind 33: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac751a0 bln=4000 avl=24 flg=05
value="34398A000_1_1_CO CMD GRP"
bind 34: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e470 bln=22 avl=02 flg=05
value=100
bind 35: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac741c0 bln=4000 avl=21 flg=05
value="PELOWER_FACTION.20796"
bind 36: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac731e0 bln=4000 avl=04 flg=05
value="NULL"
bind 37: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e440 bln=22 avl=02 flg=05
value=61
bind 38: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8f060 bln=4000 avl=07 flg=05
value="section"
bind 39: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5e410 bln=22 avl=02 flg=05
value=22
WAIT #9: nam='db file sequential read' ela= 171 p1=17 p2=111721 p3=1
WAIT #9: nam='db file sequential read' ela= 85 p1=18 p2=55014 p3=1
WAIT #9: nam='db file sequential read' ela= 81 p1=18 p2=110338 p3=1
WAIT #9: nam='db file sequential read' ela= 7405 p1=18 p2=86570 p3=1
WAIT #9: nam='db file sequential read' ela= 83 p1=18 p2=113262 p3=1
WAIT #9: nam='db file sequential read' ela= 10407 p1=18 p2=86574 p3=1
WAIT #9: nam='db file sequential read' ela= 83 p1=18 p2=123340 p3=1
WAIT #9: nam='db file sequential read' ela= 12581 p1=18 p2=86578 p3=1
WAIT #9: nam='db file sequential read' ela= 78 p1=18 p2=113389 p3=1
WAIT #9: nam='db file sequential read' ela= 81 p1=18 p2=86582 p3=1
WAIT #9: nam='db file sequential read' ela= 84 p1=18 p2=123402 p3=1
WAIT #9: nam='db file sequential read' ela= 3937 p1=18 p2=86586 p3=1
WAIT #9: nam='db file sequential read' ela= 81 p1=18 p2=114481 p3=1
WAIT #9: nam='db file sequential read' ela= 607 p1=18 p2=86590 p3=1
WAIT #9: nam='db file sequential read' ela= 78 p1=18 p2=113074 p3=1
WAIT #9: nam='db file sequential read' ela= 8835 p1=18 p2=86594 p3=1
WAIT #9: nam='db file sequential read' ela= 78 p1=18 p2=112505 p3=1
WAIT #9: nam='db file sequential read' ela= 7834 p1=18 p2=103326 p3=1
WAIT #9: nam='db file sequential read' ela= 83 p1=18 p2=115211 p3=1
WAIT #9: nam='db file sequential read' ela= 5111 p1=18 p2=742 p3=1
WAIT #9: nam='db file sequential read' ela= 9352 p1=18 p2=1070 p3=1
WAIT #9: nam='db file sequential read' ela= 160 p1=18 p2=1062 p3=1
WAIT #9: nam='db file sequential read' ela= 156 p1=18 p2=738 p3=1
WAIT #9: nam='db file sequential read' ela= 8868 p1=18 p2=614 p3=1
WAIT #9: nam='db file sequential read' ela= 6671 p1=18 p2=630 p3=1
WAIT #9: nam='db file sequential read' ela= 157 p1=18 p2=666 p3=1
EXEC #9:c=30000,e=104859,p=26,cr=1,cu=26,mis=0,r=1,dep=0,og=4,tim=2014330332010
WAIT #9: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #9: nam='SQL*Net message from client' ela= 969 p1=1413697536 p2=1 p3=0
=====================
PARSING IN CURSOR #15 len=784 dep=0 uid=36 oct=2 lid=40 tim=2014330488814 hv=3624461969 ad='bba33118'
INSERT INTO JSITORLA_EQUIP_GROUP_0088 (TYPE,DEFENSIVE_POSITION,COMMAND_ORG,UIC_O,SYMBOL_CODE_O,INITIALIZATION_FO_ID,IR_EXPOSURE,MOTION_
RANGE,HEALTH_STATUS,CONTAMINATION_CHEMICAL,MOUNT_STATUS,CONTAMINATION_NUCLEAR,TIMELINE_ID,MOBILITY_STATUS,VISUAL_EXPOSURE,ORG_NAME,ELAP
SED_SIM_TIME,MINEFIELD_VULNERABLE,MOVE_SEGMENTS_0017,FORMATION,SERVICE_TYPE,PARENT_ORG,CREW_COUNT,LOCATION_TYPE,SPACING,ACTIVE_SENSOR_M
ODE,FOLLOW_ROLE,CONTAMINATION_BIOLOGICAL,ROW_SEQUENCE_ID,PARENT_ID,ID_U,HOST_ID_O,NUMBER_OF_PLATFORMS,DISPLAY_LEVEL,ENTITY_TYPE,COMMS_S
TATUS,FACTION_ID,TYPE_NAME,GND_COMBAT_POWER,ECHELON,AIR_COMBAT_POWER) VALUES ( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:
17,:18,empty_blob(),:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33,:34,:35,:36,:37,:38,:39,:40)
END OF STMT
PARSE #15:c=0,e=257,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=2014330488800
BINDS #15:
bind 0: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5a8c8 bln=4000 avl=18 flg=05
value="electronic_warfare"
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fb90 bln=22 avl=02 flg=05
value=100
bind 2: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac598e8 bln=4000 avl=27 flg=05
value="PELOWER_ORG.LAND.UNIT.13761"
bind 3: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac58908 bln=4000 avl=06 flg=05
value="WSHA19"
bind 4: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac57928 bln=4000 avl=15 flg=05
value="SHGPUH-----C--G"
bind 5: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac56948 bln=4000 avl=47 flg=05
value="PELOWER_INITIALIZATION.LAND.EQUIP_GROUP_P.18158"
bind 6: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fb60 bln=22 avl=02 flg=05
value=100
bind 7: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac55968 bln=4000 avl=10 flg=05
value="not_moving"
bind 8: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fb30 bln=22 avl=02 flg=05
value=100
bind 9: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac54988 bln=4000 avl=01 flg=05
value="F"
bind 10: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac539a8 bln=4000 avl=08 flg=05
value="no_mount"
bind 11: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac529c8 bln=4000 avl=01 flg=05
value="F"
bind 12: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=72 offset=0
bfp=ffffffff7ac5fad0 bln=22 avl=02 flg=05
value=33
bind 13: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5fae8 bln=22 avl=02 flg=01
value=100
bind 14: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=48
bfp=ffffffff7ac5fb00 bln=22 avl=02 flg=01
value=65
bind 15: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac519e8 bln=4000 avl=35 flg=05
value="10TM/R INTER CEPT & DF/44 SIG RECON"
bind 16: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5faa0 bln=22 avl=03 flg=05
value=7007
bind 17: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac50a08 bln=4000 avl=01 flg=05
value="T"
bind 18: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8f060 bln=4000 avl=37 flg=05
value="PELOWER_ABSTRACT.LAND_FORMATION.20786"
bind 19: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8e080 bln=4000 avl=07 flg=05
value="us_army"
bind 20: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8d0a0 bln=4000 avl=27 flg=05
value="PELOWER_ORG.LAND.UNIT.13761"
bind 21: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fa70 bln=22 avl=02 flg=05
value=4
bind 22: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8c0c0 bln=4000 avl=06 flg=05
value="normal"
bind 23: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fa40 bln=22 avl=02 flg=05
value=30
bind 24: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8b0e0 bln=4000 avl=18 flg=05
value="active_sensing_off"
bind 25: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac8a100 bln=4000 avl=10 flg=05
value="uninvolved"
bind 26: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac89120 bln=4000 avl=01 flg=05
value="F"
bind 27: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5fa10 bln=22 avl=04 flg=05
value=385502
bind 28: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac88140 bln=4000 avl=04 flg=05
value="NULL"
bind 29: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac87160 bln=4000 avl=34 flg=05
value="PELOWER_ORG.LAND.EQUIP_GROUP.14666"
bind 30: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac86180 bln=4000 avl=01 flg=05
value="0"
bind 31: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=48 offset=0
bfp=ffffffff7ac5f9c8 bln=22 avl=02 flg=05
value=1
bind 32: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5f9e0 bln=22 avl=01 flg=01
value=0
bind 33: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac851a0 bln=4000 avl=49 flg=05
value="MI002A_1_MI002A-02_RADIO INTERCEPT AND DF PLATOON"
bind 34: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5f998 bln=22 avl=02 flg=05
value=100
bind 35: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac841c0 bln=4000 avl=21 flg=05
value="PELOWER_FACTION.20797"
bind 36: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac831e0 bln=4000 avl=04 flg=05
value="NULL"
bind 37: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac5f950 bln=22 avl=02 flg=05
value=5
bind 38: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac82200 bln=4000 avl=07 flg=05
value="platoon"
bind 39: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=ffffffff7ac509b0 bln=22 avl=02 flg=05
value=20
WAIT #15: nam='db file sequential read' ela= 101 p1=18 p2=123336 p3=1
WAIT #15: nam='db file sequential read' ela= 84 p1=18 p2=113388 p3=1
EXEC #15:c=0,e=15571,p=2,cr=1,cu=26,mis=0,r=1,dep=0,og=4,tim=2014330504682
WAIT #15: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #15: nam='SQL*Net message from client' ela= 921 p1=1413697536 p2=1 p3=0
=====================

If you notice, the first insert waited over 8 seconds on db file sequential read, and the second only very little.
File 18 is the index, as you probably guessed.
Why would I wait on db file sequential read for so long on the first insert, and hardly at all on the very next insert?

Tom Kyte
January 22, 2004 - 6:46 pm UTC

physical IO on your system was really, really slow. do you have a hot disk, slow disks, or a really busy system?

(the second did very little physical IO, and perhaps whatever was causing the slow IO had finished by the time it ran)
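The difference between the two inserts can be quantified by totalling the ela values in the raw trace. A minimal sketch, assuming the 9i trace format shown above (where ela is reported in microseconds); the function name is my own:

```python
import re

# Total time spent in 'db file sequential read', per cursor, parsed
# from raw SQL_TRACE output. Assumes lines shaped like:
#   WAIT #9: nam='db file sequential read' ela= 171 p1=17 p2=111721 p3=1
WAIT_RE = re.compile(r"WAIT #(\d+): nam='db file sequential read' ela= (\d+)")

def wait_totals(trace_lines):
    totals = {}
    for line in trace_lines:
        m = WAIT_RE.search(line)
        if m:
            cursor, ela = m.groups()
            totals[cursor] = totals.get(cursor, 0) + int(ela)
    return totals
```

Run over the extract above, cursor #9 accumulates roughly 80ms of single-block read time versus well under 1ms for cursor #15, which matches the e= values on the two EXEC lines.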

PIO & db file sequential read

Vinnie, January 23, 2004 - 8:33 am UTC

The only thing that was running was the application loading the data. We have re-run the same test many times & always see the bulk of the PIO on the first insert like shown in the previous message.

If you look at the first insert:
Just a few questions on the WAIT for the indexes.
1.
WAIT #9: nam='db file sequential read' ela= 85 p1=18 p2=55014 p3=1
I checked this block & it is in fact the PK for the table.
I also checked the Buffer Cache & this segment is already there.
From the DOC:
db file sequential read: the user process is reading buffers into the SGA buffer cache & is waiting for a physical I/O call to return.
Why would I re-load the index from the datafile if it is already in the BC?

2.
WAIT #9: nam='db file sequential read' ela= 81 p1=18 p2=110338 p3=1
WAIT #9: nam='db file sequential read' ela= 7405 p1=18 p2=86570 p3=1
...
...
all the other WAIT events are on PK's for other tables. Why would these indexes get loaded into the Buffer Cache for an insert into a table not related to them?

Thanks

Tom Kyte
January 23, 2004 - 9:35 am UTC

1) the block in the buffer cache wasn't good enough (multi-versioning means that many versions of a block can be in the cache, the one you needed wasn't -- we had to read it)

2) foreign keys, I would guess. you haven't given us a test case to play with, so we can only guess.
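That guess can be checked from the data dictionary: every enabled foreign key on the inserted table forces a probe of the referenced parent key's index. A sketch (the table name is taken from the trace above):

```sql
-- list the foreign keys on the table being inserted into, and the
-- parent tables whose primary/unique key indexes must be probed
select c.constraint_name, c.r_constraint_name, r.table_name referenced_table
  from user_constraints c, user_constraints r
 where c.constraint_type = 'R'
   and c.r_constraint_name = r.constraint_name
   and c.table_name = 'JSITORLA_EQUIP_GROUP_0088';
```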

long insert on 8.1.7.4

dl, February 09, 2004 - 10:30 am UTC

Tom

Using TKPROF I've identified an insert statement that is taking a long time to run. Looking at the raw trace file, it is doing a lot of waits for db file sequential read; the following is an extract from the trace file:


=====================
PARSING IN CURSOR #3 len=1412 dep=1 uid=33 oct=2 lid=33 tim=234202020 hv=4174812
185 ad='c54f7960'
INSERT INTO MORTALITY_DATA_SET SELECT LMD.NHS_NUMBER,LMD.ANONYMISED_ID,LMD.DECEA
SED_FIRST_FORENAME,LMD.DECEASED_SECOND_FORENAME,LMD.DECEASED_THIRD_FORENAME,LMD.
DECEASED_SURNAME,LMD.DECEASED_ALIASES,LMD.DECEASED_MAIDEN_NAME,LMD.DATE_OF_BIRTH
,LMD.DATE_OF_DEATH,LMD.PLACE_OF_DEATH,LMD.DECEASED_USUAL_ADDRESS,LMD.DEC_USUAL_A
DDRESS_POSTCODE,LMD.CAUSE_OF_DEATH_LINE_1,LMD.CAUSE_OF_DEATH_LINE_2,LMD.CAUSE_OF
_DEATH_LINE_3,LMD.CAUSE_OF_DEATH_LINE_4,LMD.CAUSE_OF_DEATH_LINE_5,LMD.CAUSE_OF_D
EATH_LINE_6,LMD.CAUSE_OF_DEATH_LINE_7,LMD.CAUSE_OF_DEATH_LINE_8,LMD.UNDERLYING_C
AUSE_OF_DEATH,LMD.STHA_OF_RESIDENCE,LMD.PCT_OF_RESIDENCE,LMD.DATE_OF_REGISTRATIO
N,LMD.SEX,LMD.CAUSE_OF_DEATH_ICD_CODE_1,LMD.CAUSE_OF_DEATH_ICD_CODE_2,LMD.CAUSE_
OF_DEATH_ICD_CODE_3,LMD.CAUSE_OF_DEATH_ICD_CODE_4,LMD.CAUSE_OF_DEATH_ICD_CODE_5,
LMD.CAUSE_OF_DEATH_ICD_CODE_6,LMD.CAUSE_OF_DEATH_ICD_CODE_7,LMD.CAUSE_OF_DEATH_I
CD_CODE_8,LMD.CERTIFYING_GP,LMD.PLACE_OF_DEATH_CODE,LMD.GP_CODE,LMD.GP_NAME,LMD.
GP_PRACTICE_CODE, NULL GP_PRACTICE_TEXT,LMD.PCT_CODE,DECODE(ST.STHA_CODE, NULL
,LMD.HA_CODE,ST.STHA_CODE),SYSDATE,:b1,MBR.BATCH_RECORD_ID FROM LINKED_MORTALI
TY_DATA2 LMD,MORT_BATCH_RECORDS MBR,STHA_PCT ST WHERE LMD.NHS_NUMBER = MBR.NHS_
NO AND LMD.PCT_CODE = ST.PCT_CODE (+) AND MBR.BATCH_ID = :b1 AND MBR.STAGE_
ID = 3 AND YEAR = GREATEST(TRUNC(NVL(DATE_OF_DEATH,TO_DATE('01-apr-2002','dd-mo
n-yyyy')),'q'),TO_DATE('01-apr-2002','dd-mon-yyyy'))
END OF STMT
PARSE #3:c=1,e=2,p=0,cr=4,cu=0,mis=1,r=0,dep=1,og=0,tim=234202020
BINDS #3:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=7000000001 size=
48 offset=0
bfp=80000001000e4e80 bln=22 avl=02 flg=05
value=13
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=7000000001 size=
0 offset=24
bfp=80000001000e4e98 bln=22 avl=02 flg=01
value=13
WAIT #3: nam='file open' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=35310 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=35324 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=35334 p3=2
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=35374 p3=1
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=35389 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=35394 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=35419 p3=2
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=35439 p3=3
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=35454 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=35459 p3=2
WAIT #3: nam='db file scattered read' ela= 0 p1=19
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36049 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36059 p3=3
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36079 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36084 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36094 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36099 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36104 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36109 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36129 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36134 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36149 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36154 p3=3
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36159 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36194 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36204 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36219 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36234 p3=2
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36249 p3=3
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36259 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36269 p3=1
WAIT #3: nam='db file scattered read' ela= 2 p1=19 p2=36314 p3=4
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36319 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36329 p3=2
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=36354 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36384 p3=1
WAIT #3: nam='db file scattered read' ela= 2 p1=19 p2=36389 p3=2
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36394 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36404 p3=1
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36424 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36454 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36459 p3=2
WAIT #3: nam='db file scattered read' ela= 2 p1=19 p2=36464 p3=2
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36479 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36494 p3=2
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36514 p3=2
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36564 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36574 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36579 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36614 p3=2
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36634 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36644 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36669 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36679 p3=5
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36704 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36714 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36719 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36724 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36729 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36744 p3=3
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36764 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36774 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36794 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36804 p3=2
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=36809 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36824 p3=1
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36849 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36854 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=36864 p3=3
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36869 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36909 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36934 p3=1
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=36939 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=36969 p3=1
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=36974 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=36984 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=37354 p3=2
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=37359 p3=2
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=37364 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=37369 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=37379 p3=1
=====================
PARSING IN CURSOR #4 len=36 dep=2 uid=0 oct=3 lid=0 tim=234202148 hv=1705880752
ad='c5fe81f8'
select file# from file$ where ts#=:1
END OF STMT
PARSE #4:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=0,tim=234202148
BINDS #4:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=8000000100000001
size=24 offset=0
bfp=80000001000c9ab8 bln=22 avl=02 flg=05
value=3
EXEC #4:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=234202148
FETCH #4:c=0,e=0,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=4,tim=234202148
FETCH #4:c=0,e=0,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=4,tim=234202148
FETCH #4:c=0,e=0,p=0,cr=1,cu=0,mis=0,r=0,dep=2,og=4,tim=234202148
STAT #4 id=1 cnt=2 pid=0 pos=0 obj=17 op='TABLE ACCESS BY INDEX ROWID FILE$ '
STAT #4 id=2 cnt=3 pid=1 pos=1 obj=39 op='INDEX RANGE SCAN '
WAIT #3: nam='file open' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=47970 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=47996 p3=13
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=37454 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=37459 p3=1
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48022 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48048 p3=13
WAIT #3: nam='db file sequential read' ela= 2 p1=19 p2=37484 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38064 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38084 p3=1
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=38099 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38119 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=38129 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38144 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=38149 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38154 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38159 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38169 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38174 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38184 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=38239 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38249 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38264 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=38289 p3=3
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=38314 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38324 p3=1
WAIT #3: nam='db file scattered read' ela= 1 p1=19 p2=38334 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38339 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38349 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38354 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38369 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38379 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38919 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38924 p3=1
WAIT #3: nam='db file scattered read' ela= 2 p1=19 p2=38934 p3=2
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38949 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38954 p3=1
WAIT #3: nam='db file sequential read' ela= 0 p1=19 p2=38964 p3=1
WAIT #3: nam='db file sequential read' ela= 3 p1=19 p2=38979 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38984 p3=1
WAIT #3: nam='db file sequential read' ela= 1 p1=19 p2=38989 p3=1
WAIT #3: nam='db file scattered read' ela= 0 p1=19 p2=103094 p3=2
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48970 p3=13

WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48738 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48829 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48764 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48751 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48816 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48842 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48803 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48790 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48777 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48610 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48623 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48636 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48649 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48675 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48701 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48482 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48508 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48534 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48560 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48586 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48688 p3=13

WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48662 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50441 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50480 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50736 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=47894 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=49775 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=49634 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=48675 p3=13
=====================
PARSING IN CURSOR #4 len=36 dep=2 uid=0 oct=3 lid=0 tim=234203293 hv=1705880752
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50428 p3=13
=====================

The tkprof report for this insert is:

INSERT INTO MORTALITY_DATA_SET SELECT LMD.NHS_NUMBER,LMD.ANONYMISED_ID,
LMD.DECEASED_FIRST_FORENAME,LMD.DECEASED_SECOND_FORENAME,
LMD.DECEASED_THIRD_FORENAME,LMD.DECEASED_SURNAME,LMD.DECEASED_ALIASES,
LMD.DECEASED_MAIDEN_NAME,LMD.DATE_OF_BIRTH,LMD.DATE_OF_DEATH,
LMD.PLACE_OF_DEATH,LMD.DECEASED_USUAL_ADDRESS,
LMD.DEC_USUAL_ADDRESS_POSTCODE,LMD.CAUSE_OF_DEATH_LINE_1,
LMD.CAUSE_OF_DEATH_LINE_2,LMD.CAUSE_OF_DEATH_LINE_3,
LMD.CAUSE_OF_DEATH_LINE_4,LMD.CAUSE_OF_DEATH_LINE_5,
LMD.CAUSE_OF_DEATH_LINE_6,LMD.CAUSE_OF_DEATH_LINE_7,
LMD.CAUSE_OF_DEATH_LINE_8,LMD.UNDERLYING_CAUSE_OF_DEATH,
LMD.STHA_OF_RESIDENCE,LMD.PCT_OF_RESIDENCE,LMD.DATE_OF_REGISTRATION,LMD.SEX,
LMD.CAUSE_OF_DEATH_ICD_CODE_1,LMD.CAUSE_OF_DEATH_ICD_CODE_2,
LMD.CAUSE_OF_DEATH_ICD_CODE_3,LMD.CAUSE_OF_DEATH_ICD_CODE_4,
LMD.CAUSE_OF_DEATH_ICD_CODE_5,LMD.CAUSE_OF_DEATH_ICD_CODE_6,
LMD.CAUSE_OF_DEATH_ICD_CODE_7,LMD.CAUSE_OF_DEATH_ICD_CODE_8,
LMD.CERTIFYING_GP,LMD.PLACE_OF_DEATH_CODE,LMD.GP_CODE,LMD.GP_NAME,
LMD.GP_PRACTICE_CODE, NULL GP_PRACTICE_TEXT,LMD.PCT_CODE,
DECODE(ST.STHA_CODE, NULL ,LMD.HA_CODE,ST.STHA_CODE),SYSDATE,:b1,
MBR.BATCH_RECORD_ID FROM LINKED_MORTALITY_DATA2 LMD,MORT_BATCH_RECORDS
MBR,STHA_PCT ST WHERE LMD.NHS_NUMBER = MBR.NHS_NO AND LMD.PCT_CODE =
ST.PCT_CODE (+) AND MBR.BATCH_ID = :b1 AND MBR.STAGE_ID = 3 AND YEAR =
GREATEST(TRUNC(NVL(DATE_OF_DEATH,TO_DATE('01-apr-2002','dd-mon-yyyy')),'q'),
TO_DATE('01-apr-2002','dd-mon-yyyy'))



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.02 0 0 0 0
Execute 1 302.05 995.24 3128000 100579 606040 43147
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 302.06 995.26 3128000 100579 606040 43147

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 33 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
43148 HASH JOIN OUTER
43147 HASH JOIN
822834 VIEW LINKED_MORTALITY_DATA2
822834 HASH JOIN
6433183 HASH JOIN
230194 TABLE ACCESS FULL ASCT_GP_LIST_STAGE_4
861932 INDEX FAST FULL SCAN (object id 4895)
913559 TABLE ACCESS FULL ONS_DATA_SET
45272 INDEX RANGE SCAN (object id 4879)
302 TABLE ACCESS FULL STHA_PCT

********************************************************************************

It takes about 15 minutes to run.


I know that without the data you can't see the whole picture, but have you got any clues as to what might be going on here? Why is it waiting so much? Do I need to hint it to use a different execution plan?
I have indexed the relevant foreign keys, but maybe I am missing an index somewhere or am using the wrong type of index (if it helps, the stha_pct table is very small compared to the other tables, which are similar in size).

This query does select about 50,000 rows and joins a couple of fairly big tables (about 1 million rows from mort_batch_records and 6 million rows from linked_mortality_data2) to a small table (stha_pct, 300 rows), but I would've thought it should be quicker than this.

Any ideas would be appreciated.

Many thanks

Tom Kyte
February 09, 2004 - 10:43 am UTC

the waits are for physical IO - both read and write. Looks like temp. what is your hash/sort area size (or pga_aggregate_target) set to there?

Thanks

dl, February 09, 2004 - 11:52 am UTC

the hash_area_size=2097152
and the pga_aggregate_target parameter has not been set in init.ora.

Can you also explain how you knew that it was TEMP that may be causing this problem, please?

Thanks

Tom Kyte
February 09, 2004 - 12:12 pm UTC

WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48586 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48688 p3=13
WAIT #3: nam='direct path write' ela= 0 p1=4 p2=48662 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50441 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50480 p3=13
WAIT #3: nam='direct path read' ela= 0 p1=4 p2=50736 p3=13


direct path reads and writes -> temp


set the hash area "bigger" (say 20/200 meg) for this session.
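
For a one-off batch session like this, the hash area can be raised with ALTER SESSION just before the big insert. A sketch only - the 200 MB figure is illustrative, taken from the "20/200 meg" suggestion above:

```sql
-- Illustrative sizes only: enlarge this session's work areas before the insert.
alter session set workarea_size_policy = manual;
alter session set hash_area_size = 209715200;  -- ~200 MB
alter session set sort_area_size = 209715200;  -- ~200 MB
-- ... now run the INSERT ... SELECT ...
```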

big hash_area_size

A reader, February 09, 2004 - 5:48 pm UTC

Hi

I am not sure, but I once tested different hash_area_size settings for one of our DWH queries and didn't notice much difference in response time between 64MB and 128MB... Is there a limit or something like a threshold?

Tuning direct path reads

Matt, March 07, 2004 - 11:48 pm UTC

I have direct path reads taking up about 25% of my execution time - see below.

create table grms_mig_meter_reading_engy (
mirn constraint mre_mirn_nn not null,
constraint mre_mirn_len_ck check (length(mirn) = 10),
mirn_chksum constraint mre_mirn_chksum_nn not null,
reading_type constraint mre_reading_type_nn not null,
constraint mre_reading_type_ck check (reading_type in ('A','D','E','S')),
start_date constraint mre_start_date_nn not null,
end_date constraint mre_end_date_nn not null,
constraint mre_end_date_ck check (end_date >= start_date),
energy_value constraint mre_energy_value_nn not null,
constraint mre_energy_value_ck check (trunc(energy_value)=energy_value and energy_value>=0),
constraint mig_meter_reading_engy_pk primary key (mirn,start_date)
) nologging
as
select mirn
,mirn_chksum
,reading_type
,start_date
,end_date
,energy_value
from ( select mirn
,mirn_chksum
,reading_type
,start_date
,end_date
,energy_value
,max(record_classification) over (partition by mirn) classification
,row_number() over (partition by mirn,start_date order by end_date desc NULLS last ) rn
from ( select mirn
,mirn_chksum
,reading_type
,start_date
,end_date
,energy_value
,case when start_date < to_date('01/02/2002','DD/MM/YYYY') then 'Start Date Before Historical Start Date'
when end_date > trunc(to_date('26/02/2004','DD/MM/YYYY')) then 'End Date Greater than Extract Date' -- extract date
when max_ed > trunc(to_date('26/02/2004','DD/MM/YYYY')) then 'Max End Date Greater than Extract Date' -- extract date
when end_date < next_sd then 'Data Set has Gap'
when end_date != next_sd then 'Data Set has Overlap'
end record_classification
from ( select mirn
,mirn_chksum
,reading_type
,start_date
,end_date
,energy_value
-- ,lag(start_date) over ( partition by id order by start_date) last_sd
-- ,lag(end_date) over ( partition by id order by end_date) last_ed
,lead(start_date) over ( partition by mirn order by start_date) next_sd
-- ,lead(end_date) over ( partition by id order by end_date) next_ed
,max(end_date) over (partition by mirn ) max_ed
from extern_meter_reading_engy
where 1=1
and length(mirn) = 10
and mirn is not null
and mirn_chksum is not null
and reading_type in ('A','D','E','S')
and reading_type is not null
and start_date is not null
and end_date is not null
and end_date >= start_date
and energy_value is not null
and trunc(energy_value)=energy_value
and energy_value>=0
)
)
) where 1=1
and classification is null
and rn=1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 104.28 159.67 72962 519 5634 2659501
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 104.28 159.67 72962 519 5634 2659501

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 228

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path read 51470 0.05 40.61
direct path write 6 0.00 0.00
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.01 0.01
********************************************************************************

I have workarea_size_policy=auto and increased pga_aggregate_target to 2GB.

Using the following query

select a.sid
,c.serial#
,c.username
,c.osuser
,b.name
,a.value
,b.class
from v$sesstat a
,v$statname b
,v$session c
where 1=1
and a.statistic# = b.statistic#
and a.sid = c.sid
and a.sid = 35
and b.name like '%sort%'
order by value desc

Whatever I try I still see some sorts to disk

I can see that the max pga size gets up to approx 120MB (ie: well below the pga_aggregate value), and three disk sorts.

Any idea how I can get rid of these waits??


Tom Kyte
March 08, 2004 - 7:56 am UTC

for this batch type operation, you might want to

alter session to set the workarea policy to manual
alter session to set hash area size/sort area size to a large number


workarea (auto memory tuning) works best in an ad-hoc environment. for recurring batch jobs, no reason not to set some of these things "as part of the job"
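
The two alter-session steps can be baked into the recurring batch job itself, for example at the top of the procedure. A sketch; the 100 MB sizes are placeholders, not recommendations:

```sql
begin
   -- switch this session to manually sized work areas
   execute immediate 'alter session set workarea_size_policy = manual';
   execute immediate 'alter session set hash_area_size = 104857600';  -- placeholder: ~100 MB
   execute immediate 'alter session set sort_area_size = 104857600';  -- placeholder: ~100 MB
   -- ... the batch workload runs here with the larger work areas ...
end;
/
```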

db file sequential read

Mehmood, June 14, 2004 - 6:52 am UTC

Dear Tom:

From the above....

================================
WAIT #9: nam='db file sequential read' ela= 85 p1=18 p2=55014 p3=1
I checked this block & it is in fact the PK for the table.
I also checked the Buffer Cache & this segment is already there.
=================================

Can you explain how he found the block, and then the table, related to the db file sequential read? And also, how did he check the buffer cache and come to know that the segment was already there?

Thanks

Tom Kyte
June 14, 2004 - 8:07 am UTC

p1/p2 is documented (in the reference guide) as the file and block being read. query dba_extents and you can find the segment.

he could not have verified that it was in the buffer cache beforehand really, because before the wait event -- he did not know what block to go looking for, and AFTER the wait, of course it would be found in the buffer cache view v$bh.
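
A sketch of the dba_extents lookup described above - substitute the P1 (file#) and P2 (block#) values reported by the wait event:

```sql
-- Find the segment that owns file &p1, block &p2 (values from v$session_wait).
select owner, segment_name, segment_type
  from dba_extents
 where file_id = &p1
   and &p2 between block_id and block_id + blocks - 1;
```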

db file sequential read wait events

lior, December 08, 2004 - 3:14 am UTC

I haven't yet tried making any changes in my DB, but this is very useful advice.

Alex, August 11, 2005 - 9:28 am UTC

Hello Tom,

I could really use your help with a db file sequential read wait problem I'm having. Basically what is happening is that a query generated by Crystal Reports is all of a sudden "hanging" on the latest migration of our data. I checked v$session_wait like you suggested to the original poster, and it is indeed a db file sequential read. Since you said these are due to index block reads, I was thinking this may be a case where the optimizer is getting it wrong for the new data, and using an index where a full scan may be appropriate?

What information should I post to help you help me with this? Unfortunately it's difficult to get an explain plan and tkprof for this problem because the query appears to hang, not doing any work, indefinitely. Are there some common causes for problems like this I could start with?
Your advice is appreciated.

Tom Kyte
August 11, 2005 - 6:04 pm UTC

What information should you get in order to help YOURSELF, you meant, of course.

v$sql_plan would have the plan.

autotrace can get the plan.


You want to compare the plan from this query, to the old plan.

common cause: change in plan. the query is *not hung* - if the query is waiting for db file sequential read it is certainly not hung. It is just waiting for IO.

You don't really say what "latest migration of data" means, to me, it doesn't mean anything in particular.
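
One way (a sketch) to pull the current plan from v$sql_plan, per the pointer above - locate the statement's hash_value and address in v$sql first:

```sql
-- List the plan steps for one cursor, indented by depth.
select lpad(' ', 2 * depth) || operation || ' ' || options
       || ' ' || object_name as plan_step
  from v$sql_plan
 where hash_value = &hv
   and address = hextoraw('&addr')
 order by id;
```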

Alex, August 12, 2005 - 9:43 am UTC

Thanks for your reply. I was able to figure this out with a trace that I let run for a little while before killing the session. You were definitely right, it was working.
There were a couple of queries in there that were really struggling, so I looked at the tables they were hitting, and oh, look at that - no index stats!

For my own knowledge, could you explain what is happening in the following tkprof:

SELECT SYMBOL
FROM
T_SYMBOLS WHERE MA_OID = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 669122 60.23 59.47 0 0 0 0
Fetch 1134688 18.92 16.63 32 2473230 0 465566
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1803811 79.15 76.11 32 2473230 0 465566

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 69 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 32 0.01 0.19
latch free 3 0.00 0.00
I don't understand what's happening under the covers here. It says the nested query was executed 669122 times?? Was it full scanning the table for each row? BTW, I don't know why the plan didn't show up in the trace file, I did
alter session set events '10046 trace name context forever, level 12'.

Tom Kyte
August 13, 2005 - 8:49 am UTC

OH MY.

developer is trying to be "smart" and do a join by themself.

Unfortunately, that is not a smart thing. Look how many times that query was executed!!!!!!!!!! Ouch. Procedural slow by slow (row by row) processing. That has got to hurt, no?

and given the fetch count, they are not even array processing :( ouch ouch ouch.

It was executed 669122 times and did 2473230 IO's in total - about 3-4 IO's per execute.

Meaning it did 3 IO's, probably via the index, and one IO to go to the table. (guess)

So, a single execution of this query -- very very very fast.

Do it hundreds of thousands of times -- very very very slow.

You need to go back to the algorithm, look at the code, and do some JOINS!
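
For illustration, the row-by-row lookup could become part of a single join. Only the inner query was posted, so the driving table name here is made up:

```sql
-- Hypothetical rewrite: join once instead of executing the lookup
-- 669,122 times from a loop ("driving_table" is a placeholder name).
select d.ma_oid, s.symbol
  from driving_table d, t_symbols s
 where s.ma_oid = d.ma_oid;
```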



Alex, August 15, 2005 - 9:53 am UTC

That little query ran like that due to no index stats on the table. But I think I get the gist of it. No indexes = tons of unnecessary IO (in this case). I think I misled you; I'm not used to conversing with someone who is as precise and exact as you are. When I said "nested query", you probably thought something like

select *
from t
where id not in (select id
from....
where....)

Perhaps I should have said "recursive", because the trouble queries do not appear in the overall, bigger query created by Crystal Reports. I wish I could post the actual monstrosity that Crystal creates for you; it's code generated, so you cannot edit it. It doesn't even look like SQL; it's impossible to troubleshoot.

I'm amazed at what a few missing indexes can do though.

how to spread I/O

J, August 17, 2005 - 12:48 am UTC

DB file sequential read/scattered read are the top waits on my database. I checked a couple of waits on sequential read and found that those indexes were created within one index file, though for each index I have a couple of files on different mount points. Will it help if the data is spread out over different files on different mount points? Any way to force that? Thanks!

Tom Kyte
August 17, 2005 - 12:36 pm UTC

it would help only if spreading the IO out would decrease IO response time (something I cannot predict).


in order to get the data spread out, you would have to ensure the indexes go into multiple extents, Oracle will spread the extents out over all of the files allocated to the tablespace.
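
If the goal is simply to get an index's extents into a tablespace whose datafiles sit on several mount points, a rebuild will do it. A sketch only - both names here are hypothetical:

```sql
-- Hypothetical: move the index into a multi-datafile tablespace;
-- its extents will then be spread across those files.
alter index my_index rebuild tablespace idx_multi_file;
```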



Su Baba, September 22, 2005 - 2:00 pm UTC

We're running into some performance problems. The following is the partial output of the raw trace file. The wait on "db file sequential read" actually continues for several more pages. I have a few questions on the trace file:

1. Oracle was parsing, binding, executing, and fetching for cursor #238. Then all of a sudden it started waiting on 'db file sequential read' for cursor #322. Is this because the system switches from processing one SQL to another or is the raw trace file not sorted by time?

2. Should it be a concern that there are so many waits on "db file sequential read" for cursor #322? When I ran the SQL directly in the database with the bind value provided in the trace file, the SQL returned right away. That does not explain to me why there are waits. Am I interpreting the output correctly? How should I proceed with the troubleshooting?

thanks


PARSING IN CURSOR #322 len=70 dep=1 uid=313 oct=3 lid=313 tim=744527328975 hv=342090005 ad='c3185290'
SELECT USER_ID FROM PV_PARTNER_PRIMARY_USERS_V WHERE PARTNER_ID = :B1
END OF STMT
PARSE #322:c=0,e=1809,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=744527328935
...
...
PARSING IN CURSOR #23 len=37 dep=2 uid=0 oct=3 lid=0 tim=744527548063 hv=1398610540 ad='dcfd59c8'
select text from view$ where rowid=:1
END OF STMT
PARSE #23:c=0,e=65,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=744527548053
BINDS #23:
bind 0: dty=11 mxl=16(16) mal=00 scl=00 pre=00 oacflg=18 oacfl2=0001 size=16 offset=0
bfp=ffffffff7b6a1af8 bln=16 avl=16 flg=05
value=0001E5E2.0001.0006
EXEC #23:c=0,e=159,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=744527548479
WAIT #23: nam='db file sequential read' ela= 16442 p1=6 p2=124386 p3=1
WAIT #23: nam='db file sequential read' ela= 30255 p1=4 p2=71853 p3=1
FETCH #23:c=0,e=47875,p=2,cr=3,cu=0,mis=0,r=1,dep=2,og=4,tim=744527596414
STAT #23 id=1 cnt=0 pid=0 pos=1 obj=62 op='TABLE ACCESS BY USER ROWID VIEW$ (cr=0 pr=0 pw=0 time=0 us)'
BINDS #92:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=0001 size=24 offset=0
bfp=ffffffff68fcfa38 bln=22 avl=05 flg=05
value=7266148
EXEC #92:c=10000,e=760,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=744527637371
FETCH #92:c=0,e=143,p=0,cr=2,cu=0,mis=0,r=0,dep=2,og=4,tim=744527637626
BINDS #238:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=0001 size=24 offset=0
bfp=ffffffff68fcdab0 bln=22 avl=05 flg=05
value=7266148
EXEC #238:c=0,e=713,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=744527639460
WAIT #238: nam='db file sequential read' ela= 37707 p1=2 p2=127075 p3=1
FETCH #238:c=0,e=38439,p=1,cr=2,cu=0,mis=0,r=0,dep=2,og=4,tim=744527677984
BINDS #322:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=206001 size=24 offset=0
bfp=ffffffff7b6a18f8 bln=22 avl=06 flg=05
value=1001490725
EXEC #322:c=250000,e=553692,p=13,cr=78,cu=0,mis=0,r=0,dep=1,og=1,tim=744527882909
WAIT #322: nam='db file sequential read' ela= 77279 p1=603 p2=5646 p3=1
WAIT #322: nam='db file sequential read' ela= 66091 p1=884 p2=17422 p3=1
WAIT #322: nam='db file sequential read' ela= 12660 p1=1120 p2=397799 p3=1
WAIT #322: nam='db file sequential read' ela= 8389 p1=1120 p2=397807 p3=1
WAIT #322: nam='db file sequential read' ela= 47282 p1=843 p2=533230 p3=1
WAIT #322: nam='db file sequential read' ela= 16194 p1=172 p2=215304 p3=1
WAIT #322: nam='db file sequential read' ela= 13048 p1=1068 p2=506660 p3=1
WAIT #322: nam='db file scattered read' ela= 66889 p1=1669 p2=71834 p3=8
WAIT #322: nam='db file sequential read' ela= 44996 p1=2009 p2=25211 p3=1
WAIT #322: nam='db file sequential read' ela= 34728 p1=440 p2=21641 p3=1
WAIT #322: nam='latch: cache buffers chains' ela= 6 p1=4400717966664 p2=116 p3=0
WAIT #322: nam='db file sequential read' ela= 55204 p1=2010 p2=118806 p3=1
WAIT #322: nam='db file sequential read' ela= 20598 p1=440 p2=21642 p3=1
WAIT #322: nam='db file sequential read' ela= 13237 p1=133 p2=42669 p3=1
WAIT #322: nam='db file sequential read' ela= 34928 p1=418 p2=44746 p3=1
WAIT #322: nam='db file sequential read' ela= 20563 p1=440 p2=21644 p3=1
WAIT #322: nam='db file sequential read' ela= 25859 p1=127 p2=88693 p3=1
WAIT #322: nam='db file sequential read' ela= 19863 p1=2010 p2=118774 p3=1
WAIT #322: nam='db file sequential read' ela= 21339 p1=133 p2=42601 p3=1
WAIT #322: nam='db file sequential read' ela= 18301 p1=759 p2=57110 p3=1
WAIT #322: nam='db file sequential read' ela= 12984 p1=2008 p2=116059 p3=1
WAIT #322: nam='db file sequential read' ela= 13670 p1=440 p2=21646 p3=1
WAIT #322: nam='db file sequential read' ela= 15863 p1=436 p2=24003 p3=1
WAIT #322: nam='db file sequential read' ela= 18775 p1=185 p2=10877 p3=1
WAIT #322: nam='db file sequential read' ela= 32293 p1=440 p2=21648 p3=1
WAIT #322: nam='db file sequential read' ela= 23808 p1=806 p2=15806 p3=1
...
...

Tom Kyte
September 22, 2005 - 5:55 pm UTC

1) the application controls this entirely - the trace file is linear, totally linear.


parse statement 1
open statement 1
parse statement 2
open statement 2
fetch statement 1 <<<=== IO for 1 starts happening here for the first time...


physical IO will almost certainly incur a wait - else we wouldn't have a buffer cache to make the IO appear faster.

This just means you are doing physical IO, if you can find a way to avoid it - great, but if you cannot, it is the price of admission, you have to read the data in sometime.

db file scattered read

Praveen, October 22, 2005 - 2:24 am UTC

Tom,

I have a procedure which is taking a very long time to complete. The v$session_wait shows the following:

EVENT                       WAIT CLASS  WAIT SECONDS  STATE
--------------------------- ----------- ------------  -----------------
SQL*Net message from client Idle        0             WAITING
SQL*Net message from client Idle        260035        WAITING
SQL*Net message from client Idle        235543        WAITING
SQL*Net message from client Idle        139465        WAITING
SQL*Net message from client Idle        169810        WAITING
db file scattered read      User I/O    0             WAITED SHORT TIME
SQL*Net message to client   Network     0             WAITED SHORT TIME
SQL*Net message from client Idle        3239          WAITING

This is showing a huge wait time. How can I know what it is waiting for? What does db file scattered read mean? The DBA says it means a full table scan, and to create indexes to avoid it.

Please give your insight into it.

Thanks in advance
Praveen

Tom Kyte
October 22, 2005 - 10:39 am UTC

it isn't showing a huge wait on anything in the database....

and I don't even know if this is for the session in question (you are showing all current sessions.....)

suggest you run with 10046 level 12 trace and just use tkprof to see what you see.
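
The standard recipe (the same event setting the poster uses later in this thread):

```sql
alter session set events '10046 trace name context forever, level 12';
-- ... run the slow procedure here ...
alter session set events '10046 trace name context off';
-- then, at the OS prompt, format the trace file:
--   tkprof <tracefile>.trc report.prf sys=no
```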

large soft parse

Praveen, October 24, 2005 - 7:28 am UTC

Well, the procedure mentioned above is actually taking about 5 days to complete,
processing just 1.2 million records! As you said, I enabled tracing for about 10 min
at 10046 level 12 (in 1 hr it generated 5GB of trace data!).

Certain relevant parts of the trace information are shown below. (Beg your pardon,
but I formatted the trace info presented here for readability.)

1)

********************************************************************************
INSERT INTO Table1 (t1col1, t1col2, t1col3, t1col4)
SELECT :B3 , :B1 , t2col3, t2col4 FROM Table2
WHERE t2col1 IN (SELECT TO_NUMBER(TRIM(t3col1))
FROM Table3 WHERE TRIM(t3col2) = :B1
AND t3col4 = :B4 )
AND TRIM(t3col5) = :B2



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6068 0.61 0.61 0 0 0 0
Execute 6073 3.88 3.66 0 33569 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12141 4.49 4.28 0 33569 0 0

Misses in library cache during parse: 2
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 24 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 NESTED LOOPS (cr=8 pr=0 pw=0 time=298 us)
1 SORT UNIQUE (cr=5 pr=0 pw=0 time=231 us)
2 MAT_VIEW ACCESS BY INDEX ROWID table3 (cr=5 pr=0 pw=0 time=122 us)
2 INDEX RANGE SCAN IDX_table3 (cr=3 pr=0 pw=0 time=63 us)(object id 38861)
0 MAT_VIEW ACCESS BY INDEX ROWID table2 (cr=3 pr=0 pw=0 time=55 us)
0 INDEX RANGE SCAN IDX_table2 (cr=3 pr=0 pw=0 time=44 us)(object id 40361)
********************************************************************************

In your book it is said that "A high (near 100%) parse count to execute count ratio
when the execute count is > 1" means too many soft parses. The above trace shows exactly
that. But the query is not a dynamic query, and bind variables are in place. We can see
this from the low library cache misses.

Could you please tell me what exactly can be causing the parse:execute ratio of ~1? Why do
so many soft parses happen? If we look at the full trace file, this is exactly
what happens in almost all the other queries in the procedure.

Also, if you look at the cpu & elapsed time, it took about 5 sec to process 0 records -
i.e. an excessively high query count (33569) for no work!
Is the query a badly written one?



2) Another part of the trace file: The query is part of a REF cursor.

********************************************************************************

SELECT TRIM(t1col8)
FROM Table1
WHERE TRIM(t1col1) = :B5 AND TRIM(t1col2) = :B4
AND TRIM(t1col3) = :B3 AND :B2 BETWEEN NVL(t1col4, :B2 ) AND NVL(t1col5, :B2 )
AND TRIM(t1col6) = :B1



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3107 0.33 0.31 0 0 0 0
Execute 3107 2.26 2.22 0 0 0 0
Fetch 3107 0.06 0.10 5 54 0 18
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9321 2.65 2.64 5 54 0 18

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 24 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 MAT_VIEW ACCESS BY INDEX ROWID Table1 (cr=0 pr=0 pw=0 time=63 us)
0 BITMAP CONVERSION TO ROWIDS (cr=0 pr=0 pw=0 time=41 us)
0 BITMAP INDEX SINGLE VALUE IDX_table1 (cr=0 pr=0 pw=0 time=26 us)(object id 38860)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 5 0.01 0.02
********************************************************************************

Here again the same parse:execute ratio. But I am worried more about the
(fetch count)/(rows fetched) ratio. How can the fetch count be so much
greater than the rows fetched? Although no bulk operation is done here, this
ratio should be near 1 (in the worst case), right?

Tom, could you please shed some light on these issues? This will help us
understand much more about how Oracle works.

Best regards



Tom Kyte
October 24, 2005 - 11:32 am UTC

1) parsing is something Oracle does in response to an application asking us to parse.

what is your client language?

(I cannot tell you how bad this is:

WHERE TRIM(t3col2) = :B1

why would you not TRIM it on the way in, once, rather than every time on retrieval - probably obviating a really good index access....)



2) when you fetch zero rows - when you run a query that finds "no data"


OUCH - a bitmap index on a table you are modifying - that hurts (that should not be done....)

Re: large soft parsing

Praveen, October 25, 2005 - 12:01 am UTC

1) The whole application is written in PL/SQL and are within
the local database as packages.

I have created function based indexes on TRIM(t3col2), and this
actually improved performance slightly compared to normal indexes
WITHOUT TRIM (the explain plan showed a lower cost). Also, there is no way
we can modify the source table data. The source tables are actually
materialized views; if we stored the data in them already trimmed, it
would be impossible to do a fast refresh from 'their' source tables.

2) The bitmap indexes (FUNCTION BASED BITMAP indexes, in fact) are not
created on any of the tables which are modified by the application
in question - i.e. not on any of the destination tables. Of course they
are created on the materialized views, which are the source tables here,
and the refresh will be slow. In fact, the cardinality of the columns
upon which the bitmap indexes are created is so low that the
explain plan output showed a much better cost than normal indexes.
(These explain plans were not obtained at runtime.)

The application is taking about 1 sec to process just 3 or 4 records.
Do you think that repeated soft parsing is the 'main' reason for
such a terrible performance? What else should I look for, if not?

Best regards

Tom Kyte
October 25, 2005 - 1:35 am UTC

so the plan said lower cost, did it really run faster? I've never looked at the cost that way.


2) your EXAMPLE begs to differ with that statement.


INSERT INTO Table1 (t1col1, t1col2, t1col3, t1col4)
SELECT :B3 , :B1 , t2col3, t2col4 FROM Table2
WHERE t2col1 IN (SELECT TO_NUMBER(TRIM(t3col1))
FROM Table3 WHERE TRIM(t3col2) = :B1
AND t3col4 = :B4 )
AND TRIM(t3col5) = :B2

insert into table1.


SELECT TRIM(t1col8)
FROM Table1
WHERE TRIM(t1col1) = :B5 AND TRIM(t1col2) = :B4
AND TRIM(t1col3) = :B3 AND :B2 BETWEEN NVL(t1col4, :B2 ) AND NVL(t1col5, :B2 )
AND TRIM(t1col6) = :B

query against table1 showing bitmap index access. I can only work with what I see, nothing more, nothing less




What is your database version and what is your setting for session_cached_cursors?

(and look at the tkprof, the vast majority of time isn't spent on the parse phase, it is a significant portion of time - but not huge)

Re: large soft parse

Praveen, October 25, 2005 - 7:26 am UTC

Sorry, it's a mistake and ignorance on my side. I modified
and formatted the trace file output for readability (changing the actual
table names to table1, table2, etc.), but never realized that I gave the first &
second queries the same table name 'Table1'. Actually they are different
tables. Apologies...

I am working on Oracle 10g R1 on Redhat linux.

SESSION_CACHED_CURSORS = 0.

If the plan cost is not the way to measure the performance of a query, then
how does one measure it? Execute it with timing on? Then how is explain
plan useful?

What other areas in the trace file should I look at for possible clues to the
bad performance? Can the DBA be of any help in finding them?

Best regards...


Tom Kyte
October 26, 2005 - 7:06 am UTC

see
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:17989406187750 <code>

and consider increasing your session_cached_cursors to avoid the re-parses (soft parses) that are happening; in 10g it also sizes the PL/SQL cursor cache.
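As a rough way to see whether the cursor cache is helping, one can compare a session's parse statistics before and after raising the parameter. A sketch (the statistic names come from v$statname; the value 100 is just an example):

```sql
-- How much soft parsing is this session doing, and is the cursor cache being hit?
SELECT sn.name, st.value
  FROM v$statname sn, v$mystat st
 WHERE sn.statistic# = st.statistic#
   AND sn.name IN ('parse count (total)',
                   'parse count (hard)',
                   'session cursor cache hits');

-- Raise the cache for this session only (or set it in the spfile for all sessions)
ALTER SESSION SET session_cached_cursors = 100;
```

Re-running the first query after the workload shows whether 'session cursor cache hits' climbs while 'parse count (total)' stays flat.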

You can ask yourself "why am I executing the sql over and over again - did I do something procedurally I should have done in a single query" (tune the algorithms)

You can look for "low hanging fruit" - the query(s) that take the most time.




Thank you, Tom

Praveen, October 27, 2005 - 1:48 am UTC


DB_FILE_SEQUENTIAL_READ

Brian Zucca, December 09, 2005 - 11:52 am UTC

Tom, I'm looking at something that is very strange, at least to me. We are doing a select * from table X. I would expect it to do a full table scan, and it looks like it is doing that, but I am seeing most of the time waited on DB FILE SEQUENTIAL READ, which is an index read, correct? A little extra info: this is being called from a pl/sql program, so this is a cursor that is processed one row at a time in a loop; I don't know if that makes a difference or not.

SELECT *
FROM
X


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 213132 946.41 12379.50 1820139 40935748 0 213132
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 213134 946.42 12379.51 1820139 40935748 0 213132


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'X'

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 18 0.01 0.05
db file sequential read 1820006 0.69 11602.10
latch free 253 0.07 1.94

Any suggestions as to why it is doing this? Could it be due to db parameters making an index look more attractive than a full scan?

Thanks in advance.




Tom Kyte
December 09, 2005 - 1:04 pm UTC

you are using explain plan -- don't do that. Exit the session running this query so the REAL plan is put into the trace file, and use that. Never use explain=u/p or explain plan; only use the Row Source Operation records, which will appear if the cursor is closed (exit the session -- then the cursor is closed for sure)
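A minimal recipe for getting the real, run-time plan into the tkprof output might look like this (the trace file name is hypothetical; note that no explain= option is passed to tkprof):

```sql
-- In SQL*Plus: turn on a level-8 (with waits) 10046 trace, run the statement,
-- then EXIT so the cursor is closed and the Row Source Operation rows
-- are written to the trace file.
ALTER SESSION SET events '10046 trace name context forever, level 8';

SELECT * FROM x;   -- the statement under investigation

EXIT

-- Then at the OS prompt, with NO explain=u/p option:
--   tkprof mydb_ora_12345.trc report.txt sys=no
```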

DB FILE SEQUENTIAL READ

Brian, December 09, 2005 - 4:27 pm UTC

Sorry, I don't follow you. I was using tkprof with a level 8 trace. In the raw trace file I see PARSING IN CURSOR #50, which is the select * from x. I see WAIT #50 event 'db file scattered read' once, and then WAIT #50 event 'db file sequential read' the rest of the time. Am I missing something?

Tom Kyte
December 10, 2005 - 5:16 am UTC

this:

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'X'

comes from explain=u/p on the command line to tkprof. I want to see:

Rows Row Source Operation
------- ---------------------------------------------------

1 TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=161 us)


ROW SOURCE OPERATION, that is the real plan used at run time. explain plan "lies"

DB FILE SEQUENTIAL READ

Brian, December 14, 2005 - 4:46 pm UTC

I guess my question is: why am I seeing so many db file sequential reads in the level 8 trace for a cursor that should be doing a full table scan? Is it because the cursor is opened and then each row is processed individually? So each db file sequential read is the process of stepping through every record?

Tom Kyte
December 14, 2005 - 5:47 pm UTC

compress the trace, rename it to .zyp (not .zip) and email it to me UNEDITED if you would like me to take a closer look.

UNEDITED.

DB FILE SEQUENTIAL READ

Brian, December 14, 2005 - 4:54 pm UTC

This is what I am seeing when the job actually finishes...


SELECT *
FROM
X


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 221185 1207.94 16831.25 2087759 47671964 0 221184
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 221187 1207.96 16831.28 2087759 47671964 0 221184

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
221184 TABLE ACCESS FULL X


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 2083571 16.06 15726.85
db file scattered read 82 0.04 0.97
latch free 4912 0.19 57.13
buffer busy waits 1 0.00 0.00

the db file sequential read is what I am concerned about.

Tom Kyte
December 14, 2005 - 8:26 pm UTC

(if you would like me to comment any further, see above...)

DB FILE SEQUENTIAL READ

Brian, December 15, 2005 - 11:41 am UTC

Will send the file to thomas.kyte@oracle.com; if this is the wrong email address, please use the information from this post to send me the correct one. The zipped file is 35 MB, so if your email account will not handle that, do you have another way for me to send it?

Thanks,

Brian

Extremely slow db file sequential read

A reader, February 10, 2006 - 11:35 pm UTC

Tom,

This was executed on my production box; no other jobs were running except my batch job. I turned on extended tracing on a SID and, among other things, found the slow timing on the select below.

Does this tell me I have a slow disk problem? A max wait of 97.11 seconds for db file sequential read seems extremely high. Is this strictly disk related or could there be something else?

Thank you.


=====================================================================


select DIMENSION_KEY, PRPRTY_CODE,
ROOM_TYPE_CODE, RATE_TYPE_CODE,
DOW, BKNG_REGION_CODE,
SRVC_DATE, FRCST_DATE,
ERROR, MEAN_BASE, MEAN_HSE,
MEAN_INFLNC
from
RM_FRCST_WS_SHWP WHERE ((PRPRTY_CODE=:1 AND
ROOM_TYPE_CODE=:2 AND RATE_TYPE_CODE=:3 AND DOW=:4 AND BKNG_REGION_CODE=:5)
OR (PRPRTY_CODE=:6 AND ROOM_TYPE_CODE=:7 AND RATE_TYPE_CODE=:8 AND DOW=:9
AND BKNG_REGION_CODE=:10) OR (PRPRTY_CODE=:11 AND ROOM_TYPE_CODE=:12 AND
RATE_TYPE_CODE=:13 AND DOW=:14 AND BKNG_REGION_CODE=:15) OR (PRPRTY_CODE=
:16 AND ROOM_TYPE_CODE=:17 AND RATE_TYPE_CODE=:18 AND DOW=:19 AND
BKNG_REGION_CODE=:20) OR (PRPRTY_CODE=:21 AND ROOM_TYPE_CODE=:22 AND
RATE_TYPE_CODE=:23 AND DOW=:24 AND BKNG_REGION_CODE=:25) OR (PRPRTY_CODE=
:26 AND ROOM_TYPE_CODE=:27 AND RATE_TYPE_CODE=:28 AND DOW=:29 AND
BKNG_REGION_CODE=:30) OR (PRPRTY_CODE=:31 AND ROOM_TYPE_CODE=:32 AND
RATE_TYPE_CODE=:33 AND DOW=:34 AND BKNG_REGION_CODE=:35) OR (PRPRTY_CODE=
:36 AND ROOM_TYPE_CODE=:37 AND RATE_TYPE_CODE=:38 AND DOW=:39 AND
BKNG_REGION_CODE=:40) OR (PRPRTY_CODE=:41 AND ROOM_TYPE_CODE=:42 AND
RATE_TYPE_CODE=:43 AND DOW=:44 AND BKNG_REGION_CODE=:45) OR (PRPRTY_CODE=
:46 AND ROOM_TYPE_CODE=:47 AND RATE_TYPE_CODE=:48 AND DOW=:49 AND
BKNG_REGION_CODE=:50) OR (PRPRTY_CODE=:51 AND ROOM_TYPE_CODE=:52 AND
RATE_TYPE_CODE=:53 AND DOW=:54 AND BKNG_REGION_CODE=:55) OR (PRPRTY_CODE=
:56 AND ROOM_TYPE_CODE=:57 AND RATE_TYPE_CODE=:58 AND DOW=:59 AND
BKNG_REGION_CODE=:60) OR (PRPRTY_CODE=:61 AND ROOM_TYPE_CODE=:62 AND
RATE_TYPE_CODE=:63 AND DOW=:64 AND BKNG_REGION_CODE=:65) OR (PRPRTY_CODE=
:66 AND ROOM_TYPE_CODE=:67 AND RATE_TYPE_CODE=:68 AND DOW=:69 AND
BKNG_REGION_CODE=:70) OR (PRPRTY_CODE=:71 AND ROOM_TYPE_CODE=:72 AND
RATE_TYPE_CODE=:73 AND DOW=:74 AND BKNG_REGION_CODE=:75) OR (PRPRTY_CODE=
:76 AND ROOM_TYPE_CODE=:77 AND RATE_TYPE_CODE=:78 AND DOW=:79 AND
BKNG_REGION_CODE=:80) OR (PRPRTY_CODE=:81 AND ROOM_TYPE_CODE=:82 AND
RATE_TYPE_CODE=:83 AND DOW=:84 AND BKNG_REGION_CODE=:85) OR (PRPRTY_CODE=
:86 AND ROOM_TYPE_CODE=:87 AND RATE_TYPE_CODE=:88 AND DOW=:89 AND
BKNG_REGION_CODE=:90) OR (PRPRTY_CODE=:91 AND ROOM_TYPE_CODE=:92 AND
RATE_TYPE_CODE=:93 AND DOW=:94 AND BKNG_REGION_CODE=:95) OR (PRPRTY_CODE=
:96 AND ROOM_TYPE_CODE=:97 AND RATE_TYPE_CODE=:98 AND DOW=:99 AND
BKNG_REGION_CODE=:100))AND((SRVC_DATE>=:101)) order by PRPRTY_CODE,
ROOM_TYPE_CODE, RATE_TYPE_CODE, DOW, BKNG_REGION_CODE, SRVC_DATE,
FRCST_DATE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 4 14.74 699.22 1833 26748 0 1953
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 14.76 699.24 1833 26748 0 1953

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 23

Rows Row Source Operation
------- ---------------------------------------------------
1953 SORT ORDER BY
1953 INLIST ITERATOR
1953 TABLE ACCESS BY GLOBAL INDEX ROWID RM_FRCST_WS_SHWP PARTITION: ROW LOCATION ROW LOCATION
10430 INDEX RANGE SCAN RM_FRCST_WS_SHWP_IX2 (object id 13792)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 9 0.00 0.00
SQL*Net message from client 9 0.33 0.41
db file sequential read 1833 97.11 461.98
free buffer waits 246 0.98 234.70
latch free 20 0.00 0.01
SQL*Net more data to client 86 0.00 0.00
********************************************************************************



Tom Kyte
February 11, 2006 - 1:51 pm UTC

It could be indicative of an impending disk problem - have you any messages in your system event/message log?

do you have a non-destructive tool you can run against that disk to see how it is responding?
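When a single db file sequential read is that slow, it can also help to confirm which segment the waited-on block belongs to. In v$session_wait, P1 is the file# and P2 the block# for this event; a sketch (dba_extents is itself slow to query, so this is for ad-hoc diagnosis only; :p1 and :p2 are bind placeholders for the observed values):

```sql
-- Which segment owns the block being waited on?
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :p1
   AND :p2 BETWEEN block_id AND block_id + blocks - 1;
```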

Slow memory?

A reader, March 10, 2006 - 3:03 pm UTC

Tom,

We're investigating the apparent slow-disk issue that seems to be causing the very high db file sequential read wait I reported in the previous message.

What if it turns out my server has slow memory? What wait events would crop up?

Tom Kyte
March 10, 2006 - 8:34 pm UTC

it doesn't think fast?

We don't really "wait" for memory, it just "happens", the CPU waits for memory, we use the cpu

More slow memory

A reader, March 13, 2006 - 1:04 pm UTC

So, since Oracle doesn't "wait" for memory, given two servers A and B with the same software and hardware configurations except B's memory is slower than A's, extended SQL trace doesn't help me identify why B is slower than A. How about statspack?

If I run the same job on A and B, B should be slower due to slow memory. In-memory sorts should be slower, right? Would the slowdown be reflected on statspack? If not, where?

Thanks.

Tom Kyte
March 13, 2006 - 9:32 pm UTC

it would show up likely as MORE CPU TIME - as the cpu is the thing that accesses memory.

scattered read but an Index Scan

VLS, March 17, 2006 - 8:45 am UTC

Hi Tom,

I have a query which checks the sql_text of all sessions waiting for "db file scattered read". One query repeatedly shows up waiting for "db file scattered read", but the plan in v$sql_plan shows that it is doing a unique index scan. How is this possible? How can an index scan show waits for db file scattered read?

The query to check the sql_text with wait events is as under :

select event, sql_text, hash_value
from v$session_wait w, v$sqlarea a, v$session s
where s.sid=w.sid
and s.sql_hash_value=a.hash_value
and s.sql_address=a.address
and w.event='db file scattered read'
and s.sid not in
(select distinct sid from v$mystat)
/

The output from this query is :

Event SQL_TEXT HASH_VALUE
--------------------------------------------------------------------------------------------------
db file scattered read SELECT SUBSTR(PARTY_NUMBER,2,1) FROM HZ_PARTIES WHERE PAR 3440523203
TY_ID = TO_NUMBER(:b1)


and the plan for hash_value 3440523203 shows

select operation, options, object_name, object_owner from v$sql_plan
where hash_value=3440523203;

OPERATION OPTIONS OBJECT_NAME
------------------------------ ------------------------------ -------------------------------------
OBJECT_OWNER
------------------------------
SELECT STATEMENT


TABLE ACCESS BY INDEX ROWID HZ_PARTIES
AR

INDEX UNIQUE SCAN HZ_PARTIES_U1
AR

Party_ID is a numeric column in the table.

Normally, sessions wait for "db file scattered read" while doing a full table scan or an index fast full scan, so why in this case is it waiting for db file scattered read?

The DB Version is 9206.

Regards
VLS



Tom Kyte
March 17, 2006 - 5:38 pm UTC

well, first of all - v$ tables are *not read consistent*, so the data from

v$session_wait w, v$sqlarea a, v$session s

could be all as of slightly different times (meaning by the time you got to the wait, the sql that was being executed by the session is "done" and we've moved onto something else already. or by the time you got to the sql for the session waiting on db file scattered read - the sql it was executing had changed and it in fact isn't waiting on that anymore. all depending on the order)

if that is the hz_parties view from CRM - are you sure that is the plan? That is a rather nasty view normally.

To VLS

A reader, March 19, 2006 - 4:37 am UTC

Additionally, check the definition of v$session_wait.
You need to add "and wait_time=0" or "status='WAITING'" (or
something like that) to your query depending on the version
of Oracle you are using.
Otherwise the query just shows what an active process
last waited on. There may not have been any sequential
read waits for the current query because most of the
blocks were already in cache


Tom Kyte
March 19, 2006 - 6:53 am UTC

very good - indeed.
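Folding that advice into the earlier query gives something like this sketch (the only change is the wait_time filter; on other versions one might use state = 'WAITING' instead):

```sql
-- Only sessions waiting on this event RIGHT NOW, not "last waited on it"
SELECT w.event, a.sql_text, a.hash_value
  FROM v$session_wait w, v$sqlarea a, v$session s
 WHERE s.sid = w.sid
   AND s.sql_hash_value = a.hash_value
   AND s.sql_address = a.address
   AND w.event = 'db file scattered read'
   AND w.wait_time = 0          -- 0 means currently in the wait
   AND s.sid NOT IN (SELECT sid FROM v$mystat);
```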

Reduce Wait

Vikas Khanna, April 10, 2006 - 1:47 pm UTC

Hi Tom,

We have a warehouse in which we load data using the SQL*Loader tool in conventional mode. We have been experiencing huge user I/O waits when the data is being inserted into the child table.

The wait event is 'db file sequential read', which is basically the foreground process waiting for a buffer that is not in the cache, so a physical I/O has to be performed.

Now we have two tables, Parent and Child. During the load process we concatenate the files containing the data and load the data pertaining to the parent table; then we load the child data, and each record is validated as having a parent record.

Parent table : Primary Key
Local Range Partitioned Table on Date
Load : 10 million rows

Child Table : Primary Key + Foreign Key referencing Primary Key of Parent table + Index on Foreign Key
Local Range Partitioned Table on Date
Load : 10-15 million rows

I have even observed the wait time to be 100% during the execution of the INSERT into the Child table. Even AWR & ADDM suggest reducing the high I/O-based waits.

Performance statistics:

Parent Table : 211 records/sec
Child Table : 81 records/sec

There is no SQL transformation being done in either case, just a plain load. The datafiles for tables and indexes are distributed across 9 devices shared by 3 device controllers, one controller managing 3 disks. These tables are created in an LMT with the autoallocate option and have extents across all datafiles so as to distribute I/O.

SGA buffer Cache : 4.5 GB

Please let us know what can be done to make the load of the child data faster, minimising the wait events.

Just a thought: could we capture the PK index of the Parent table in the cache, so as to reduce the waits on bringing buffers into the cache for the parent-record lookup via the index?

Still, your advice will be appreciated!

Thanks in advance,

Tom Kyte
April 11, 2006 - 10:57 am UTC

how could wait time be "100%" - that doesn't compute (that means you never finish...)

what segment are you experiencing the physical IO on.

why not direct path this data? and validate constraint in bulk after the fact?
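A direct-path variant of the load could look like this sketch (table, constraint, and staging-table names are hypothetical; the EXCEPTIONS table is the one created by utlexcpt.sql; disabling and re-enabling the FK needs a window, which is exactly the trade-off the poster raises next):

```sql
-- Disable RI, direct-path load, then validate the constraint in bulk
ALTER TABLE child DISABLE CONSTRAINT child_parent_fk;

INSERT /*+ APPEND */ INTO child
SELECT * FROM child_staging;
COMMIT;

-- Re-validate all rows at once; rows with no parent land in EXCEPTIONS
ALTER TABLE child ENABLE VALIDATE CONSTRAINT child_parent_fk
  EXCEPTIONS INTO exceptions;
```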

Conventional Load

Vikas, April 12, 2006 - 8:34 am UTC

Hi Tom,

My point was that the wait on db file sequential read was about equal to the CPU time Oracle spent executing the task:

if the CPU time spent is 'x' secs then the elapsed time was '2x', and the waits listed below in the trace all contribute to a single wait event, 'db file sequential read'.

Now, we are loading data in conventional mode simply because we are not sure whether the child record reaches us first in a file via the application servers' queue, or the parent record does.

We need to maintain RI throughout the database. Hence we load the parent records for that hour and then load the child records for that hour. This process continues every hour. All the child records for which we don't receive a parent get rejected, and we load them into another file to be loaded again at the end of the day.

If we try to use direct path load, we won't have any time window in which to re-enable RI, since this process just keeps going on an hourly basis.

Please suggest if you have other ideas, which can improve the performance of the load.


Tom Kyte
April 12, 2006 - 9:19 am UTC

what do you use to load. what is your current load process. have you used sql_trace and tkprof on it.

Need your help please

A reader, May 04, 2006 - 12:34 am UTC

Tom,

This process, which runs once a month, is taking about
21 hours to run. I am including the part of the code which
I have identified as the low-hanging fruit. Can you offer
some suggestions here...

BEGIN

EXECUTE IMMEDIATE 'truncate TABLE nvrm_chas_temp';

OPEN his_CUR;

LOOP

FETCH his_CUR BULK COLLECT INTO hisTab LIMIT 5000 ;

nc := 0;

ChassisTab.DELETE;

FOR K IN 1 .. hisTab.COUNT LOOP

out4hours_PROC(his1_REC, hisTab(K));
his1_REC := hisTab(K);

END LOOP;

FORALL K IN 1 .. ChassisTab.COUNT

INSERT INTO nvrm_chas_temp VALUES ChassisTab(K);

COMMIT;

EXIT WHEN his_CUR%NOTFOUND;

END LOOP;

CLOSE his_CUR;

COMMIT;

EXCEPTION

WHEN OTHERS THEN

IF his_CUR%ISOPEN THEN

CLOSE his_CUR;

END IF;

ROLLBACK;

RAISE_APPLICATION_ERROR(-20006,'Error in nvrm_PKG.chassis - '||SQLERRM);

END chassis;



end nvrm_PKG;

------------------------------
TKPROF REPORTS HERE...


SELECT GC.ACTION, GC.BOOKING, GV.COOP_IN_DATE, GC.SSL_USER_CODE,
S.LONG_DESCRIPTION
FROM
GATE_CONTAINERS GC ,GATE_VISIT GV ,SSL S WHERE GC.VISIT = :B2 AND
GC.REFERENCE_ID = :B1 AND GC.VISIT = GV.VISIT AND GC.SSL_USER_CODE =
S.CODE(+)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 665514 97.76 98.29 0 92 0 0
Fetch 665514 142.00 5536.31 1041809 6676163 0 665513
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1331029 239.76 5634.61 1041809 6676255 0 665513

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1041809 0.58 5379.59
latch: shared pool 133 0.00 0.00
latch: library cache 23 0.00 0.00
latch: cache buffers chains 2 0.01 0.01
********************************************************************************

select default$
from
col$ where rowid=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.00 0.00 0 4 0 2

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
0 TABLE ACCESS BY USER ROWID COL$ (cr=0 pr=0 pw=0 time=0 us)

********************************************************************************

SELECT GC.ACTION, GC.BOOKING, GV.COOP_IN_DATE, GC.SSL_USER_CODE,
S.LONG_DESCRIPTION
FROM
GATE_CONTAINERS@TRIPOLI GC ,GATE_VISIT@TRIPOLI GV ,SSL@TRIPOLI S WHERE
GC.VISIT = :B2 AND GC.REFERENCE_ID = :B1 AND GC.VISIT = GV.VISIT AND
GC.SSL_USER_CODE = S.CODE(+)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 822790 123.64 1435.69 18 0 583 0
Fetch 822789 81.84 9357.44 0 0 0 822782
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1645580 205.48 10793.16 18 0 583 822782

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to dblink 1646120 0.00 2.48
SQL*Net message from dblink 1646119 0.45 10562.00
latch: shared pool 289 0.00 0.00
latch: library cache 30 0.00 0.00
db file sequential read 18 0.00 0.06
********************************************************************************

SELECT GC.ACTION, GV.COOP_IN_DATE
FROM
GATE_CONTAINERS GC ,GATE_VISIT GV WHERE GC.VISIT = :B2 AND GC.REFERENCE_ID =
:B1 AND GC.VISIT = GV.VISIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 40252 6.01 6.08 0 0 0 0
Fetch 40252 8.82 415.03 78311 313314 0 37704
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 80505 14.84 421.11 78311 313314 0 37704

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 78311 0.33 404.81
latch: shared pool 10 0.00 0.00
********************************************************************************

SELECT S.LONG_DESCRIPTION
FROM
SSL S WHERE S.CODE = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 3966 0.53 0.45 0 0 0 0
Fetch 3966 0.10 0.08 0 7932 0 3966
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7933 0.64 0.53 0 7932 0 3966
**********************


Tom Kyte
May 04, 2006 - 2:16 am UTC

the low hanging fruit here is...

You have written some slow by slow code (row by row processing), you will have to review your algorithm to see where you can optimize that.

For example, you are executing some sql statements HUNDREDS OF THOUSANDS OF TIMES. seems like you really meant to "join", not

for x in (select ... )
loop
do sql based on x
do sql based on x
....


You really meant to JOIN more tables in the first query and NOT run a separate sql for each row.
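In outline, the set-based rewrite Tom describes pushes the per-row lookups into the driving query itself; a hypothetical sketch (all table and column names here are placeholders, not the poster's real schema):

```sql
-- One INSERT..SELECT replaces the fetch-loop plus the per-row cursors
INSERT INTO nvrm_chas_temp (chassis, action, coop_in_date)
SELECT h.chassis, gc.action, gv.coop_in_date
  FROM history h
  JOIN gate_containers gc
    ON gc.visit        = h.out_visit
   AND gc.reference_id = h.out_reference_id
  JOIN gate_visit gv
    ON gv.visit = gc.visit
  LEFT JOIN ssl s
    ON s.code = gc.ssl_user_code
 WHERE h.out_date IS NOT NULL;  -- plus whatever filtering the procedure did row by row
```

Each of the 665,514 executions of the per-row cursor becomes part of a single join that the optimizer can run once.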

follow up

A reader, May 04, 2006 - 9:26 am UTC

Tom,

This is the sql that gets executed within the bulk collect;
how can I join it, as you put it?

PROCEDURE out4hours_PROC(his_rec_1 IN chas_his_rec_TYPE
,his_rec_2 IN chas_his_rec_TYPE)
IS
v_booking gate_containers.booking%TYPE := NULL;
v_action gate_containers.action%TYPE := NULL;
v_coop_date DATE := NULL;
v_user ssl.code%TYPE := NULL;
v_user_desc ssl.long_description%TYPE := NULL;

CURSOR fg_cur(pvisit gate_visit.visit%TYPE
,pref_id gate_containers.reference_id%TYPE
)
IS
SELECT gc.action, gc.booking, gv.coop_in_date, gc.ssl_user_code, s.long_description
FROM gate_containers gc
,gate_visit gv
,ssl s
WHERE gc.visit = pvisit
AND gc.reference_id = pref_id
AND gc.visit = gv.visit
AND gc.ssl_user_code = s.code(+);

CURSOR tg_cur(pvisit gate_visit.visit%TYPE
,pref_id gate_containers.reference_id%TYPE
)
IS
SELECT gc.action, gc.booking, gv.coop_in_date, gc.ssl_user_code, s.long_description
FROM gate_containers@tripoli gc
,gate_visit@tripoli gv
,ssl@tripoli s
WHERE gc.visit = pvisit
AND gc.reference_id = pref_id
AND gc.visit = gv.visit
AND gc.ssl_user_code = s.code(+);

CURSOR fa_cur(pvisit gate_visit.visit%TYPE
,pref_id gate_containers.reference_id%TYPE
)
IS
SELECT gc.action, gv.coop_in_date
FROM gate_containers gc
,gate_visit gv
WHERE gc.visit = pvisit
AND gc.reference_id = pref_id
AND gc.visit = gv.visit;

CURSOR ta_cur(pvisit gate_visit.visit%TYPE
,pref_id gate_containers.reference_id%TYPE
)
IS
SELECT gc.action, gv.coop_in_date
FROM gate_containers@tripoli gc
,gate_visit@tripoli gv
WHERE gc.visit = pvisit
AND gc.reference_id = pref_id
AND gc.visit = gv.visit;

BEGIN
IF his_rec_1.chassis = his_rec_2.chassis THEN
IF his_rec_1.out_date IS NULL OR his_rec_1.out_visit IS NULL OR his_rec_1.out_reference_id IS NULL THEN
RAISE EXIT_EXCEPTION;
END IF;
IF his_rec_2.in_visit IS NULL OR his_rec_2.in_reference_id IS NULL THEN
RAISE EXIT_EXCEPTION;
END IF;
-- Chassis has to have left thru Coop Depot NAKED
IF his_rec_1.area = 'F' THEN
OPEN fg_cur(his_rec_1.out_visit,his_rec_1.out_reference_id);
FETCH fg_cur INTO v_action, v_booking, v_coop_date, v_user, v_user_desc;
CLOSE fg_cur;
IF v_action IS NULL
THEN
OPEN tg_cur(his_rec_1.out_visit, his_rec_1.out_reference_id);
FETCH tg_cur INTO v_action, v_booking, v_coop_date, v_user, v_user_desc;
CLOSE tg_cur;
IF v_action IS NULL
THEN
RAISE EXIT_EXCEPTION;
END IF;
END IF;
ELSE
OPEN tg_cur(his_rec_1.out_visit,his_rec_1.out_reference_id);
FETCH tg_cur INTO v_action, v_booking, v_coop_date, v_user, v_user_desc;
CLOSE tg_cur;
IF v_action IS NULL
THEN
RAISE EXIT_EXCEPTION;
END IF;
END IF;
IF v_coop_date IS NULL OR v_action <> global_PKG.DELIVERED_NAKED_CHASSIS THEN
RAISE EXIT_EXCEPTION;
END IF;
-- Chassis has to have returned thru Coop Depot NAKED
OPEN fa_cur(his_rec_2.in_visit,his_rec_2.in_reference_id);
FETCH fa_cur INTO v_action, v_coop_date;
CLOSE fa_cur;
IF v_action IS NULL
THEN
OPEN ta_cur(his_rec_2.in_visit,his_rec_2.in_reference_id);
FETCH ta_cur INTO v_action, v_coop_date;
CLOSE ta_cur;
IF v_action IS NULL
THEN
RAISE EXIT_EXCEPTION;
END IF;
END IF;

IF v_coop_date IS NULL OR v_action <> global_PKG.RECEIVED_NAKED_CHASSIS THEN
RAISE EXIT_EXCEPTION;
END IF;
-- and, Chassis has to have out for more than 4 hours...
IF his_rec_2.in_date - his_rec_1.out_date > 1/6 THEN
write_to_chas_temp_PROC(his_rec_1
,his_rec_2.in_date
,his_rec_2.in_visit
,v_booking
,v_user
,v_user_desc);
END IF;
END IF;
EXCEPTION
WHEN EXIT_EXCEPTION THEN
NULL;
WHEN OTHERS THEN
ROLLBACK;
RAISE_APPLICATION_ERROR(-20006,'Error in nvrm_PKG.out4hours_PROC - '||SQLERRM);
END;


Tom Kyte
May 04, 2006 - 4:42 pm UTC

hmm, why all of the open/fetch/closes - a select into would have been so much nicer and more efficient.

YOU will have to look at YOUR algorithms and determine where YOU can make efficiencies and reduce the number of individual sql statements. I cannot look at a single procedure in isolation and tell you how to fix it - the join probably needs to happen at the layer ABOVE this procedure (so this procedure isn't really even called)

select into...

A reader, May 12, 2006 - 3:06 pm UTC

How can I do a select into when I have two variables? Isn't select into for when you only have one variable?


Tom Kyte
May 12, 2006 - 9:36 pm UTC

eh?

select a, b, c, ..... z
into :a, :b, :c, .... :z
from t where.....
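Expanded into PL/SQL, the open/fetch/close triple collapses into a single SELECT ... INTO with one target variable per selected column; a sketch using the poster's own columns:

```sql
DECLARE
  v_action  gate_containers.action%TYPE;
  v_booking gate_containers.booking%TYPE;
BEGIN
  SELECT gc.action, gc.booking
    INTO v_action, v_booking      -- one variable per column in the select list
    FROM gate_containers gc
   WHERE gc.visit        = :pvisit
     AND gc.reference_id = :pref_id;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    v_action := NULL;             -- same effect as the old "fetch found nothing" path
END;
/
```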

Amazed....

Mark S. Wooldridge, May 12, 2006 - 11:21 pm UTC

The initial question and response just show the depth of Tom's knowledge -- way beyond what even an experienced Oracle Developer/DBA might know.

Most know that IO is critical to performance and scalability. Tom's grasp of whether this is a single-block index read or a multiblock read, whether the data sits in adjacent or nearby blocks, and what that means for performance, is amazing.

That Tom could deduce from such a simple question that the data might span only a few adjacent blocks is simply amazing.

Thanks for providing such insight to the performance of Oracle.

db file sequential read

dxl, August 11, 2006 - 12:41 am UTC

Tom

We have the query below running on 9.2.0.5.
This query is running on the test box with no problems. It returns the result in a couple of seconds.

We then have an identical production box which runs the same query against similar data (returning similar number of rows).

Now on the production box the query takes approx 10 seconds to run; if you execute it again immediately, it takes approx 2 seconds.
A tkprof from both runs shows db file sequential read events in the trace file for the first, slower run, and none of these events for the second, quicker execution -- so can I conclude that the first time it ran it needed to get the data from disk, and the second time it ran in 2 seconds because it retrieved the data from the buffer cache in memory? Is that correct?

OK, now if I wait approx half an hour the same thing occurs again, i.e. the query again takes 10 seconds and does lots of physical I/O, and if you run it again immediately it runs in 2 seconds.
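One way to test the aging-out theory is to count how many blocks of the relevant segments are cached just before each run; a sketch (a v$bh join; the object names are taken from the query shown below, and v$bh requires appropriate privileges):

```sql
-- How many blocks of each segment are currently in the buffer cache?
SELECT o.object_name, COUNT(*) AS cached_blocks
  FROM v$bh b, dba_objects o
 WHERE b.objd = o.data_object_id
   AND o.object_name IN ('CHARGE', 'NORMALISED_EVENT')
 GROUP BY o.object_name;
```

If the counts collapse during the half-hour gap, something else on the box is flushing those blocks out of the cache.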


This is the query and tkprof for the first run which takes 10 seconds :

(I realise that hints are being used, but that is how the app is coded and I don't have control over that. I have tested it with the hints removed and the same plan is produced, so I don't think they affect this problem.)

SELECT /*+INDEX(c I_CHARGE_ACCOUNT) INDEX(ne I_NORMALISED_EVENT) INDEX(a U_ACCOUNT)*/
ROWIDTOCHAR (c.ROWID) c_rowid
, a.customer_node_id cnh_customer_node_id
, c.charge_id
, c.normalised_event_id
, c.service_id c_service_id
, c.charge_id c_charge_id
, c.charge c_charge
, c.proportion c_proportion
, c.currency_id c_currency_id
, c.account_id c_account_id
, c.uninvoiced_ind_code c_uninvoiced_ind_code
, c.gl_code_id c_gl_code_id
, c.from_gl_code_id c_from_gl_code_id
, c.receivable_type_id c_receivable_type_id
, c.normalised_event_id c_normalised_event_id
, c.service_id c_service_id
, c.customer_node_id c_customer_node_id
, c.charge_start_date c_charge_start_date
, c.charge_date c_charge_date
, c.tariff_id c_tariff_id
, c.subtotal_id c_subtotal_id
, c.subtotal_date c_subtotal_date
, c.adjustment_id c_adjustment_id
, c.payment_id c_payment_id
, c.invoice_txt c_invoice_txt
, c.from_invoice_id c_from_invoice_id
, c.invoice_id c_invoice_id
, c.invoice_section_nr c_invoice_section_nr
, c.invoice_item_nr c_invoice_item_nr
, c.key_value c_key_value
, c.general_1 c_general_1
, c.general_2 c_general_2
, c.general_3 c_general_3
, c.general_4 c_general_4
, c.general_5 c_general_5
, c.general_6 c_general_6
, c.general_7 c_general_7
, c.general_8 c_general_8
, c.general_9 c_general_9
, c.general_10 c_general_10
, ne.bill_run_id ne_bill_run_id
, ne.c_party_ton_code ne_c_party_ton_code
, ne.c_party_location_code ne_c_party_location_code
, ne.full_path ne_full_path
, ne.charge_start_date ne_charge_start_date
, ne.period_start_date ne_period_start_date
, ne.period_end_date ne_period_end_date
, ne.event_type_code ne_event_type_code
, ne.event_sub_type_code ne_event_sub_type_code
, ne.DURATION ne_duration
, ne.volume ne_volume
, ne.general_1 ne_general_1
, ne.general_3 ne_general_3
, ne.general_6 ne_general_6
, ne.general_7 ne_general_7
, ne.general_9 ne_general_9
FROM charge c, normalised_event ne, ACCOUNT a
WHERE a.customer_node_id = 20006640
AND a.account_id = c.account_id
AND ( ne.bill_run_id IS NULL
OR ne.bill_run_id = 20001448
OR c.from_invoice_id IS NOT NULL
)
AND c.uninvoiced_ind_code = 1
AND c.normalised_event_id = ne.normalised_event_id(+)
AND c.charge_start_date = ne.charge_start_date(+)
AND ( c.charge_date < '11/AUG/2006'
OR ( c.charge_date = '11/AUG/2006'
AND ( ne.bill_run_id = 20001448
OR c.from_invoice_id IS NOT NULL
OR c.normalised_event_id IS NULL
)
)
)
AND c.charge_date >= '11/JUN/2006'
ORDER BY c.service_id, c.charge_start_date, c.normalised_event_id
, c.charge_id

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.11 0.10 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 45 0.76 14.88 1554 3273 0 658
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 47 0.88 14.99 1554 3273 0 658

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 5

Rows Row Source Operation
------- ---------------------------------------------------
658 SORT ORDER BY
658 CONCATENATION
0 FILTER
0 NESTED LOOPS OUTER
0 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID OBJ#(265493)
1 INDEX RANGE SCAN OBJ#(288545) (object id 288545)
0 TABLE ACCESS BY LOCAL INDEX ROWID OBJ#(265802) PARTITION: 122 122
13 INDEX RANGE SCAN OBJ#(289093) PARTITION: 122 122 (object id 289093)
0 PARTITION RANGE ITERATOR PARTITION: KEY KEY
0 TABLE ACCESS BY LOCAL INDEX ROWID OBJ#(267694) PARTITION: KEY KEY
0 INDEX RANGE SCAN OBJ#(290084) PARTITION: KEY KEY (object id 290084)
658 FILTER
658 NESTED LOOPS OUTER
658 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID OBJ#(265493)
1 INDEX RANGE SCAN OBJ#(288545) (object id 288545)
658 PARTITION RANGE ITERATOR PARTITION: 61 121
658 TABLE ACCESS BY LOCAL INDEX ROWID OBJ#(265802) PARTITION: 61 121
1253 INDEX RANGE SCAN OBJ#(289093) PARTITION: 61 121 (object id 289093)
658 PARTITION RANGE ITERATOR PARTITION: KEY KEY
658 TABLE ACCESS BY LOCAL INDEX ROWID OBJ#(267694) PARTITION: KEY KEY
658 INDEX RANGE SCAN OBJ#(290084) PARTITION: KEY KEY (object id 290084)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 45 0.00 0.00
SQL*Net more data to client 46 0.00 0.00
db file sequential read 1554 0.21 14.44
SQL*Net message from client 45 2.76 3.62



********************************************************************************


In the trace file I get:

PARSE #1:c=110000,e=107545,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=4,tim=116502878890
BINDS #1:
EXEC #1:c=10000,e=968,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=116502881490
WAIT #1: nam='SQL*Net message to client' ela= 8 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net more data to client' ela= 131 p1=1650815232 p2=2003 p3=0
WAIT #1: nam='db file sequential read' ela= 6535 p1=5 p2=1842272 p3=1
WAIT #1: nam='db file sequential read' ela= 10991 p1=99 p2=266402 p3=1
WAIT #1: nam='db file sequential read' ela= 6341 p1=99 p2=267575 p3=1
WAIT #1: nam='db file sequential read' ela= 24496 p1=99 p2=270197 p3=1
WAIT #1: nam='db file sequential read' ela= 7804 p1=99 p2=274137 p3=1
WAIT #1: nam='db file sequential read' ela= 7242 p1=99 p2=275385 p3=1
WAIT #1: nam='db file sequential read' ela= 10075 p1=99 p2=277797 p3=1
WAIT #1: nam='db file sequential read' ela= 10558 p1=104 p2=203646 p3=1
WAIT #1: nam='db file sequential read' ela= 11718 p1=104 p2=211782 p3=1
WAIT #1: nam='db file sequential read' ela= 7996 p1=127 p2=153816 p3=1
WAIT #1: nam='db file sequential read' ela= 11838 p1=127 p2=162458 p3=1
WAIT #1: nam='db file sequential read' ela= 7707 p1=87 p2=54213 p3=1
WAIT #1: nam='db file sequential read' ela= 10642 p1=87 p2=53305 p3=1
WAIT #1: nam='db file sequential read' ela= 6986 p1=7 p2=207221 p3=1
WAIT #1: nam='db file sequential read' ela= 10837 p1=77 p2=267984 p3=1
WAIT #1: nam='db file sequential read' ela= 7543 p1=77 p2=271713 p3=1
WAIT #1: nam='db file sequential read' ela= 9057 p1=81 p2=304885 p3=1
WAIT #1: nam='db file sequential read' ela= 4039 p1=91 p2=16657 p3=1
WAIT #1: nam='db file sequential read' ela= 7832 p1=91 p2=17714 p3=1
WAIT #1: nam='db file sequential read' ela= 10689 p1=8 p2=1047485 p3=1
WAIT #1: nam='db file sequential read' ela= 9550 p1=8 p2=1046584 p3=1
WAIT #1: nam='db file sequential read' ela= 10162 p1=5 p2=1463879 p3=1
WAIT #1: nam='db file sequential read' ela= 5185 p1=44 p2=425352 p3=1
WAIT #1: nam='db file sequential read' ela= 10111 p1=55 p2=1173333 p3=1
WAIT #1: nam='db file sequential read' ela= 10493 p1=55 p2=1177695 p3=1
WAIT #1: nam='db file sequential read' ela= 9574 p1=70 p2=730834 p3=1
WAIT #1: nam='db file sequential read' ela= 5333 p1=70 p2=733259 p3=1
WAIT #1: nam='db file sequential read' ela= 6111 p1=70 p2=736069 p3=1
WAIT #1: nam='db file sequential read' ela= 8624 p1=77 p2=304387 p3=1
WAIT #1: nam='db file sequential read' ela= 6567 p1=49 p2=50110 p3=1


Autotrace shows:


Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
3277 consistent gets
1810 physical reads
0 redo size
117054 bytes sent via SQL*Net to client
2367 bytes received via SQL*Net from client
45 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
658 rows processed



When it is run immediately following the first run,
all the db file sequential reads disappear and autotrace shows:

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3275 consistent gets
1 physical reads
0 redo size
117054 bytes sent via SQL*Net to client
2367 bytes received via SQL*Net from client
45 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
658 rows processed




Then, as I said before, if I wait 30 minutes it again takes 10 seconds.


The db cache is 9Gb.

What would cause this behaviour? Does this test prove that the data is being aged out of the cache in as little as half an hour, and that it therefore needs to go to disk to re-read the rows?

Is there anything else I should be looking for or testing?

The init parameters for both test and prod databases are identical except for db_file_multiblock_read_count which is 4 on the test box and 8 on the prod box.


many thanks for your time.

Tom Kyte
August 11, 2006 - 10:36 am UTC

the tkprof is pretty clear:

  db file sequential read                      1554        0.21         14.44

almost all of your time there was spent waiting for IO.

and if you run it and trace it again immediately, I would suspect no IO waits (found in the buffer cache)

and if you are waiting 30 minutes and then running it and seeing the IO waits again, that would definitely be indicative of the blocks no longer appearing in the buffer cache and having to be re-read from disk.
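One way to test that theory is to watch the buffer cache directly. A hedged sketch (v$bh needs suitable privileges, and the table names are taken from the query above):

```sql
-- Count the blocks of each segment currently in the buffer cache; run this
-- right after the query and again 30 minutes later to watch them age out.
SELECT o.object_name, COUNT(*) AS cached_blocks
  FROM v$bh b, dba_objects o
 WHERE b.objd = o.data_object_id
   AND o.object_name IN ('CHARGE', 'NORMALISED_EVENT', 'ACCOUNT')
 GROUP BY o.object_name;
```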


db file sequential reads are single block IO's - the setting of your multiblock read count isn't going to affect their performance.


You might look at this index:

    658        TABLE ACCESS BY LOCAL INDEX ROWID OBJ#(265802) PARTITION: 61 121
   1253         INDEX RANGE SCAN OBJ#(289093) PARTITION: 61 121 (object id 289093)

to see if something can be added to it to reduce the number of false positives.  What we see there is the index thought there could be 1,253 rows that matched - but when we did the 1,253 table access by local index rowid's - we found that only about half of them did.  For example:


ops$tkyte%ORA10GR2> create table t
  2  as
  3  select abs(dbms_random.random) id1, mod(rownum,2) id2, a.*
  4    from all_objects a
  5   where rownum <= 1253;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx1 on t(id1);

Index created.

ops$tkyte%ORA10GR2> create index t_idx2 on t(id1,id2);

Index created.

ops$tkyte%ORA10GR2> alter session set sql_trace=true;

Session altered.

ops$tkyte%ORA10GR2> set autotrace traceonly statistics
ops$tkyte%ORA10GR2> select /*+ index( t t_idx1 ) */ * from t where id1 >= 0 and id2 = 0;

626 rows selected.


Statistics
----------------------------------------------------------
        268  recursive calls
          0  db block gets
       1266  consistent gets
          4  physical reads
          0  redo size
      52199  bytes sent via SQL*Net to client
        836  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        626  rows processed

ops$tkyte%ORA10GR2> select /*+ index( t t_idx2 ) */ * from t where id1 >= 0 and id2 = 0;

626 rows selected.


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        645  consistent gets
          4  physical reads
          0  redo size
      52199  bytes sent via SQL*Net to client
        836  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        626  rows processed

ops$tkyte%ORA10GR2> set autotrace off

the tkprof will show:

select /*+ index( t t_idx1 ) */ * from t where id1 >= 0 and id2 = 0

Rows     Row Source Operation
-------  ---------------------------------------------------
    626  TABLE ACCESS BY INDEX ROWID T (cr=1217 pr=4 pw=0 time=7780 us)
   1253   INDEX RANGE SCAN T_IDX1 (cr=47 pr=4 pw=0 time=1478 us)(object id 73812)
********************************************************************************
select /*+ index( t t_idx2 ) */ * from t where id1 >= 0 and id2 = 0

Rows     Row Source Operation
-------  ---------------------------------------------------
    626  TABLE ACCESS BY INDEX ROWID T (cr=622 pr=4 pw=0 time=8355 us)
    626   INDEX RANGE SCAN T_IDX2 (cr=47 pr=4 pw=0 time=3321 us)(object id 73813)



so, perhaps if you added something extra to the end of your existing index, you could make it more selective for this query and avoid some 600 IOs to the database table itself. 
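As a hedged sketch only - the real columns behind OBJ#(289093) are not visible in the obfuscated plan, so the column list below is an assumption based on the predicates in the query:

```sql
-- If the existing local index covers only the leading columns, appending
-- the filter column lets the range scan reject rows in the index itself,
-- before the (expensive) table access by rowid.
CREATE INDEX charge_i2
    ON charge (account_id, charge_date, uninvoiced_ind_code)
    LOCAL;
```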

Thanks

dxl, August 13, 2006 - 5:55 pm UTC

Thanks very much for the advice, I will definitely look into altering the index to make it more selective.

couple of questions though:

1) If the blocks are not being found in the buffer cache after eg 30 minutes of database usage, does this mean that they are being aged out due to lots of other selects on other data? For a 9GB buffer cache, does this indicate (in your opinion) a high amount of usage, or is that normal throughput? What is the best way of looking at throughput or getting statistics on throughput? Statspack? What figures should I monitor?

How about looking at the amount of archive logs produced over a time period?

2) Should I look at any other methods of trying to keep the blocks in the buffer cache (if this query is considered important), e.g. using the KEEP pool for the tables used, or any other method you can suggest?

Thanks for your time

Tom Kyte
August 14, 2006 - 10:25 am UTC

1) could be selects, could be inserts/updates/deletes.

There is only one answer for this "it depends"

Define "throughput" for me here. Statspack can show you things like "number of commits per second" but that may or may not have some relationship to YOUR concept of throughput (what if 10% of your transactions are customer facing, the other 90% in the background, you'd likely be most interested in the 10% - but statspack isn't going to show you that)

measuring the amount of redo generated per X units of time - what would that tell you exactly? How would you translate that into "our application did M useful things because we generated X units of redo". "throughput" can (should) be a personal thing - an application thing.

I can measure throughput on asktom easily for example - because my application measures it (audit trail). I decided what I wanted to track as a metric (for me, it boiled down to a) how many pages per second viewed, b) what pages are viewed, c) response times by page type)

2) only if you know what segments (think segments, not tables here - indexes, etc) would truly benefit from this. v$segment_statistics might be useful. And remember, this is a sort of 'micro optimization'. Something to be looked at perhaps after looking at the "low hanging fruit" (sql that does the most work in your system)
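A hedged sketch of using v$segment_statistics for that purpose:

```sql
-- Rank segments (tables AND indexes) by physical reads; only the top few
-- are usually worth considering for the KEEP pool.
SELECT owner, object_name, object_type, value AS physical_reads
  FROM v$segment_statistics
 WHERE statistic_name = 'physical reads'
 ORDER BY value DESC;
```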

Satheesh Babu.S, October 19, 2006 - 11:54 pm UTC

Tom,
  I am having a "db file sequential read" wait event. I am trying to get the file# using the P1 value, but no such file# exists anywhere.

SQL> select event,state,p1,p2,p3 from v$session_wait where sid=23;

EVENT                                                            STATE                       P1         P2         P3
---------------------------------------------------------------- ------------------- ---------- ---------- ----------
db file sequential read                                          WAITED KNOWN TIME          210     195167          1

I know P1 = file id and P2 = block id, but when I try to query v$datafile or dba_extents I don't find any datafile with file#=210


SQL> select name from v$datafile where file#=210;

no rows selected


SQL> select file_id from dba_extents where file_id=210;

no rows selected

SQL> select name from v$tempfile where file#=210;

no rows selected

Regards,
Satheesh Babu.S 

Tom Kyte
October 20, 2006 - 4:13 am UTC

what about where file#=10 from v$tempfile (eg: assuming your max files is 200, you have to subtract)

sorry, forget it...

Satheesh Babu.S, October 20, 2006 - 12:03 am UTC

I got it, it is a tempfile.

SELECT tablespace_name, file_name
FROM dba_temp_files f, v$parameter p
WHERE p.name='db_files'
AND f.file_id+p.value =210;

TABLESPACE_NAME
------------------------------
FILE_NAME
------------------------------------------------------------------------------------------------------------------------------------
TEMP
/san/san02-stgdb02-r5-02/oradata/krdregdb/temp10.dbf

Regards,
Satheesh Babu.S

Finding Object

A reader, January 19, 2007 - 3:06 am UTC

Hi Tom
Is there a way to find out the object that is causing the wait, once we have the values of p1, p2 and p3 for 'db file sequential read'?
Thanks
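For reference, the usual approach - a sketch, and be warned that this full scan of dba_extents can be slow on a big database - is to map P1/P2 into dba_extents:

```sql
-- Map a db file sequential read wait to its segment: P1 is the absolute
-- file number, P2 the block number (bind the observed wait values).
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :p1
   AND :p2 BETWEEN block_id AND block_id + blocks - 1;
```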

forgot to mention release info..

MS, March 12, 2007 - 5:18 pm UTC

The database where tables reside is: 10.1.0.4.0
The database that queries these tables through a link is:
10.2.0.2.0

same query behaves differently through a link

MS, March 12, 2007 - 5:58 pm UTC

Tom,

I have a query that produces different TKPROF results based on whether it is being run directly in the database or through a link.

When it is run in the database where the tables reside, I am getting:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.36 0.36 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3969.59 3944.47 17395 35975880 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 3969.95 3944.84 17395 35975880 0 2

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 17395 0.06 4.32
latch: cache buffers chains 123 0.03 0.12
latch free 5 0.00 0.00
latch: cache buffers lru chain 1 0.00 0.00
SQL*Net message from client 2 369.94 369.94

but when run from another database (tables accessed through a link), I get:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.95 1.17 0 10 1 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.03 2.62 0 3 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.98 3.80 0 13 1 2


Why is this?
Tom Kyte
March 12, 2007 - 7:42 pm UTC

got plans?


otherwise, I'll just say this:

"my car won't start, why?"

One table can make a difference...

MS, March 13, 2007 - 6:29 pm UTC

Found it..
When I was running it from the database that accesses the tables through a link, all but one table (about 50 rows)
was still being queried locally, so we were comparing apples to oranges :)

Wait: db file Sequential read on a Insert As Select

Ken, July 03, 2007 - 6:01 pm UTC

Hi Tom, I am seeing waits on the event "db file sequential read" while inserting into a table like this:

INSERT INTO /*+ APPEND */ All_Listings NOLOGGING
SELECT * FROM db01.All_Listings ;

I expected this to do a full table scan and show "db file SCATTERED read" waits, so why am I seeing the wait event from an index scan? Reviewer Brian from St. Louis had the same question back on Dec 14, 2005, but I didn't see any further discussion on it.

Thanks again!
Tom Kyte
July 05, 2007 - 9:19 am UTC

db file scattered reads are multiblock IO's.
db file sequential reads are single block IO's

scattered reads TYPICALLY result from full scans.
sequential reads TYPICALLY result from single block IO (index reads)

many things can cause single block IOs. (You do know, by the way, that your use of NOLOGGING in that query is "not useful" - you could put the text HELLO_WORLD there and it would do the same thing! NOLOGGING is an attribute of a table, not a clause you use in an insert statement - an APPEND insert into a table that was marked NOLOGGING could be "non-logged".)
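A hedged sketch of what would actually get the direct-path, minimally-logged behaviour (reusing the table and schema names from the question):

```sql
-- NOLOGGING is set as a segment attribute on the target table...
ALTER TABLE all_listings NOLOGGING;

-- ...and the APPEND hint goes immediately after the INSERT keyword to
-- request a direct-path insert, which can then skip most redo generation.
INSERT /*+ APPEND */ INTO all_listings
SELECT * FROM db01.all_listings;
```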

here is one simple example of a full scan with scattered and sequential read waits.

facts:

table created with 72k extents

full scan once with nothing in buffer cache using a multiblock read count of 8 (64k)

full scan again with nothing in cache using a multiblock read count of 9 (72k)

full scan again after carefully priming the buffer cache with every other block.

what we'll see:

first full scan - reads 8 blocks, has one more to read from the extent. 8 blocks read using scattered read, 1 block read using sequential read.

second full scan - reads 9 blocks using scattered read. no additional io necessary

third full scan - finds that every other block is in cache, no multi-block IO from disk can be done really as only single blocks need be read...


ops$tkyte%ORA10GR2> create tablespace test
  2  datafile size 1m autoextend on next 1m
  3  extent management local uniform size 72k
  4  /

Tablespace created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
  2  tablespace test
  3  as
  4  select *
  5    from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T')

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create table t2
  2  as
  3  select rowid rid
  4    from t
  5   where mod( dbms_rowid.rowid_block_number(rowid), 2 ) = 0;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> alter system flush buffer_cache;

System altered.

ops$tkyte%ORA10GR2> alter session set db_file_multiblock_read_count = 8;

Session altered.

ops$tkyte%ORA10GR2> select count(*) from t dfmbrc8;

  COUNT(*)
----------
     49889

ops$tkyte%ORA10GR2> alter system flush buffer_cache;

System altered.

ops$tkyte%ORA10GR2> alter session set db_file_multiblock_read_count = 9;

Session altered.

ops$tkyte%ORA10GR2> select count(*) from t dfmbrc9;

  COUNT(*)
----------
     49889

ops$tkyte%ORA10GR2> alter system flush buffer_cache;

System altered.

ops$tkyte%ORA10GR2> begin
  2          for x in ( select rid from t2 )
  3          loop
  4                  for y in (select * from t where rowid = x.rid )
  5                  loop
  6                          null;
  7                  end loop;
  8          end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> alter session set db_file_multiblock_read_count = 9;

Session altered.

ops$tkyte%ORA10GR2> select count(*) from t dfmbrc9_rid;

  COUNT(*)
----------
     49889




select count(*) from t dfmbrc8

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.04       0.04        687        696          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=696 pr=687 pw=0 time=40862 us)
  49889   TABLE ACCESS FULL T (cr=696 pr=687 pw=0 time=151156 us)

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        57        0.00          0.00
  db file scattered read                         79        0.00          0.01

<b>so, nothing in cache, we did 687 physical block IO's.  79*8+57= 689 (two blocks to read segment header to find out what to read, then a bunch of multiblock and single block IOs to read)</b>

********************************************************************************
select count(*) from t dfmbrc9

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03       0.04        687        696          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=696 pr=687 pw=0 time=36889 us)
  49889   TABLE ACCESS FULL T (cr=696 pr=687 pw=0 time=200319 us)

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         2        0.00          0.00
  db file scattered read                         79        0.00          0.01

<b>we did exactly the same physical IO's this time - 687 blocks - just 79*(9 or less) reads to do it...</b> 
********************************************************************************
select count(*) from t dfmbrc9_rid

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.07       0.16        344        696          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=696 pr=344 pw=0 time=87325 us)
  49889   TABLE ACCESS FULL T (cr=696 pr=344 pw=0 time=150454 us)

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                       344        0.00          0.02

<b>but after carefully placing every other block into the cache - our full scan does 50% of the physical IO, but can only do single block IO since we have only single blocks to read - the rest come from the cache.</b>

high db_file_sequential read, using ASM

Gowrish, January 15, 2008 - 10:59 am UTC

Tom,
I find high db_file_sequential_read on my 10g Database that's causing high CPU usage at times. I find the insert statements taking more time. I found these data after running awrrpt.sql for one day
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read 16,832,437 82,204 5 72.2 User I/O
CPU time 2,375 2.1
log file sequential read 33,513 2,029 61 1.8 System I/O
db file parallel write 98,174 1,733 18 1.5 System I/O
Backup: sbtwrite2 134,448 1,621 12 1.4 Administra
-------------------------------------------------------------

This is the statement
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
58,919 7 42,176 1.4 51.7 d97vbvh4m22p2
Module: isdc_DataProc@ta01dp01 (TNS V1-V3)
INSERT INTO drsadm.ISDA_CS22 (dataKey, callTime, originatorId, finalTreatInd, fi
nalTreat, echoData, cv1, cv2, cv3, cv4, cv5, cv6, cv7, cv8, cv9, cv10,cv11, cv12
, cv13, cv14, cv15, cv16, cv17, cv18, cv19, cv20) VALUES (:dataKey,TO_DATE(:call
Time, 'MM/DD/YYYY HH24:MI:SS'),:originatorId,:finalTreatInd,:finalTreat,:echoDat
Tom Kyte
January 15, 2008 - 3:22 pm UTC

... I find high db_file_sequential_read on my 10g Database that's causing high
CPU usage at times. ...

fascinating - how did you determine that high read times made high cpu times?

Over the course of a day - hmmm.

Over the course of a day, you have 86,400 seconds. If you have a single cpu, you used 2.7% of that cpu!!! (if you have more cpus than one, you used less of course, 1.3% with 2cpus and so on)


suggestion: look at your insert statement, go down a level, not at the system... You currently have nothing here that suggests the top of your posting with IO times and the like have anything to do with the bottom half - a single (partial) insert with no information about the thing being inserted into.

high db_file_sequential read, using ASM

A reader, January 16, 2008 - 2:24 am UTC

I checked the process running on top using the topas command and found the insert statement running. Then I ran awrrpt to gather the statistics for that day.

Currently the system has two cpu's and below is the table structure into which the process is inserting.

SQL> desc drsadm.isda_cs22
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DATAKEY                                   NOT NULL RAW(52)
 CALLTIME                                  NOT NULL DATE
 ORIGINATORID                              NOT NULL VARCHAR2(11)
 FINALTREATIND                                      VARCHAR2(33)
 FINALTREAT                                         VARCHAR2(53)
 CV1                                                VARCHAR2(249)
 CV2                                                VARCHAR2(249)
 CV3                                                VARCHAR2(249)
 CV4                                                VARCHAR2(249)
 CV5                                                VARCHAR2(249)
 CV6                                                VARCHAR2(249)
 CV7                                                VARCHAR2(249)
 CV8                                                VARCHAR2(249)
 ECHODATA                                           RAW(6)
 ANSWERIND                                          VARCHAR2(9)
 CONNECTTIMEMIN                                     NUMBER
 CONNECTTIMESEC                                     NUMBER
 EXCEPTIONIND                                       VARCHAR2(12)
 BUSYIND                                            VARCHAR2(8)
 ERRORCODE                                          NUMBER
 BUSYCAUSE                                          VARCHAR2(20)
 PROBDATA                                           VARCHAR2(25)
 CV9                                                VARCHAR2(249)
 CV10                                               VARCHAR2(249)
 CV11                                               VARCHAR2(249)
 CV12                                               VARCHAR2(249)
 CV13                                               VARCHAR2(249)
 CV14                                               VARCHAR2(249)
 CV15                                               VARCHAR2(249)
 CV16                                               VARCHAR2(249)
 CV17                                               VARCHAR2(249)
 CV18                                               VARCHAR2(249)
 CV19                                               VARCHAR2(249)
 CV20                                               VARCHAR2(249)
 INSERTTIMESTAMP                                    DATE


 SQL> SELECT AVG_ROW_LEN FROM DBA_TABLES WHERE TABLE_NAME='ISDA_CS22';

AVG_ROW_LEN
-----------
        243

Tom Kyte
January 16, 2008 - 3:36 pm UTC

...n Then I ran the awrrpt to gather the statistics for
that day....

and that is about 23.75 hours too long for 'tuning' with.

so, how many indexes, triggers, etc.

if you TRACE an insert into that table and tkprof it, what does IT show.

Just trying to turn off bold...

Stew Ashton, January 17, 2008 - 5:33 am UTC


Am I the only one who sees bold on this page from 15 Jan. on?
All I am doing is opening and closing each of the formatting tags; if this works (and the problem isn't just in my browser), why not do the same thing automatically at the end of every review and followup?

Last try, I promise

Stew Ashton, January 17, 2008 - 5:37 am UTC


Just turning off the tags now.

More reasons for sequential reads in full scan?

Oren, March 05, 2008 - 4:44 pm UTC

I too have a table (a subpartition, actually) whose full scan is implemented mostly by sequential reads. What makes it really weird to me is that if I select only some (specific) columns, it does the good old scattered reads. And it is consistent (I flushed the buffer_cache between tests).
Are there "legitimate" reasons for such behavior?

Here is the trace of the "sequential" case:

=====================
PARSING IN CURSOR #1 len=98 dep=0 uid=53 oct=3 lid=53 tim=1176514122068482 hv=191899395 ad='399d9428'
SELECT count(DIM_2642) FROM SHADOW_41 subpartition (d_20080101_i_200801010000) where rownum<=10000
END OF STMT
PARSE #1:c=0,e=187,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514122068474
EXEC #1:c=0,e=133,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514122068736
WAIT #1: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514122068800
WAIT #1: nam='db file sequential read' ela= 4440 file#=263 block#=1290939 blocks=1 obj#=890808 tim=1176514122073383
WAIT #1: nam='db file scattered read' ela= 5978 file#=263 block#=1290940 blocks=5 obj#=890808 tim=1176514122079717
WAIT #1: nam='db file sequential read' ela= 2538 file#=36 block#=1220813 blocks=1 obj#=890808 tim=1176514122082416
WAIT #1: nam='db file sequential read' ela= 3032 file#=36 block#=1220880 blocks=1 obj#=890808 tim=1176514122085582
WAIT #1: nam='db file sequential read' ela= 4471 file#=136 block#=113354 blocks=1 obj#=890808 tim=1176514122090158
WAIT #1: nam='db file sequential read' ela= 4547 file#=264 block#=1261450 blocks=1 obj#=890808 tim=1176514122094917
WAIT #1: nam='db file sequential read' ela= 4520 file#=36 block#=1220928 blocks=1 obj#=890808 tim=1176514122099568
WAIT #1: nam='db file sequential read' ela= 6603 file#=36 block#=1220959 blocks=1 obj#=890808 tim=1176514122106332
WAIT #1: nam='db file sequential read' ela= 4789 file#=84 block#=354069 blocks=1 obj#=890808 tim=1176514122111330
WAIT #1: nam='db file sequential read' ela= 4367 file#=36 block#=1220829 blocks=1 obj#=890808 tim=1176514122115880
WAIT #1: nam='db file sequential read' ela= 7933 file#=263 block#=1291006 blocks=1 obj#=890808 tim=1176514122123974
WAIT #1: nam='db file sequential read' ela= 5316 file#=263 block#=1291004 blocks=1 obj#=890808 tim=1176514122129522
WAIT #1: nam='db file sequential read' ela= 6964 file#=264 block#=1261451 blocks=1 obj#=890808 tim=1176514122136680
WAIT #1: nam='db file sequential read' ela= 4938 file#=302 block#=783153 blocks=1 obj#=890808 tim=1176514122141790
WAIT #1: nam='db file sequential read' ela= 4817 file#=36 block#=1220816 blocks=1 obj#=890808 tim=1176514122147048
WAIT #1: nam='db file sequential read' ela= 5326 file#=136 block#=113190 blocks=1 obj#=890808 tim=1176514122152620
WAIT #1: nam='db file sequential read' ela= 4601 file#=36 block#=1220864 blocks=1 obj#=890808 tim=1176514122157365
WAIT #1: nam='db file sequential read' ela= 5222 file#=263 block#=1290991 blocks=1 obj#=890808 tim=1176514122162721
WAIT #1: nam='db file sequential read' ela= 4519 file#=36 block#=1220845 blocks=1 obj#=890808 tim=1176514122167377
WAIT #1: nam='db file sequential read' ela= 4885 file#=263 block#=1290987 blocks=1 obj#=890808 tim=1176514122172385
WAIT #1: nam='db file sequential read' ela= 4648 file#=264 block#=1261454 blocks=1 obj#=890808 tim=1176514122177542
WAIT #1: nam='db file sequential read' ela= 5148 file#=264 block#=1261449 blocks=1 obj#=890808 tim=1176514122182820
WAIT #1: nam='db file sequential read' ela= 4834 file#=36 block#=1220861 blocks=1 obj#=890808 tim=1176514122187801
WAIT #1: nam='db file sequential read' ela= 5608 file#=36 block#=1220912 blocks=1 obj#=890808 tim=1176514122193582
.
.
.
WAIT #1: nam='db file sequential read' ela= 2693 file#=36 block#=1221632 blocks=1 obj#=890808 tim=1176514128189026
WAIT #1: nam='db file sequential read' ela= 4717 file#=36 block#=1221666 blocks=1 obj#=890808 tim=1176514128193870
WAIT #1: nam='db file sequential read' ela= 5825 file#=36 block#=1221648 blocks=1 obj#=890808 tim=1176514128199913
FETCH #1:c=466929,e=6131299,p=1400,cr=10595,cu=0,mis=0,r=1,dep=0,og=2,tim=1176514128200160
WAIT #1: nam='SQL*Net message from client' ela= 312610 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128512953
FETCH #1:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=1176514128513087
WAIT #1: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128513144
WAIT #1: nam='SQL*Net message from client' ela= 32070 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128545260
=====================
PARSING IN CURSOR #7 len=52 dep=0 uid=53 oct=47 lid=53 tim=1176514128545530 hv=1029988163 ad='287c1d08'
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
END OF STMT
PARSE #7:c=0,e=101,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514128545522
WAIT #7: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128546033
EXEC #7:c=999,e=410,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,tim=1176514128546093
WAIT #7: nam='SQL*Net message from client' ela= 267383 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128813599
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128813874
WAIT #0: nam='SQL*Net message from client' ela= 22537 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514128836459
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=10595 pr=1400 pw=0 time=6131324 us)'
STAT #1 id=2 cnt=10000 pid=1 pos=1 obj=0 op='COUNT STOPKEY (cr=10595 pr=1400 pw=0 time=3393299 us)'
STAT #1 id=3 cnt=10000 pid=2 pos=1 obj=0 op='PARTITION COMBINED ITERATOR PARTITION: KEY KEY (cr=10595 pr=1400 pw=0 time=3363291 us)'
STAT #1 id=4 cnt=10000 pid=3 pos=1 obj=450259 op='TABLE ACCESS FULL SHADOW_41 PARTITION: 326 326 (cr=10595 pr=1400 pw=0 time=3333273 us)'


And here is the trace of the "scattered" case:

=====================
PARSING IN CURSOR #5 len=102 dep=0 uid=53 oct=3 lid=53 tim=1176514084387801 hv=2088718486 ad='3a9c7a80'
SELECT count(DATA_COMP_ID) FROM SHADOW_41 subpartition (d_20080101_i_200801010000) where rownum<=10000
END OF STMT
PARSE #5:c=0,e=151,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514084387796
EXEC #5:c=0,e=88,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514084387950
WAIT #5: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514084387987
WAIT #5: nam='db file sequential read' ela= 5551 file#=263 block#=1290939 blocks=1 obj#=890808 tim=1176514084393630
WAIT #5: nam='db file scattered read' ela= 8415 file#=263 block#=1290940 blocks=5 obj#=890808 tim=1176514084402326
WAIT #5: nam='db file scattered read' ela= 10916 file#=263 block#=1290945 blocks=8 obj#=890808 tim=1176514084413629
WAIT #5: nam='db file scattered read' ela= 11189 file#=263 block#=1290962 blocks=7 obj#=890808 tim=1176514084425316
WAIT #5: nam='db file scattered read' ela= 17832 file#=263 block#=1290969 blocks=8 obj#=890808 tim=1176514084443543
WAIT #5: nam='db file scattered read' ela= 23738 file#=263 block#=1290978 blocks=7 obj#=890808 tim=1176514084467723
WAIT #5: nam='db file scattered read' ela= 12682 file#=263 block#=1290985 blocks=8 obj#=890808 tim=1176514084480871
WAIT #5: nam='db file scattered read' ela= 12940 file#=263 block#=1291002 blocks=7 obj#=890808 tim=1176514084494391
WAIT #5: nam='db file scattered read' ela= 11347 file#=264 block#=1261449 blocks=8 obj#=890808 tim=1176514084506146
WAIT #5: nam='db file scattered read' ela= 12925 file#=264 block#=1261458 blocks=7 obj#=890808 tim=1176514084519666
WAIT #5: nam='db file scattered read' ela= 12225 file#=263 block#=1291057 blocks=8 obj#=890808 tim=1176514084532692
WAIT #5: nam='db file scattered read' ela= 23188 file#=263 block#=1291066 blocks=7 obj#=890808 tim=1176514084556490
WAIT #5: nam='db file scattered read' ela= 10120 file#=263 block#=1291073 blocks=8 obj#=890808 tim=1176514084567369
WAIT #5: nam='db file scattered read' ela= 16593 file#=263 block#=1291082 blocks=7 obj#=890808 tim=1176514084584867
WAIT #5: nam='db file scattered read' ela= 27788 file#=263 block#=1291089 blocks=8 obj#=890808 tim=1176514084613283
WAIT #5: nam='db file scattered read' ela= 10660 file#=263 block#=1291098 blocks=7 obj#=890808 tim=1176514084624620
WAIT #5: nam='db file scattered read' ela= 17000 file#=263 block#=1291105 blocks=8 obj#=890808 tim=1176514084642180
WAIT #5: nam='db file scattered read' ela= 97892 file#=264 block#=1261579 blocks=126 obj#=890808 tim=1176514084742793
WAIT #5: nam='db file scattered read' ela= 106694 file#=265 block#=1236363 blocks=126 obj#=890808 tim=1176514084855620
WAIT #5: nam='db file scattered read' ela= 122175 file#=266 block#=1190411 blocks=126 obj#=890808 tim=1176514084984528
WAIT #5: nam='db file scattered read' ela= 125239 file#=267 block#=1160459 blocks=126 obj#=890808 tim=1176514085116238
FETCH #5:c=52991,e=731536,p=623,cr=595,cu=0,mis=0,r=1,dep=0,og=2,tim=1176514085119557
WAIT #5: nam='SQL*Net message from client' ela= 206492 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085326288
FETCH #5:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=1176514085326416
WAIT #5: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085326484
WAIT #5: nam='SQL*Net message from client' ela= 29275 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085355825
=====================
PARSING IN CURSOR #7 len=52 dep=0 uid=53 oct=47 lid=53 tim=1176514085356162 hv=1029988163 ad='287c1d08'
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
END OF STMT
PARSE #7:c=0,e=116,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=1176514085356152
WAIT #7: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085356808
EXEC #7:c=1000,e=499,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,tim=1176514085356898
WAIT #7: nam='SQL*Net message from client' ela= 24336 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085381387
WAIT #0: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085381685
WAIT #0: nam='SQL*Net message from client' ela= 31981 driver id=1413697536 #bytes=1 p3=0 obj#=890808 tim=1176514085413714
STAT #5 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=595 pr=623 pw=0 time=731545 us)'
STAT #5 id=2 cnt=10000 pid=1 pos=1 obj=0 op='COUNT STOPKEY (cr=595 pr=623 pw=0 time=104435 us)'
STAT #5 id=3 cnt=10000 pid=2 pos=1 obj=0 op='PARTITION COMBINED ITERATOR PARTITION: KEY KEY (cr=595 pr=623 pw=0 time=84428 us)'
STAT #5 id=4 cnt=10000 pid=3 pos=1 obj=450259 op='TABLE ACCESS FULL SHADOW_41 PARTITION: 326 326 (cr=595 pr=623 pw=0 time=54414 us)'

Tom Kyte
March 06, 2008 - 8:03 am UTC

is there anything you can glean from the files - see how it is reading them in the first case:

4440 file#=263 block#=1290939 blocks=1 obj#=890808 tim=1176514122073383
WAIT #1: nam='db file scattered read' ela= 5978 file#=263 block#=1290940 blocks=5 obj#=890808 tim=1176514122079717
WAIT #1: nam='db file sequential read' ela= 2538 file#=36 block#=1220813 blocks=1 obj#=890808 tim=1176514122082416
WAIT #1: nam='db file sequential read' ela= 3032 file#=36 block#=1220880 blocks=1 obj#=890808 tim=1176514122085582
WAIT #1: nam='db file sequential read' ela= 4471 file#=136 block#=113354 blocks=1 obj#=890808 tim=1176514122090158
WAIT #1: nam='db file sequential read' ela= 4547 file#=264 block#=1261450 blocks=1 obj#=890808 tim=1176514122094917
WAIT #1: nam='db file sequential read' ela= 4520 file#=36 block#=1220928 blocks=1 obj#=890808 tim=1176514122099568
WAIT #1: nam='db file sequential read' ela= 6603 file#=36 block#=1220959 blocks=1 obj#=890808 tim=1176514122106332
WAIT #1: nam='db file sequential read' ela= 4789 file#=84 block#=3540

that seems a bit odd - given that you know your system, does anything pop out to you about the ordering of those files and the data therein?

Nothing pops out :-(

Oren, March 09, 2008 - 4:34 am UTC

I'm afraid I don't have any new inputs.

Identifying bitmap index

Drew, April 04, 2008 - 10:01 am UTC

Hi Tom -

I'm reading this thread because I'm trying to optimize existing code that causes a lot of db file sequential reads.

Above there was an explain plan:
Rows    Row Source Operation
------- ---------------------------------------------------
      0 MAT_VIEW ACCESS BY INDEX ROWID
      0  BITMAP CONVERSION TO ROWIDS
      0   BITMAP INDEX SINGLE VALUE IDX_table1

and you wrote:
OUCH - a bitmap index on a table you are modifying - that hurts (that should not be done....)

I see this same thing in my explain plan:

TABLE ACCESS BY INDEX ROWID <tablename>
 BITMAP CONVERSION TO ROWIDS
  BITMAP AND
   BITMAP CONVERSION FROM ROWIDS
    INDEX RANGE SCAN <indexname>
   BITMAP CONVERSION FROM ROWIDS
    SORT ORDER BY
     INDEX RANGE SCAN <indexname>

I'm almost 100% positive that this index was NOT created as a bitmapped index. What's the best way to verify?
Tom Kyte
April 04, 2008 - 12:31 pm UTC

that is not a bitmap index - that is a bitmap conversion plan on b*tree indexes. We generate the bitmap on the fly - that is totally safe
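To verify how the index was actually created, the data dictionary records the type. A sketch, assuming you own the index (use dba_indexes with an owner predicate otherwise); the index name is the one from the plan above:

```sql
-- INDEX_TYPE shows 'BITMAP' for a true bitmap index, 'NORMAL' for a b*tree
select index_name, index_type, table_name
from   user_indexes
where  index_name = 'IDX_TABLE1';
```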

very helpful

A reader, April 21, 2008 - 4:44 pm UTC

Just the general details in this post helped me diagnose a performance issue I was having.

sequential reads

A reader, January 14, 2010 - 5:22 pm UTC

Tom:

Do you see what is causing the high sequential reads here and how to reduce them? Is it the SQL, the disk, or something with Oracle 9.2.0.2 (unpatched)? Is this normal for a first run (before caching)?

pga_aggregate_target = 25165824 (is this low?)
hash_area_size = 1048576

It used an index on (bkmed,bkno), but I created another index on (bkmed,bkno,reship_flag,custcd) that it did not use.

********************************************************************************

SELECT count(*)
FROM dist a, prod@db_link b
WHERE a.bkno = b.bkno
AND a.bkmedia = b.bkmedia
AND a.reship_flag is null
AND b.pstage in ('AB','CD','XY')
AND b.end_date between (trunc(sysdate)-120) and (trunc(sysdate)+1-1)
AND a.custcd = 'CANB'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.02 0.40 0 7 1 0
Execute 2 0.00 0.58 0 0 0 0
Fetch 4 1.73 17.55 3004 15172 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 8 1.75 18.54 3004 15179 1 2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 38

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=7586 r=3004 w=0 time=16872727 us)
   1245   FILTER (cr=7586 r=3004 w=0 time=16871821 us)
   1245    TABLE ACCESS BY INDEX ROWID DIST (cr=7586 r=3004 w=0 time=16868958 us)
 165051     NESTED LOOPS (cr=3494 r=1162 w=0 time=6250159 us)
   1378      REMOTE (cr=0 r=0 w=0 time=73510 us)
 163672      INDEX RANGE SCAN DIST_BKN_IDX (cr=3494 r=1162 w=0 time=6020002 us)(object id 27438)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to dblink 6 0.00 0.00
SQL*Net message from dblink 6 0.32 0.61
SQL*Net message to client 4 0.00 0.00
SQL*Net more data from dblink 28 0.02 0.08
db file sequential read 3004 0.09 15.66
SQL*Net message from client 4 0.08 0.13
********************************************************************************

thank you
Tom Kyte
January 18, 2010 - 5:18 pm UTC

it did 1,162 reads against the index and then 3004-1162 reads against the table.


the data has to get into the cache sometime, if the index blocks needed were not there, we have to read them (and we did), same with the table blocks.

I'm always suspicious of a count(*); they are most often a waste of time - perhaps think about a way to NOT do them.


Not knowing your tables, I do see a lot of rows returned from the index that don't have a match in the table (index rows: 163,672 vs. table rows: 1,245).

you might consider an index on a(bkno,bkmedia,custcd,reship_flag) - you could probably cut out a large number of IO's by avoiding going to the table some 150,000 times that way.
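A sketch of that suggestion (table and column names taken from the query and plan above; verify the column order suits your other queries before creating anything):

```sql
-- Covering index: the range scan itself can now apply the custcd and
-- reship_flag predicates, avoiding ~162,000 trips to the table
create index dist_bkn_cover_idx
  on dist (bkno, bkmedia, custcd, reship_flag);
```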

db_file_sequential_read waits

Dan, February 17, 2010 - 11:02 pm UTC

Tom

Database: 10.2.0.4
I was tracing a job (using 10046 level 8) and noticed that the session was waiting on db file sequential read (v$session_wait.event) at the same time the session ID was appearing in v$session_longops (doing a table scan).

a) How is it possible that the session appears in both places?

b) The session was stuck on a db file sequential read wait for 20 minutes while reading a single block, per the raw trace file.

What could be the reason for it?

The tables used in the query were cached in the buffer pool.
Tom Kyte
February 18, 2010 - 8:46 am UTC

a) well, when you are doing a full scan (a long operation - it would therefore appear in v$session_longops) you typically have to do physical IO (something we wait on). Therefore, you would be shown as doing a full scan (v$session_longops) and having to wait on something (physical IO).

b) I doubt it, I think you might have been misinterpreting the data somewhere - care to share the actual observed values?


... Tables used in the query were cached in the buffer pool....

ummm, NOT if you were doing physical IO.

And the wait event db file sequential read is the wait event for indexed reads (single block IO), not full scans - so I'm really confused by what you've posted - things are not adding up.
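As a quick illustration of point a): a full scan in flight shows up in v$session_longops while the same session's current physical-IO wait shows up in v$session_wait. A sketch for watching the scan side (columns per the standard view definition):

```sql
-- Long-running full scans still in progress, with percent complete
select sid, opname, target, sofar, totalwork,
       round(100 * sofar / totalwork, 1) as pct_done
from   v$session_longops
where  opname like 'Table Scan%'
and    sofar < totalwork;
```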

Parse to execute ratio

akas, February 24, 2010 - 5:32 am UTC

Hi Tom, in our AWR report we observed "Execute to Parse %" is 2.3%.

Does this mean the SQL executed 2 times and was parsed 100 times?
If so, how can this be possible, as parsing is the first step, then execution?

It should have been the other way around, I mean 2 parses and 100 executions.
Tom Kyte
March 01, 2010 - 8:50 am UTC

round(100*(1-:prse/:exe),2)


that is the execute to parse %. When parse count = execute count, parse/execute = 1 and you get near zero.

So, it means that your parse counts are almost equal to your execute counts, you parse every single time you execute a sql statement.


Can you parse 100 times and execute 2 times? Sure, a really bad programmer could definitely do that - since the programmer can parse without executing.

But that same programmer can also parse once per execute, a practice that is only a little less bad than parsing 100 times and executing twice.
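The ratio above can be computed directly from the instance statistics. A sketch against v$sysstat, using the standard statistic names 'parse count (total)' and 'execute count':

```sql
-- Execute to Parse % = round(100*(1 - parses/executes), 2)
--   near 0   => a parse for (nearly) every execute
--   near 100 => parse once, execute many times
select round(100 * (1 - prs.value / exe.value), 2) as execute_to_parse_pct
from   v$sysstat prs,
       v$sysstat exe
where  prs.name = 'parse count (total)'
and    exe.name = 'execute count';
```

A negative value is possible too: sessions can parse statements they never execute.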




Oracle Applications Reports

Ravi, April 01, 2010 - 12:27 pm UTC

We have multiple Oracle-provided Applications Reports running as long-running requests, and many of them don't use bind variables. We have VERY HIGH db file sequential reads.

How do I go about resolving this issue, especially during month-ends and quarter-ends, when multiple reports are running at once (with no bind variables), all accessing AR tables (one report or another)? We have an adequately sized buffer cache and shared pool. Could you please let me know?

Thanks!
Tom Kyte
April 05, 2010 - 1:05 pm UTC

bind variables are not the cause of VERY HIGH DB file sequential reads. Using indexes inappropriately usually is.


If you have an adequately sized buffer cache, then the problem must be that.... the data you need isn't yet in the cache and needs to get there.


The only way to reduce physical IO waits would be to make the wait time less or reduce the number of times you wait - therefore your options include:

a) buy faster disk
b) increase buffer cache (MIGHT help, MIGHT hurt, cannot tell based on what you've told us)
c) tune queries - remove hints as a start (probably you are using indexes when you shouldn't be)

tuning queries includes reviewing indexing schemes, considering materialized views (that could be refreshed once before the end of month stuff and used by many reports - answer the question once, not once per report or per line in a report) and so on.
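A minimal sketch of that materialized view idea - every object and column name below is hypothetical; the point is simply "answer the question once, refresh before the reporting window":

```sql
-- Precompute the month-end aggregate once; many reports then read it
create materialized view mv_ar_month_end
  build immediate
  refresh complete on demand
as
select customer_id,
       trunc(gl_date, 'MM') as period,
       sum(amount)          as total_amount
from   ar_transactions               -- hypothetical AR table
group  by customer_id, trunc(gl_date, 'MM');
```

Then refresh once before month-end reporting, e.g. exec dbms_mview.refresh('MV_AR_MONTH_END'), instead of re-aggregating the same data in every report.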

Sequential reads For FTS?

Jatin, April 29, 2010 - 3:38 pm UTC

Hi Tom

I have a very long running query consuming a lot of CPU and showing waits on 'db file sequential read'. See this output from v$active_session_history:

SQL> select event, sum(wait_time)
  2  from v$active_session_history
  3  where sql_id = '4k18bbcngwgu7'
  4  group by event;

EVENT                                                            SUM(WAIT_TIME)
---------------------------------------------------------------- --------------
SQL*Net message from client                                            13717897
buffer busy waits                                                             0
db file sequential read                                                 4365918
latch: In memory undo latch                                               38176
latch: cache buffers chains                                            38694066
latch: cache buffers lru chain                                             8374
latch: redo allocation                                                       42
latch: undo global data                                                   49839
log buffer space                                                         631728
log file switch completion                                               507742
read by other session                                                       298


This is a simple delete query fired by ETL users, executed around 60,000 times (one row at a time), with no index on the concerned column of the table. I am surprised to see a sequential read wait when I have a missing index here; I was anticipating a scattered read wait.

DELETE
FROM
ALTN_HIER_ITEM
WHERE
(FIN_ITEM_I = :FIN_ITEM_I)


Thanks,
Jatin
Tom Kyte
April 29, 2010 - 3:40 pm UTC

if the table has any indexes - do you think that delete might have to read them all in order to delete :)

a delete reads EVERY index. For every row it deletes - it has to find the index entry to delete!

Don't Quite Get It

Jatin, April 30, 2010 - 2:33 pm UTC

Sorry, but I did not quite get why the full table scan in my delete is showing me waits on 'db file sequential read' (instead of scattered read). Can you please elaborate - sorry for asking you to reframe it in simple words. I read through some of the other posts but am not getting an exact answer.

Thanks
Jatin
Tom Kyte
May 06, 2010 - 10:14 am UTC

the delete needs to

a) find records to delete (full scan - db file scattered reads, consistent gets)
b) delete them (db block/current mode gets)
c) remove their index entries (index range scan, db file sequential reads)
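To see how much index maintenance each single-row delete implies, you can simply list the indexes on the table (assuming you own it - use dba_indexes with an owner predicate otherwise):

```sql
-- Every index listed here must be range-scanned and updated
-- for each row deleted from ALTN_HIER_ITEM
select index_name, index_type, uniqueness
from   user_indexes
where  table_name = 'ALTN_HIER_ITEM';
```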



Got it..

Jatin, May 05, 2010 - 10:41 am UTC

Tom - I didn't read your post carefully enough; I think you meant don't you think.. (instead of do you.. ) :)

Thanks, I have the answer now.
Tom Kyte
May 06, 2010 - 2:03 pm UTC

Nope, I mean what I wrote...

...
if the table has any indexes - do you think that delete might have to read them all in order to delete :) ...


I was asking you the rhetorical question - do you think that delete might have to read them...

hot blocks,

A reader, May 12, 2010 - 9:03 pm UTC

We are seeing inconsistent elapsed times for the same query, which runs at one-hour intervals during the entire 24-hour window. The query sometimes takes less than 200 seconds and sometimes goes up to 2,000 seconds.

I have generated a 10046 trace file for each run, and I am seeing the exact same execution plan every time. The difference I am noticing is that the time spent on db file sequential read is always high when the query takes above-normal time to run. The other wait event is direct path read/write temp.

From my observations, it is quite clear that IO latency is the reason the query runs longer.

Why would IO latency fluctuate?
How do we find out, within Oracle, what can cause high elapsed time due to index reads?
If there are any hot blocks, how do we find them? We use an ASM instance and our files are spread (evenly) across the entire diskgroup.
Can I get any information about hot blocks using dba_extents?

Thanks for your support and time.



How did you calculate...

Arvind, January 15, 2011 - 2:51 pm UTC

Hi Tom,

I was reading through this post and did not fully understand your thoughts on the section posted by "A reader" on January 14, 2010 - 5pm Central time zone.

Can you please explain a little how you calculated the "index rows - 163,672 - table rows 1,245"?

And how did you figure that with "an index on a(bkno,bkmedia,custcd,reship_flag) - you could probably cut out a large number of IO's by avoiding going to the table some 150,000 times that way"?

Thanks a lot!

Tom Kyte
January 23, 2011 - 3:05 pm UTC

look at the row source operation: it says it got 163,672 rows from the index (that number is right there next to the index range scan). For each of those 163,672 index entries it went to the table, but only found 1,245 matches (that number is also right there in the row source operation).

If they added more of the columns used in the predicate to the index, we would be able to figure out during the index range scan that only 1,245 of the rows match, and would only have to go to the table 1,245 times...

DB sequential read

Arun, February 25, 2011 - 7:24 pm UTC

Tom,

"Db file sequential read wait event indicates a single block i/o from disk not from memory"

Is there a case where a db file sequential read can still be satisfied from the filesystem cache, which is in memory? I ask because our average read time for sequential reads is < 1ms, and our admins say reads < 1ms are not possible unless the filesystem cache is being used.


Thank you
Tom Kyte
February 28, 2011 - 8:10 am UTC

Is there a case where we have a db file sequential read can still be hitting
the filesystem cache which is in memory


absolutely - search this site for:

http://asktom.oracle.com/pls/asktom/asktom.search?p_string=%22secondary+sga%22

happens all of the time.


And to Oracle it means "we are doing a single block IO from disk, not from memory". The filesystem cache looks like disk to us - we issued a system call for IO. You cannot differentiate in the database between an IO that the OS satisfied from its cache versus disk - they are just IO's to us.

Understanding db file sequential read

A reader, June 02, 2011 - 6:31 am UTC

Tom:

We have a correlated update statement that runs 150,000 times. It ran fine for all of them except one. The update looks like

update x
set col1 = (select id from y where )
where x_pk in (select id2 from y where);


The problem one had to update 80,000 rows. It ran for 3 hours to update those 80,000 rows, consuming CPU cycles.

When tracing the session, I see the wait event on the data blocks of "x".

The only change that happened to "x" is that it recently moved from a manual segment space management tablespace to an automatic segment space management tablespace.

How do we analyze that one problem ID?

Thanks for your help.

regards,




Tom Kyte
June 02, 2011 - 9:14 am UTC

what is the 'wait event on the data block of "x"'? Do you mean it was waiting on db file sequential reads?

If so, it isn't really the move from manual to automatic segment space management, it was just that the blocks it needed to update were NOT in the buffer cache and had to be read from disk.

sql_trace it and you'll see it doing physical IO.

A reader, June 02, 2011 - 9:26 am UTC

Tom:

Here is the sample:

WAIT #243: nam='db file sequential read' ela= 875 file#=86 block#=3154963 blocks=1 obj#=346483 tim=5808411792011
WAIT #243: nam='db file sequential read' ela= 167 file#=103 block#=1312530 blocks=1 obj#=346489 tim=5808412011793
WAIT #243: nam='db file sequential read' ela= 509 file#=86 block#=3154966 blocks=1 obj#=346483 tim=5808412012699
WAIT #243: nam='db file sequential read' ela= 1058 file#=86 block#=3154970 blocks=1 obj#=346483 tim=5808412234795

*** 2011-06-01 11:00:28.488
WAIT #243: nam='db file sequential read' ela= 722 file#=86 block#=3154973 blocks=1 obj#=346483 tim=5808412348333
WAIT #243: nam='db file sequential read' ela= 137 file#=86 block#=3154977 blocks=1 obj#=346483 tim=5808412787490
WAIT #243: nam='db file sequential read' ela= 710 file#=86 block#=3154980 blocks=1 obj#=346483 tim=5808412901262
WAIT #243: nam='db file sequential read' ela= 809 file#=86 block#=3154983 blocks=1 obj#=346483 tim=5808413232776
WAIT #243: nam='db file sequential read' ela= 574 file#=86 block#=3154985 blocks=1 obj#=346483 tim=5808413346236

If it is physical IO, should I not see buffer busy waits somewhere?

The top wait events associated with that hour are:

User I/O 13,639,715 0 28,061 2 66.29
DB CPU 13,028 30.78


So it took 3 hours to update 80,000 rows....

Thanks,
Tom Kyte
June 02, 2011 - 9:34 am UTC

why would you see buffer busy waits from a session that is doing physical IO?

you might see buffer busy waits if you had:

1 session doing a physical IO of a block that

N other sessions also needed.


You would see one session waiting on IO and N other sessions waiting on buffer busy wait: read by other session.

Since you were presumably the only one interested in these blocks - you had to wait.


You chopped off the meaning of those columns :( sigh... and you don't say whether these were system-wide numbers or just for your session.

Assuming that represents 13.5 million IO's by your session -

ops$tkyte%ORA11GR2> select 5/1000*13639715/60/60 from dual;

5/1000*13639715/60/60
---------------------
           18.9440486


you are lucky it didn't take 19 hours. 5ms per random read, times 13.5 million random reads, would take about 19 hours.

A reader, June 02, 2011 - 10:06 am UTC

Tom:

That's the system-wide statistic for 1 hour from the AWR report. The clip I pasted is from the trace of the session. The wait is on db file sequential read. 149,999 IDs updated without problem, and some of them updated 500K rows, also without problem. This problem ID ran by itself with minimal load on the system. My suspicion is: could it be migrated rows?

Thanks,
Tom Kyte
June 02, 2011 - 10:54 am UTC

please put column headings on things - those metrics are reported in many places.


It is *physical IO*. Run tkprof on that trace file; you'll see that that particular query hit what was apparently an empty buffer cache - devoid of the particular blocks it needed to update. You'll see lots of waits on db file sequential read - which is *physical IO* - which means the blocks it wanted were not in the cache. Post that bit of the tkprof report - so we can see the number of times and the amount of wait time attributed to that particular wait event - and we'll all see that it was *physical IO*.

A reader, June 02, 2011 - 10:15 am UTC

Here is the session stats for the 3 hrs:

Execution Statistics




Total Per Execution Per Row
Executions 87 1 0.00
CPU Time (sec) 966.53 11.11 0.01
Buffer Gets 15,205,188 174,772.28 194.00
Disk Reads 3,660 42.07 0.05
Direct Writes 0 0.00 0.00
Rows 78,378 900.90 1
Fetches 0 0.00 0.00

Hour 2:

Execution Statistics




Total Per Execution Per Row
Executions 0 NA NA
CPU Time (sec) 3,561.29 NA NA
Buffer Gets 41,344,898 NA NA
Disk Reads 12,062 NA NA
Direct Writes 0 NA NA
Rows 0 NA NA
Fetches 0 NA NA



Hour 3:

Execution Statistics




Total Per Execution Per Row
Executions 0 NA NA
CPU Time (sec) 3,520.54 NA NA
Buffer Gets 42,092,638 NA NA
Disk Reads 8,485 NA NA
Direct Writes 0 NA NA
Rows 0 NA NA
Fetches 0 NA NA




Activity By Time


Each hour shows the wait time as negative:

Elapsed Time (sec) 3,472.08
CPU Time (sec) 3,520.54
Wait Time (sec) -48.46





Tom Kyte
June 02, 2011 - 11:01 am UTC

and completely unreadable it is....

not that I even know what it represents nor how it was collected.


just run the tkprof on the trace file you already have please.

A reader, June 02, 2011 - 11:15 am UTC

Tom:

The trace was started after the update statement began. When I run tkprof, the report comes out empty - devoid of the update statement.

Hour 1:

Execution Statistics of the update statement

"Total" "Per Execution" "Per Row "
Executions 87 1 0.00
CPU Time (sec) 966.53 11.11 0.01
Buffer Gets 15,205,188 174,772.28 194.00
Disk Reads 3,660 42.07 0.05
Direct Writes 0 0.00 0.00
Rows 78,378 900.90 1
Fetches 0 0.00 0.00


Hour 2:

Execution Statistics




"Total" "Per Execution" "Per Row "
Executions 0 NA NA
CPU Time (sec) 3,561.29 NA NA
Buffer Gets 41,344,898 NA NA
Disk Reads 12,062 NA NA
Direct Writes 0 NA NA
Rows 0 NA NA
Fetches 0 NA NA








Hour 3:

Execution Statistics




"Total" "Per Execution" "Per Row "
Executions 0 NA NA
CPU Time (sec) 3,520.54 NA NA
Buffer Gets 42,092,638 NA NA
Disk Reads 8,485 NA NA
Direct Writes 0 NA NA
Rows 0 NA NA
Fetches 0 NA NA





Tom Kyte
June 02, 2011 - 11:29 am UTC

migrated rows would AT MOST increase the buffer gets by a factor of 2 - but in reality not anywhere near two.

update x
set col1 = (select id from y where )
where x_pk in (select id2 from y where);


it would affect the set col1 = bit, adding 1 IO to it per row updated.


The select id2 from y where... would do as much work.
the update x where x_pk in ... would do as much work.
the select id from y where.... would do as much work.

So, it is doubtful - extremely doubtful - that migrated rows have much to do with this - unless this is triggering the mass migration of all of the rows.

Your buffer gets look very high. Are you sure that this was run in isolation? (The excessive buffer gets could be reads to the undo segment to roll back some other transaction's outstanding work.)

Do you have access to ASH - get an ASH report for this session

A reader, June 02, 2011 - 12:07 pm UTC

Thanks for your time, Tom. How do I upload the ASH report without losing the formatting?

Thanks,

A reader, June 02, 2011 - 12:25 pm UTC

Top SQL with Top Row Sources


SQL ID PlanHash S "Sampled # of Executions" "% Activity" "Row Source " " %RwSrc Top Event " "%Event" SQL Text
8sk49cnwqmvmg 1598910661 7 99.89 TABLE ACCESS - BY INDEX ROWID 54.69 CPU + Wait for CPU 54.69 UPDATE SE_ S...
INDEX - RANGE SCAN 33.64 CPU + Wait for CPU 33.64
HASH - UNIQUE 10.64 CPU + Wait for CPU 10.64









"Slot Time" "(Duration)" "Slot Count" "Event" "Event Count" "% Event "
09:50:15 (9.8 min) 57 CPU + Wait for CPU 57 6.52
10:00:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
10:15:00 (15.0 min) 90 CPU + Wait for CPU 90 10.30
10:30:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
10:45:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:00:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:15:00 (15.0 min) 89 CPU + Wait for CPU 88 10.07
db file sequential read 1 0.11
11:30:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:45:00 (15.0 min) 89 CPU + Wait for CPU 87 9.95
db file sequential read 2 0.23
12:00:00 (15.0 min) 89 CPU + Wait for CPU 88 10.07
db file sequential read 1 0.11
12:15:00 (15.0 min) 15 CPU + Wait for CPU 14 1.60
db file sequential read 1 0.11

A reader, June 02, 2011 - 12:32 pm UTC

Tom I tried various formatting options:

"Sample" "Time" "Data Source"
Analysis Begin Time: 01-Jun-11 09:50:15 DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 24246
Analysis End Time: 01-Jun-11 12:30:15 DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 24249
Elapsed Time: 160.0 (mins)
Sample Count: 874
Average Active Sessions: 0.91
Avg. Active Session per CPU: 0.03
Report Target: SESSION_ID = 788 10.6% of total database activity




Top SQL with Top Row Sources


SQL ID        PlanHash     "Sampled # of Executions"  "% Activity"  "Row Source"  "%RwSrc"  "Top Event"  "%Event"  SQL Text
8sk49cnwqmvmg 1598910661 7 99.89 TABLE ACCESS - BY INDEX ROWID 54.69 CPU + Wait for CPU 54.69 UPDATE SE_ S...
INDEX - RANGE SCAN 33.64 CPU + Wait for CPU 33.64
HASH - UNIQUE 10.64 CPU + Wait for CPU 10.64

"Slot Time" "(Duration)" "Slot Count" "Event" "Event Count" "% Event "
09:50:15 (9.8 min) 57 CPU + Wait for CPU 57 6.52
10:00:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
10:15:00 (15.0 min) 90 CPU + Wait for CPU 90 10.30
10:30:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
10:45:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:00:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:15:00 (15.0 min) 89 CPU + Wait for CPU 88 10.07
db file sequential read 1 0.11
11:30:00 (15.0 min) 89 CPU + Wait for CPU 89 10.18
11:45:00 (15.0 min) 89 CPU + Wait for CPU 87 9.95
db file sequential read 2 0.23
12:00:00 (15.0 min) 89 CPU + Wait for CPU 88 10.07
db file sequential read 1 0.11
12:15:00 (15.0 min) 15 CPU + Wait for CPU 14 1.60
db file sequential read 1 0.11

Tom Kyte
June 02, 2011 - 1:16 pm UTC

Your buffer gets look very high. Are you sure that this was run in isolation? (The excessive buffer gets could be reads of the undo segment to roll back some other transaction's outstanding work.)

Does the database CPU for this timeframe more or less equal the CPU used by this statement in that timeframe? That would be one way to verify that you were running in isolation.


You can use the SQL*Plus text reports for ASH information.


In $ORACLE_HOME/rdbms/admin, ashrpt.sql and ashrpti.sql generate an ASH report. You can filter by:
Session
SQL ID
Wait Class
Database Service
Module/Action/Client_ID
PL/SQL entry point
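A minimal run of the text report from SQL*Plus might look like the following sketch (the script prompts interactively; the answers suggested in the comments are only examples):

```
$ sqlplus / as sysdba
SQL> @?/rdbms/admin/ashrpt.sql
-- answer the prompts: report type (text or html), a begin time
-- such as -30 (meaning "30 minutes ago"), a duration in minutes,
-- and a report file name
```

The report it writes is the same "Top SQL", "Top Events", "Activity Over Time" style of output shown in the postings above.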

A reader, June 02, 2011 - 1:30 pm UTC

Tom:

This is a production database, but I can say this was run when the system was pretty idle; no major rollbacks or transactions were happening during this time.

Thanks,

A reader, June 02, 2011 - 2:27 pm UTC

Tom not sure how it is going to look with formatting:

CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
32 9,971M (100%) 5,952M (59.7%) 290M (2.9%) 30.4M (0.3%)


Analysis Begin Time: 01-Jun-11 10:15:00
Analysis End Time: 01-Jun-11 13:15:00
Elapsed Time: 180.0 (mins)
Begin Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 24247
End Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 24250
Sample Count: 9,176
Average Active Sessions: 8.50
Avg. Active Session per CPU: 0.27
Report Target: None specified

Top User Events DB/Inst: PIPROD (Jun 01 10:15 to 13:15)

Avg Active
Event Event Class % Event Sessions
----------------------------------- --------------- ---------- ----------
direct path read User I/O 31.46 2.67
CPU + Wait for CPU CPU 28.71 2.44
db file sequential read User I/O 26.38 2.24
db file scattered read User I/O 1.93 0.16
direct path read temp User I/O 1.68 0.14
-------------------------------------------------------------


Top Background Events DB/Inst: PIPROD/piprod (Jun 01 10:15 to 13:15)

Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 2.05 0.17
log file parallel write System I/O 1.23 0.10
-------------------------------------------------------------




Top Event P1/P2/P3 Values DB/Inst: PIPROD/piprod (Jun 01 10:15 to 13:15)

Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
direct path read 31.46 "45","1933312","128" 0.07
file number first dba block cnt

db file sequential read 26.42 "1","4268","1" 0.01
file# block# blocks

db file scattered read 1.94 "21","1120649","128" 0.02
file# block# blocks

direct path read temp 1.68 "1003","4879","1" 0.01
file number first dba block cnt

log file sync 1.56 "252","1043327782","0" 0.01
buffer# NOT DEFINED NOT DEFINED

-------------------------------------------------------------




Top Service/Module DB/Inst: PIPROD/piprod (Jun 01 10:15 to 13:15)

Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS JDBC Thin Client 26.51 UNNAMED 26.51
PIprod pmdtm@prodhostpm3 (TNS V1-V 21.83 UNNAMED 21.83
pmdtm@prodhostm1 (TNS V1-V 15.42 UNNAMED 15.42
pmdtm@prodhostpm4 (TNS V1-V 12.61 UNNAMED 12.61
pmdtm@prodhostpm2 (TNS V1-V 12.06 UNNAMED 12.06
-------------------------------------------------------------




SQL Command Type SQLIDs % Activity Sessions
---------------------------------------- ---------- ---------- ----------
SELECT 193 75.08 6.38
INSERT 30 9.70 0.82
UPDATE 15 8.79 0.75
-------------------------------------------------------------

Top SQL with Top Events DB/Inst: PIPROD/piprod (Jun 01 10:15 to 13:15)

Sampled #
SQL ID Planhash of Executions % Activity
----------------------- -------------------- -------------------- --------------
Event % Event Top Row Source % RwSrc
------------------------------ ------- --------------------------------- -------
8sk49cnwqmvmg 1598910661 7 7.92
CPU + Wait for CPU 7.88 TABLE ACCESS - BY INDEX ROWID 4.20
UPDATE X FE SET FE.COL1 = (SELECT UNIQUE (J.COL2) FROM PROCESS_TABLE J WHERE J.COL1 = :B5 AND PFT
4gws2vkfc27hu 2923163058 24 7.75
db file sequential read 6.60 TABLE ACCESS - BY INDEX ROWID 6.00
SELECT /*+ ordered full(
CPU + Wait for CPU 1.12 PX SEND - QC (RANDOM) 0.19

2hdyn5jwz1r4c 856479528 184 6.09
CPU + Wait for CPU 2.89 HASH JOIN 1.61
SELECT strat.colid

dyx7g7ucj5fs5 3410003418 4 4.80
direct path read 4.60 TABLE ACCESS - FULL 4.60
SELECT STATEMENT

4n93ktu7pscft 549543722 18 3.52
db file sequential read 2.67 TABLE ACCESS - BY INDEX ROWID 2.03
SELECT STATEMENT

-------------------------------------------------------------





Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
788,54667 8.11 CPU + Wait for CPU 8.01
PI_APP JDBC Thin Client 735/1,080 [ 68%] 3 --------------------> this is the update statement

1083,23881 1.55 direct path read 1.50
PI_APP pmdtm@prodhostpm4 (TNS V1-V3) 138/1,080 [ 13%] 0

207,21158 1.51 db file sequential read 1.35
PI_APP oracle@prodhost01 (P000) 124/1,080 [ 11%] 0

223,31743 1.51 db file sequential read 1.31
PI_APP oracle@prodhost (P004) 120/1,080 [ 11%] 0

527,22148 1.51 db file sequential read 1.31
PI_APP oracle@prodhost (P001) 120/1,080 [ 11%] 0

-------------------------------------------------------------






> With respect to Application, Cluster, User I/O and buffer busy waits only.

Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
346483 8.76 db file sequential read 6.20 --------------------------------> Table x has a foreign key constraint on table strat. The update is happening to the referential foreign key, hence it has to do the look up. (The one below)
PI_PROD.X (TABLE) PI_D06

direct path read 2.55


140666 7.64 db file sequential read 3.90
PI_PROD.STRAT (TABLE) PI_D01

direct path read 3.49


118273 4.60 direct path read 4.60
PI_PROD.STRAS (TABLE) PI_D01

3375 3.44 direct path read 3.44
PI_PROD.LPTTRAN (TABLE) PI_D01

346489 2.90 db file sequential read 2.90
PI_PROD.X_IX3 (INDEX) PI_X07

-------------------------------------------------------------







Activity Over Time DB/Inst: PIPROD/piprod (Jun 01 10:15 to 13:15)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period

Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
10:15:00 (5.0 min) 312 direct path read 119 1.30
db file sequential read 83 0.90
CPU + Wait for CPU 76 0.83
10:20:00 (20.0 min) 1,300 direct path read 581 6.33
CPU + Wait for CPU 398 4.34
db file sequential read 205 2.23
10:40:00 (20.0 min) 1,084 CPU + Wait for CPU 377 4.11
direct path read 342 3.73
db file sequential read 229 2.50
11:00:00 (20.0 min) 751 direct path read 327 3.56
CPU + Wait for CPU 247 2.69
db file sequential read 105 1.14
11:20:00 (20.0 min) 746 CPU + Wait for CPU 248 2.70
direct path read 210 2.29
db file sequential read 209 2.28
11:40:00 (20.0 min) 1,054 direct path read 434 4.73
CPU + Wait for CPU 337 3.67
db file sequential read 128 1.39
12:00:00 (20.0 min) 669 direct path read 223 2.43
CPU + Wait for CPU 212 2.31
db file sequential read 167 1.82
12:20:00 (20.0 min) 958 direct path read 364 3.97
db file sequential read 266 2.90
CPU + Wait for CPU 193 2.10
12:40:00 (20.0 min) 931 db file sequential read 349 3.80
CPU + Wait for CPU 290 3.16
direct path read 164 1.79
13:00:00 (15.0 min) 1,371 db file sequential read 683 7.44
CPU + Wait for CPU 444 4.84
direct path read 123 1.34
-------------------------------------------------------------

A reader, June 02, 2011 - 5:22 pm UTC

Tom:

I have figured out the problem. The execution plan was wrong; it was using the wrong index. It is fixed now.

Thanks for all your time.

Your reasoning with the problem helped me out to dig deeper.

Kind regards,

Anirban, June 29, 2011 - 2:58 pm UTC

All the answers are very useful when I consistently get "db file sequential read", but I am surprised: if I run the same query from SQL*Plus, I do not get a slow response; it is slow only from the application, waiting on "db file sequential read". It gives exactly the same explain plan in both cases. I am using 9.2.0.7 on HP Itanium.

Tom Kyte
June 29, 2011 - 3:14 pm UTC

that just means that whatever inputs you used in sqlplus happen to hit data that resides in the cache already. Mix up the inputs - change them - and I'll bet you see sqlplus doing physical IO as well.

sqlplus is just a client application - no different than an application you could code yourself, it is not special.
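One way to check that, assuming the statement is still in the shared pool (the LIKE pattern below is just a placeholder for your SQL text), is to compare the child cursors for the application run and the SQL*Plus run:

```
select sql_id, child_number, executions,
       buffer_gets, disk_reads, elapsed_time
  from v$sql
 where sql_text like 'SELECT ... your statement ...%';
```

If the application's cursor shows high disk_reads per execution while the SQL*Plus cursor shows mostly buffer_gets, the difference is the cache, not the plan.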

Anirban, June 29, 2011 - 3:52 pm UTC

Thanks, Tom, for the very prompt reply. After seeing it, I used different values in the predicate while executing through SQL*Plus (not exactly the values used through the application), and got the same response time, i.e. very fast. After that I used the same values in the application and the result remained the same, i.e. slow, with the same "db file sequential read" wait. If you need anything from me, I am ready to provide it; and correct me if I missed anything.

Once again, thanks a ton for replying to my issue. I am grateful.

Regards,
Anirban
Tom Kyte
June 29, 2011 - 5:35 pm UTC

trace it.

show us the sql and the tkprof from the application
show us the sql and the tkprof from your sqlplus application
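A sketch of one way to capture that, assuming you can run the statement in a dedicated session (the trace file name below is illustrative; yours will differ):

```
SQL> exec dbms_monitor.session_trace_enable( waits => true, binds => true );
SQL> -- run the statement here
SQL> exec dbms_monitor.session_trace_disable;

$ tkprof orcl_ora_1234.trc report.txt sys=no
```

Do it once from the application session and once from SQL*Plus, then compare the two tkprof reports side by side.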

Anirban, June 29, 2011 - 4:17 pm UTC

The problem was solved when I analyzed the table with a 100 percent sample. Thanks anyway.

Very expensive fetch for zero rows

Abel, September 12, 2011 - 9:15 am UTC

Hi, Tom,
I get a rather paradoxical report from tkprof: 2.5 minutes for a fetch of 0 rows!
(It is a bulk collect fetch in PL/SQL.)
There are also a lot of db file sequential read waits.

Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.02 0.01 0 0 0 0
Fetch 1 13.76 155.80 151507 434539 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 13.78 155.81 151507 434539 0 0


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 5 0.00 0.00
db file sequential read 97975 0.29 82.28

Any hint?
Thanks in advance!

Regards
Tom Kyte
September 13, 2011 - 6:12 am UTC

well, something seems missing here

like the query
the row source operation report (showing where the time was spent in the execution plan)

some of the wait events (151,507 total IO's yet you only waited 97,975 times?)


But this is easy - the number of rows returned doesn't really have any direct correlation to the run time of a query. It contributes to it, but does not control it.


Say the query was:

select * from t where x = ? and y = ?;

and there was an index on X we decided to use.

But that index found thousands of rows (so we go from index to table over and over) and each of those rows was on a different block most of the time (so we do tons of physical IO). And when we got to that row - we discovered that y=? was never true.

A lot of work to find no rows. Possible solution: add y to the index.


But I cannot solve or answer your specific case given the information provided.

Very expensive fetch for zero rows (2)

Abel, September 13, 2011 - 11:48 am UTC

Sorry, Tom, here is the missing data (believe it or not, the RULE optimizer goal produced faster results than the CBO):

SELECT /*+RULE*/ UN.COD_OS, UN.COD_INTER, UN.TIP_INTER,UN.COD_CLIENTE,
UN.COD_PLAN, UN.FEC_INGRESO, UN.FEC_DESDEAPLI, UN.NUM_PERIODOS
FROM
( SELECT /*+RULE*/B.COD_OS, B.COD_INTER, B.TIP_INTER,A.COD_CLIENTE,
A.COD_PLAN, A.FEC_INGRESO, A.FEC_DESDEAPLI, A.NUM_PERIODOS FROM
BPT_BENEFICIARIOS A, GED_CODIGOS C, GA_ABOAMIST D, CI_ORSERV B WHERE
B.TIP_INTER = 1 AND A.COD_ESTADO IN ( 'V','N') AND B.COD_INTER =
D.NUM_ABONADO AND A.COD_CLIENTE = D.COD_CLIENTE AND A.NUM_ABONADO =
D.NUM_ABONADO AND B.FECHA > A.FEC_INGRESO AND B.FECHA > SYSDATE -2 AND
C.NOM_TABLA = :B3 AND C.NOM_COLUMNA = :B2 AND C.COD_MODULO = :B1 AND
C.COD_VALOR = B.COD_OS AND NOT EXISTS (SELECT /*+RULE*/D.COD_PLAN FROM
BPD_PLANES D WHERE D.COD_PLAN = A.COD_PLAN AND D.FEC_DESDEAPLI =
A.FEC_DESDEAPLI AND D.COD_ESTADO = 'V' AND D.COD_TIPLAN = 1) UNION SELECT
/*+RULE*/B.COD_OS, B.COD_INTER, B.TIP_INTER,A.COD_CLIENTE, A.COD_PLAN,
A.FEC_INGRESO, A.FEC_DESDEAPLI, A.NUM_PERIODOS FROM BPT_BENEFICIARIOS A,
GED_CODIGOS C, GA_ABOCEL D, CI_ORSERV B WHERE B.TIP_INTER = 1 AND
A.COD_ESTADO IN ( 'V','N') AND B.COD_INTER = D.NUM_ABONADO AND
A.COD_CLIENTE = D.COD_CLIENTE AND A.NUM_ABONADO = D.NUM_ABONADO AND B.FECHA
> A.FEC_INGRESO AND B.FECHA > SYSDATE - 2 AND C.NOM_TABLA = :B3 AND
C.NOM_COLUMNA = :B2 AND C.COD_MODULO = :B1 AND C.COD_VALOR = B.COD_OS AND
NOT EXISTS (SELECT /*+RULE*/D.COD_PLAN FROM BPD_PLANES D WHERE D.COD_PLAN =
A.COD_PLAN AND D.FEC_DESDEAPLI = A.FEC_DESDEAPLI AND D.COD_ESTADO = 'V' AND
D.COD_TIPLAN = 1) )UN GROUP BY UN.COD_OS, UN.COD_INTER, UN.TIP_INTER,
UN.COD_CLIENTE, UN.COD_PLAN, UN.FEC_INGRESO, UN.FEC_DESDEAPLI,
UN.NUM_PERIODOS


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.02 0.01 0 0 0 0
Fetch 1 13.76 155.80 151507 434539 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 13.78 155.81 151507 434539 0 0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: RULE
Parsing user id: 40 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 SORT GROUP BY (cr=434539 pr=151507 pw=0 time=155803856 us)
0 0 0 VIEW (cr=434539 pr=151507 pw=0 time=155803839 us)
0 0 0 SORT UNIQUE (cr=434539 pr=151507 pw=0 time=155803834 us)
0 0 0 UNION-ALL (cr=434539 pr=151507 pw=0 time=155803803 us)
0 0 0 FILTER (cr=217319 pr=148311 pw=0 time=138194233 us)
0 0 0 NESTED LOOPS (cr=217319 pr=148311 pw=0 time=138194228 us)
0 0 0 NESTED LOOPS (cr=217319 pr=148311 pw=0 time=138194227 us)
0 0 0 NESTED LOOPS (cr=217319 pr=148311 pw=0 time=138194222 us)
0 0 0 NESTED LOOPS (cr=217319 pr=148311 pw=0 time=138194220 us)
5 5 5 INDEX RANGE SCAN PK_GED_CODIGOS (cr=2 pr=0 pw=0 time=48 us)(object id 31271)
0 0 0 TABLE ACCESS BY INDEX ROWID CI_ORSERV (cr=217317 pr=148311 pw=0 time=138194154 us)
500749 500749 500749 INDEX RANGE SCAN AK_CI_ORSERV (cr=2549 pr=2381 pw=0 time=11691015 us)(object id 31149)
0 0 0 TABLE ACCESS BY INDEX ROWID GA_ABOAMIST (cr=0 pr=0 pw=0 time=0 us)
0 0 0 INDEX UNIQUE SCAN PK_GA_ABOAMIST (cr=0 pr=0 pw=0 time=0 us)(object id 34955)
0 0 0 INDEX RANGE SCAN PK_BPT_BENEFICIARIOS (cr=0 pr=0 pw=0 time=0 us)(object id 35444)
0 0 0 TABLE ACCESS BY INDEX ROWID BPT_BENEFICIARIOS (cr=0 pr=0 pw=0 time=0 us)
0 0 0 TABLE ACCESS BY INDEX ROWID BPD_PLANES (cr=0 pr=0 pw=0 time=0 us)
0 0 0 INDEX UNIQUE SCAN PK_BPD_PLANES (cr=0 pr=0 pw=0 time=0 us)(object id 35523)
0 0 0 FILTER (cr=217220 pr=3196 pw=0 time=17609526 us)
0 0 0 NESTED LOOPS (cr=217220 pr=3196 pw=0 time=17609521 us)
0 0 0 NESTED LOOPS (cr=217220 pr=3196 pw=0 time=17609512 us)
0 0 0 NESTED LOOPS (cr=217220 pr=3196 pw=0 time=17609506 us)
0 0 0 NESTED LOOPS (cr=217220 pr=3196 pw=0 time=17609500 us)
5 5 5 INDEX RANGE SCAN PK_GED_CODIGOS (cr=2 pr=1 pw=0 time=1556 us)(object id 31271)
0 0 0 TABLE ACCESS BY INDEX ROWID CI_ORSERV (cr=217218 pr=3195 pw=0 time=17607919 us)
500749 500749 500749 INDEX RANGE SCAN AK_CI_ORSERV (cr=2450 pr=75 pw=0 time=983146 us)(object id 31149)
0 0 0 TABLE ACCESS BY INDEX ROWID GA_ABOCEL (cr=0 pr=0 pw=0 time=0 us)
0 0 0 INDEX UNIQUE SCAN PK_GA_ABOCEL (cr=0 pr=0 pw=0 time=0 us)(object id 30795)
0 0 0 INDEX RANGE SCAN PK_BPT_BENEFICIARIOS (cr=0 pr=0 pw=0 time=0 us)(object id 35444)
0 0 0 TABLE ACCESS BY INDEX ROWID BPT_BENEFICIARIOS (cr=0 pr=0 pw=0 time=0 us)
0 0 0 TABLE ACCESS BY INDEX ROWID BPD_PLANES (cr=0 pr=0 pw=0 time=0 us)
0 0 0 INDEX UNIQUE SCAN PK_BPD_PLANES (cr=0 pr=0 pw=0 time=0 us)(object id 35523)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 5 0.00 0.00
db file sequential read 97975 0.29 82.28


Thanks and best regards!

Tom Kyte
September 14, 2011 - 3:53 pm UTC

it looks like it is *exactly* what I described above.

see you have this twice:

         0          0          0           TABLE ACCESS BY INDEX ROWID CI_ORSERV (cr=217218 pr=3195 pw=0 time=17607919 us)
    500749     500749     500749            INDEX RANGE SCAN AK_CI_ORSERV (cr=2450 pr=75 pw=0 time=983146 us)(object id 31149)



so, twice it found 500,749 rows in the index AK_CI_ORSERV and had to go to the table CI_ORSERV 500,749 times (performing many tens of thousands of physical IO's in the process) to find out that...

none of the rows actually matched the predicate.


So, it sounds like your index AK_CI_ORSERV is missing a column at the end - the column (or columns) we are having to get from the table to finish the where clause. Adding those to the index would undoubtedly make this query go pretty darn fast.

consider:

ops$tkyte%ORA11GR2> create table t ( x int, y int, z varchar2(1000) default rpad( 'x', 1000, 'x' ) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t  (x,y)
  2  select rownum, -rownum
  3    from dual
  4  connect by level <= 100000
  5  /

100000 rows created.

ops$tkyte%ORA11GR2> create index t_idx on t(x);

Index created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_monitor.session_trace_enable

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select /*+ index( t t_idx ) */ * from t where x > 0 and y = 0;

no rows selected


tkprof will show us:

select /*+ index( t t_idx ) */ *
from
 t where x > 0 and y = 0


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.52       4.66      12234      14509          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.53       4.66      12234      14511          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 273
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  TABLE ACCESS BY INDEX ROWID T (cr=14509 pr=12234 pw=0 time=4663930 us cost=14528 size=24816 card=47)
    100000     100000     100000   INDEX RANGE SCAN T_IDX (cr=223 pr=222 pw=0 time=73865 us cost=237 size=0 card=94559)(object id 97079)



sort of like what you are seeing, now, if we:

ops$tkyte%ORA11GR2> drop index t_idx;

Index dropped.

ops$tkyte%ORA11GR2> create index t_idx on t(x,y);

Index created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_monitor.session_trace_enable

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select /*+ index( t t_idx ) */ * from t where x > 0 and y = 0;

no rows selected


the tkprof is slightly different:

select /*+ index( t t_idx ) */ *
from
 t where x > 0 and y = 0


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.01       0.02        304        305          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.02        304        307          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 273
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  TABLE ACCESS BY INDEX ROWID T (cr=305 pr=304 pw=0 time=20730 us cost=379 size=24816 card=47)
         0          0          0   INDEX RANGE SCAN T_IDX (cr=305 pr=304 pw=0 time=20724 us cost=321 size=0 card=378)(object id 97080)

KSV Master Wait

Rajeshwaran, Jeyabal, January 19, 2012 - 11:03 pm UTC

Tom:

We are running our Application & When we query gv$session we see connection from application are getting "KSV Master Wait". We are on 11GR2 (11.2.0.2)

1) Can you help us what does this "KSV Master Wait" means?
2) How can we resolve this?
Tom Kyte
January 20, 2012 - 9:42 am UTC

it is typically related to a product issue - please contact support for this one. It should not happen to a degree that you even notice it; that it does suggests a possible bug.

big difference in IO latency,

A reader, May 17, 2012 - 3:38 pm UTC

Hello,

On an HP-UX 11.23 box with a single-instance Oracle 11.1.0.7 database, an INSERT INTO ... SELECT of about 50K records was run against a table with an index. A similar test was done on a different RAC box on HP-UX 11.31. The result was quite astonishing.


OLD db file sequential read 12146 0.05 22.28 -> 1.8ms per read
NEW db file sequential read 12026 0.61 672.27 -> 55ms per read

The new environment (the RAC box) is around 30 times slower than the old box from an index-read point of view.

How can we proceed to find the real cause of such slowness? I looked at the raw trace file and some of the sequential reads took up to 300 ms per call.

Thanks,




Tom Kyte
May 18, 2012 - 2:39 am UTC

stop the database

dd datafiles to /dev/null

rule out the database as the cause - if dd is "fast" on one and "slow" on the other...

Or use the orion tool http://www.oracle.com/technetwork/topics/index-089595.html to exercise the disk (maybe better than dd in this case as it'll do single block IO's like index reads would - dd is more like a full scan thing)
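A runnable sketch of the dd test. On a real system you would point dd at the actual datafiles with the database shut down; here a scratch file stands in so the commands work anywhere, but the idea is identical: compare the raw throughput line on both boxes.

```shell
# Create a 64 MB scratch file standing in for a datafile, then read it
# back through dd and print the throughput summary line. A big gap
# between the old and new box implicates the IO stack, not Oracle.
SCRATCH=$(mktemp)
dd if=/dev/zero of="$SCRATCH" bs=1M count=64 2>/dev/null
dd if="$SCRATCH" of=/dev/null bs=1M 2>&1 | tail -1
rm -f "$SCRATCH"
```

Remember dd is sequential, multiblock-style IO; for single-block, index-read-like patterns, orion is the better fit, as noted above.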

follow up,

A reader, May 18, 2012 - 10:53 am UTC

Thank you. Will try to use ORION.

One other point I missed telling you yesterday: the insert into the table (the same 50K rows) without the index was much, much faster in the NEW environment (49 seconds against 2 min on OLD). However, after adding the index, the time was 14 min on NEW and 7 min on OLD.

This looks like a puzzle to me. Underneath, it is still blocks, whether it is a table or an index. Could the process of creating leaf blocks have caused the slowness (on the new box)?

Tom Kyte
May 21, 2012 - 8:04 am UTC

need to see a tkprof at the very very very least. with waits.

follow up,

A reader, May 22, 2012 - 3:23 pm UTC

Hello,

I have the tkprof output of both the places (old and new).

Output of tkprof for INSERT statement from OLD:


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 12146 0.05 22.28
free buffer waits 14375 1.00 141.95
latch: object queue header operation 26 0.00 0.00
latch: cache buffers lru chain 4 0.00 0.00
latch free 2 0.00 0.00
latch: redo allocation 54 0.00 0.01
log file switch completion 5 0.07 0.13
********************************************************************************



Output of tkprof for the INSERT statement from NEW:

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 10.28 10.28
db file sequential read 12026 0.61 672.27
log file switch (private strand flush incomplete) 1 0.16 0.16
buffer busy waits 1 0.00 0.00
log buffer space 2 0.08 0.09
********************************************************************************



Tom Kyte
May 22, 2012 - 3:28 pm UTC

old db file sequential read                     12146        0.05         22.28
new db file sequential read                     12026        0.61        672.27


I'm going to blame misconfigured IO system here. Look at that. It pretty much jumps out and hits you square in the face.


take the database out of the equation - just run some IO tests against it - anything from "dd" (will tell you about large sequential multiblock IOs) to orion (full test tool - single and multiblock IO's)

http://www.oracle.com/technetwork/topics/index-089595.html

they did about the same amount of IO, just the waits are more than an order of magnitude larger on new, leads to "there is something wrong with the IO setup"
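The per-read figures being compared fall straight out of the tkprof wait summary: total seconds waited divided by the number of waits. A quick sketch of the arithmetic, using the numbers from the two tkprof outputs above:

```shell
# average wait per "db file sequential read" = total waited / times waited
awk 'BEGIN {
  printf "old: %.2f ms per read\n", 22.28  / 12146 * 1000   # ~1.8 ms
  printf "new: %.2f ms per read\n", 672.27 / 12026 * 1000   # ~56 ms
}'
```

Low single-digit milliseconds is healthy for a single-block read off spinning disk; tens of milliseconds per read points squarely at the storage layer.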

follow up (again),

A reader, May 22, 2012 - 3:32 pm UTC

Hello,

The tkprof output I posted earlier is from when the table had an index.

The tkprof below is from the insert statement when there was no index.

OLD:


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
free buffer waits 3793 0.05 31.46
latch: redo allocation 5 0.00 0.00
latch: object queue header operation 10 0.00 0.00
latch: cache buffers lru chain 3 0.00 0.00
latch: undo global data 1 0.00 0.00
********************************************************************************

SQL ID: f8pavn1bvsj7t Plan Hash: 3924106966

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 37 0.43 7.11
********************************************************************************

SQL ID: f8pavn1bvsj7t Plan Hash: 3924106966

Thanks,


NEW:


Tom Kyte
May 22, 2012 - 8:04 pm UTC

it seems there is a serious ongoing issue with your IO system on new.

37 IO's for 7.11 seconds, about .2 seconds/IO

.005 would be OK, your old system was getting about 0.008.

your new system is put together wrong.
