Difference between "db file scattered read" and "db file sequential read"


Tom Kyte

Thanks for the question, Ravi.

Asked: October 25, 2000 - 8:30 pm UTC

Last updated: June 01, 2013 - 2:33 pm UTC

Version: 816

Viewed 50K+ times!

You Asked

Dear Tom,

What exactly is the difference between "db file scattered read" and "db file sequential read" in v$sysstat? How can we use these for tuning?

Are they associated with sort / full table scan / index scan operations?

Your comment on this will be really useful.

Regards
Ravi

and Tom said...

A db file sequential read is an event that shows a wait for a foreground process while doing a sequential read from the database.

This is an o/s operation, most commonly used for single block reads. Single block reads are most commonly seen for index block access or table block access by a rowid (e.g.: to access a table block after an index entry has been seen).

This can also be a multiblock read. In that case it will usually be a read from a SORT (TEMPORARY) segment, as multiblock reads for table scans (and index fast full scans) usually show up as waits on "db file scattered read".

A db file scattered read is the same type of event as "db file sequential read", except that Oracle will read multiple data blocks. Multi-block reads are typically used on full table scans. The name "scattered read" may seem misleading, but it refers to the fact that multiple blocks are read into DB block buffers that are 'scattered' throughout memory.
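
To see how much of each your system has done since startup, here is a quick sketch against the v$system_event view (time_waited is reported in centiseconds):

select event, total_waits, time_waited
  from v$system_event
 where event in ('db file sequential read', 'db file scattered read');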






Comments

Wondering about multiblock and buffers

MDz, September 20, 2001 - 4:51 pm UTC

Excellent clarification for the confusing names! I was just wondering, when doing a full table scan, how will blocks that are buffered (and potentially changed) influence the multi-block read operation? Particularly, say that multi-block read count is 16, blocks 1 through 16 should be read, but block 10 is buffered - will it perhaps cause two multi-block reads, or several single-block reads? Will that be reflected in 'db file scattered read' or 'db file sequential read'?

How can the sequential read be reduced?

Rich, October 09, 2001 - 3:15 pm UTC


ANTON

Anton, November 30, 2002 - 11:13 am UTC

How can we reduce this contention?

Tom Kyte
November 30, 2002 - 4:49 pm UTC

contention? I see no contention here -- what do you mean?

If you mean "how can we reduce reads" -- you do that by tuning your physical structures (storing stuff so it answers your questions efficiently) or tuning your SQL or not being afraid of full scans -- in short, by reducing IO.
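
For example, a sketch of finding the top IO-consuming statements via v$sql (disk_reads is cumulative since the statement was loaded into the shared pool):

select *
  from ( select sql_text, disk_reads, buffer_gets, executions
           from v$sql
          order by disk_reads desc )
 where rownum <= 10;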

A reader, May 24, 2003 - 7:08 pm UTC

Hi Tom

WAIT #4: nam='db file sequential read' ela= 5 p1=1 p2=53903 p3=1

Does the above wait mean that the total time taken to read one index block (block 53903) was 0.05 seconds?

Thanks




Tom Kyte
May 25, 2003 - 9:42 am UTC

depends on the release. In 8i and before, it'll look like this:

WAIT #4: nam='db file sequential read' ela= 2 p1=1 p2=26612 p3=1

and ela is in 1/100th of a second. In 9i and up, it'll look more like this:

WAIT #1: nam='db file sequential read' ela= 12917 p1=1 p2=26866 p3=1

and ela is in 1/1,000,000ths of a second (so that was 0.012917 seconds of wait time)
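
(By the same arithmetic, the ela= 5 in the 8i-style trace line quoted in the question would be 5/100 = 0.05 seconds, whereas in a 9i trace the same value would mean just 5 microseconds.)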


Scattered and sequential reads are too high

Moiz, November 18, 2004 - 1:24 am UTC

Hi Tom,
For the last 3 to 4 days, my db file scattered reads and sequential reads have been increasing very badly. The details:

17-nov-04 12:00 noon

db file scattered read          153,062,018  160,793  0  .000
db file sequential read         130,254,279  254,781  0  .000

3:30

db file scattered read          167,398,996  176,098  0  .000
db file sequential read         141,039,410  272,064  0  .000
control file sequential read      5,187,982      368  0  .000

4:30 evening

db file scattered read          175,021,038  184,158  0  .000
db file sequential read         146,340,087  281,062  0  .000
control file sequential read      5,709,852      388  0  .000

5:30 evening

db file scattered read          177,861,135  186,851  0  .000
db file sequential read         148,746,206  285,050  0  .000
control file sequential read      5,710,625      389  0  .000



18-nov-04 10:30

C1 C2 C3 C4 C5
---------------------------------------------------------------- ---------- ---------- ---------- ----------
db file scattered read 216541738 221577.72 0 0
db file sequential read 179125149 349555.1 0 0
control file sequential read 6287545 437.76 0 0



Tom Kyte
November 18, 2004 - 10:26 am UTC

look at your sql, sql is what does IO, look for sql doing the most IO.

also, find out what changed (and don't even say "nothing")

v$session_event

Nadeesh Condon, May 31, 2005 - 2:00 am UTC

Tom,

How can we find out the username related to a sid in v$session_event when there exists no such sid in v$session?

Tom Kyte
May 31, 2005 - 7:44 am UTC

that should not happen, is it happening ?

v$session_event

Nadeesh Condon, June 01, 2005 - 11:54 pm UTC

Tom,

Yes, it is happening. Below is the output for the above query:

bash-2.03$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.1.0 - Production on Thu Jun 2 09:20:37 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> select sid from v$session_event;

       SID
----------
         4
         8
         9
         4
         1
         2
         7
         6
         5
         3
         9

       SID
----------
         8
         2
         9
         8
         8
         9
         1
         9
         8
         2
         3

       SID
----------
         2
         8
         3
         9
         4
         2
         9
         8
         2
         2
         2

       SID
----------
         7
         7
         4
         5
         6
         7
         4
         7
         1
         1
         2

       SID
----------
         3
         2
         3
         4
         4
        11
        11
         7

52 rows selected.

SQL> select sid from v$session;

       SID
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10
        12

11 rows selected.

SQL> 

Tom Kyte
June 02, 2005 - 4:28 pm UTC

looks like sessions coming and going -- doesn't it.

so, someone logged in, you caught them with one query, and by the time you ran the next they weren't there anymore.
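
While a session is still connected, you can pick up its username by joining the two views -- a sketch (once the session logs off, its rows simply disappear from both):

select se.sid, s.username, se.event, se.total_waits
  from v$session_event se, v$session s
 where s.sid = se.sid;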

by reducing IO

A reader, June 20, 2005 - 1:00 am UTC

"by reducing IO"

Tom

Please give us an illustration as to how we can reduce IO in a given situation. This question is vague, but the answer we are expecting is the one which you would give to beginners in Oracle.

Tom Kyte
June 20, 2005 - 9:54 am UTC

you can reduce IO in lots of different ways.

My favorite way -- look at the SQL, find the high load SQL, tune it. If it requires less resources and one of the resources it was using was IO -- you will reduce that.

Look at your cache size, is it in fact sized right for you.
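
For the cache size question, one sketch (assuming db_cache_advice is enabled, as it is by default with statistics_level=typical in 9i and up) is to look at v$db_cache_advice, which estimates physical reads at various cache sizes:

select size_for_estimate, estd_physical_read_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;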



Not useful

Stephanie, December 07, 2006 - 3:07 pm UTC

Although the descriptions were useful, the information on how to fix the problem was not. Look at i/o, look at cache... for a newbie, this is not enough information.

Why db file sequential read is index block read?

tom.lai, October 26, 2010 - 12:47 am UTC

Hi tom,

Why is the event 'db file sequential read' described as an index block read? How does Oracle read an index block?

Thanks
Tom Kyte
October 26, 2010 - 7:52 pm UTC

it is not an index block read. It might be, but it does not have to be.

It is a read of a SINGLE BLOCK.

that block could be:

a block from an index
a block from a table
a block from the rollback segments
a block from <anywhere>

it is A BLOCK, a single block IO.
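
If you want to know which segment a given single block read touched, the p1/p2 values on the wait (file# and block#) can be mapped to a segment -- a sketch (note: this query can be slow on databases with many extents):

select owner, segment_name, segment_type
  from dba_extents
 where file_id = &file_no
   and &block_no between block_id and block_id + blocks - 1;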


Why need db file sequential read wait event?

A reader, October 26, 2010 - 8:25 pm UTC

Hi Tom,

Thanks,

Why does Oracle need single block reads? A contiguous multi-block read should be better than a single block read.


Tom Kyte
October 26, 2010 - 8:30 pm UTC

see above...

db file sequential read

A reader, February 06, 2011 - 5:43 am UTC

Hi Tom,

Can you please explain what you mean by a single block I/O read? Is it the same block being accessed for every single I/O read even if my DB_FILE_MULTIBLOCK_READ_COUNT is not 1? If yes, then how is that possible?
Note: It will be helpful if you can show us with an example.
Tom Kyte
February 06, 2011 - 12:30 pm UTC

there are single block reads - physical ios - that is a time we need a SPECIFIC single block from disk. These usually happen as a result of using an index. We get a rowid from an index and realize we need block 42 from file 55 - it isn't in the cache yet, so we do a single block IO to read it in.


there are multi-block reads - physical ios - that is a time we need to read all of the blocks of a segment and instead of reading block 1, then block 2, then block 3 and so on - we read a set of blocks - 1..n - at a time (controlled by db_file_multiblock_read_count).


I did not understand this bit:

... Is it the same block being accessed for every single I/O read even if my DB_FILE_MULTIBLOCK_READ_COUNT is not 1? ...

db file scattered read

A reader, February 09, 2011 - 8:39 am UTC

Hi Tom,

In a document today I read regarding action for this event db file scattered read:
"A DBA may have increased the value of the DB_FILE_MULTIBLOCK_READ_COUNT INIT.ORA parameter. The optimizer is more likely to choose a full table scan over an index access if the value of this parameter is high."

Can you please help me understand why we should be doing this? Why should a full table scan be preferred over an index access?
Tom Kyte
February 10, 2011 - 4:44 pm UTC

the db file multiblock read count (dbfmbrc) controls how many blocks the optimizer will assume we'll be reading at a time. The more blocks we read in a single IO call - the more efficient it would be.

suppose you have a table with 1000 blocks in it.

Assume there are 100,000 rows in the table (100 rows/block).
Assume you are going to read 500 rows out of that table.
Assume there is an index that could be used.
Assume the data patterns are such that these 500 rows are on 500 different database blocks - they are not "clumped" together - they are spread out all over the table.
Assume the dbfmbrc is 64

Now, which would be more efficient:

a) do 500 single block IO calls to read 500 blocks that are spread all over the place.

b) perform 16 big IO's of 64 blocks each to read the data.


In this case, the optimizer would almost certainly go with (b).



Now assume we start lowering the number of records we are going to get from 500 slowly down to 10. At some point - the cost of (a) will eventually become cheaper than (b) - because as we start getting fewer rows - we hit less single blocks.

Now, that point is determined by the cost of the full scan - and the cost of the full scan is computed in part by the setting of dbfmbrc. The higher the value for dbfmbrc - the lower the cost of the full scan. The lower the value - the higher the cost.


Now, in 10g and above you should NOT set the dbfmbrc at all. In that release - we set the value automatically based on what we observe your system doing. Meaning - if we see that on your system - we average "12" blocks in a multi-block read - we'll cost the queries using the value of 12 for dbfmbrc, but when we actually go to do the read - we max it out to the maximum read size on your system in the hopes we can get more read in. This is the best of all worlds - your query gets costed using the actual observed multi block read count (better plans) but if we can take advantage of a larger IO size - we will.


oh, and this:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154
is why full scans are not evil and indexes are not all goodness. Indexes can be really bad sometimes too.
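
On a release where you do set it manually, here is a sketch of watching the full scan cost move with the parameter (big_table is just a stand-in for any large table, as in the example further down this page):

alter session set db_file_multiblock_read_count = 8;
explain plan for select * from big_table;
select * from table(dbms_xplan.display);

alter session set db_file_multiblock_read_count = 64;
explain plan for select * from big_table;
select * from table(dbms_xplan.display);

The full scan cost reported in the second plan should come out noticeably lower than in the first.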

db file scattered read

A reader, February 11, 2011 - 3:40 am UTC

Hi Tom,

I want to ask you something:

Is the cause of db file scattered read is few no of rows (say 10 rows) scattered to different blocks and optimizer is still going for FTS rather than index scan since db_file_multiblock_read_count parameter value is high? Is Cost of using high db_file_multiblock_read_count the main problem then?
Why are 16 I/Os needed for fetching 500 rows when our db_file_multiblock_read_count=64?
Tom Kyte
February 14, 2011 - 7:14 am UTC

The cause of a db file scattered read is simply that "we are doing a multi-block IO".

The first sentence you have there "is the cause..." is one that I'm not able to fully parse and understand.


I'm not sure what problem you are alluding to. If you have to do typical physical IOs, you will wait for either db file SCATTERED (multi-block) or SEQUENTIAL (single-block) IO.



16 IO's are needed for the hypothetical 500 rows above because we assumed there are 1000 blocks and if we read 64 blocks in an IO call - 1000/64 = 15.625 - so we need to perform at least 16 IO calls.

set db_file_multiblock_read_count to default

A reader, March 16, 2011 - 5:05 am UTC

We have one database "Oracle Database 10g Enterprise Edition Release 10.1.0.4.0" where the DB_FILE_MULTIBLOCK_READ_COUNT value is now 8 and is not at its default. How can we set it back to the default?
Tom Kyte
March 16, 2011 - 8:38 am UTC

take it out of your init.ora (remove the line) or if you are using an spfile -

alter system reset db_file_multiblock_read_count scope=spfile sid='*';

A reader, March 16, 2011 - 9:18 am UTC

So is setting DB_FILE_MULTIBLOCK_READ_COUNT to the default also recommended for release 10.1? On some websites it is recommended only for release 10.2.
Tom Kyte
March 16, 2011 - 10:41 am UTC

they asked how to set it to the "default", that would be accomplished by removing it.

In 10gr2 a new algorithm kicked in:
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams047.htm#REFRN10037

that makes setting of this parameter something you should NOT be doing anymore.

In releases prior to that - it was up to you.

A reader, June 12, 2011 - 4:29 am UTC

Hi Tom,
From the above I understood that,

Whether it is db file sequential or scattered read depends on how much total IO is needed to read whole required read.
Or are they fixed by Oracle?

Am I correct or not?

Please correct me if I am wrong.

Tom Kyte
June 17, 2011 - 10:08 am UTC

Whether it is db file sequential or scattered read depends on how much total IO
is needed to read whole required read.


that is not accurate, sort of.

you will get a db file sequential read when we need to read a single block. This is the wait event we register when doing reads from objects such as an index - which during a range/unique scan we have to read block by block by block.

you will get a db file scattered read when we read more than a single block in a single IO - and we have to "scatter" the blocks in the buffer cache after we do that read. You will see this during full scans of objects.



The reason your statement is not accurate (to me) is that an index range scan - which does a series of db file sequential reads - might read 100mb of data, whereas a full scan might read 1mb of data in total.


It is proper to think of db file sequential read as a single block IO and db file scattered read as a multiblock IO.
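
To catch sessions on these events as they happen, here is a sketch against v$session_wait (for these two events, p1 is the file# and p2 the block#):

select sid, event, p1, p2, p3, wait_time
  from v$session_wait
 where event in ('db file sequential read', 'db file scattered read');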

Can high db file sequential reads in the top 5 events be caused by a bad index?

sh, December 03, 2011 - 11:45 am UTC

Hi Tom,

Would like to clarify a few things in regard to db file sequential reads.

1) Will an index range scan contribute to db file sequential reads or db file scattered reads?
2) Can db file sequential reads be caused by a bad index with huge leaf blocks? If yes, can rebuilding the index help to reduce the number of single block reads?
thanks

regards
sh
Tom Kyte
December 06, 2011 - 11:13 am UTC

1) db file sequential read is a single block IO; most typically these come from index range scans and table access by index rowid.

So yes, an index is a major contributor to this statistic


2) typically NOT as the highest number of db file sequential reads typically come from the table.

think about what happens if you get 500 rows from a table via an index, we'll read a very small number of blocks typically from the index - and then we'll have to read 500 table blocks. most of the single block reads will be against the table - not the index in general.


FULL TABLE SCAN vs DB FILE SEQUENTIAL READ

Carl Bruneau, May 23, 2013 - 1:44 pm UTC

Hello,

To continue on the interesting subject of DB FILE SEQUENTIAL vs DB FILE SCATTERED READ, I would like to know why I am seeing so many db file sequential reads when I am updating an entire table. I know that it is more efficient to create a new table with the required data (with create table x as select, in order to bypass the undo generation), create the indexes, constraints, etc., drop the old table and rename the new table with the old table's name. Yet, I would still like to understand the reason for all these db file sequential reads since I would expect to see only DB FILE SCATTERED READs in this particular case.

Server info :
Oracle 11.2.0.3 on Windows 2008 r2
Block size : 8KB

Table info :
17,000,000 rows
450,000 blocks
212 avg row len

Operation : UPDATE t SET y = 1;

Resulting number of db file sequential reads: 150,000. All these reads are on the UNDO tablespace! There are no uncommitted transactions on the table. No one else is connected to the database, only me doing this UPDATE.

Why does this operation result in so many DB FILE SEQUENTIAL READs? What is the number of DB FILE SEQUENTIAL READs proportional to?

Best regards.

Carl
Tom Kyte
May 23, 2013 - 3:17 pm UTC

we need undo blocks in the buffer cache (to fill them) and they were not there. So, we have to read them in, modify them, add your undo to them and presumably write them out at some future date.

it'll be proportional to the number of undo blocks in the circle of undo that are not in the buffer cache.

if you have a really large undo segment and a small SGA, the old part of the undo ring will be pushed to disk and when we get back around to that - we'll have to read it back in.

db file scattered read in Index Range scan

Rajeshwaran, Jeyabal, May 23, 2013 - 6:03 pm UTC

A db file sequential read is an event that shows a wait for a foreground process while doing a sequential read from the database.

This is an o/s operation, most commonly used for single block reads. Single block reads are most commonly seen for index block access or table block access by a rowid (e.g.: to access a table block after an index entry has been seen).


Tom:
Why do we see "db file scattered read" for an Index Range scan operation then?

select owner,object_id from big_table
where owner = user
and object_id > 5000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       48      0.03       0.24         18         80          0        7000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       50      0.03       0.25         18         80          0        7000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61  

Rows     Row Source Operation
-------  ---------------------------------------------------
   7000  INDEX RANGE SCAN BIG_IDX_03 (cr=80 pr=18 pw=0 time=33 us)(object id 55922)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      48        0.00          0.00
  SQL*Net message from client                    48        0.00          0.02
  db file scattered read                          3        0.01          0.01
  db file sequential read                         2        0.01          0.01

Tom Kyte
May 23, 2013 - 6:29 pm UTC

you will notice that I always said "most commonly", "typically", "in general" and the like.

In general -

db file scattered read => full table scan, index fast full scan
db file sequential read => index scan, table access by index rowid

those are the most common causes. But not the only. A full scan can do db file sequential reads and an index scan can use db file scattered reads.


We'll do the multi-block reads in current releases in some cases to perform a read ahead operation.

here is an interesting OTN thread that discusses and demonstrates this:

https://forums.oracle.com/forums/thread.jspa?threadID=941864


DB FILE SEQUENTIAL READ vs DB FILE SCATTERED READ

Carl, May 24, 2013 - 4:21 pm UTC

Hi Tom,

Related to your answer (btw, thanks for this answer):

"if you have a really large undo segment and a small SGA, the old part of the undo ring will be pushed to disk and when we get back around to that - we'll have to read it back in."

We did this test:

1st test memory target 8G buffer cache 3.7G:

SQL> show sga

Total System Global Area 8885620736 bytes
Fixed Size                  2262448 bytes
Variable Size            5066721872 bytes
Database Buffers         3758096384 bytes
Redo Buffers               58540032 bytes

Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
db file sequential read                    143347        0.42        178.87
db file scattered read                      75373        1.06        243.12


2nd test memory target 1G buffer cache 134M:

SQL> show sga

Total System Global Area 8885620736 bytes
Fixed Size                  2262448 bytes
Variable Size            8690600528 bytes
Database Buffers          134217728 bytes
Redo Buffers               58540032 bytes

Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------ 
db file sequential read                    145603        0.15        195.62
db file scattered read                     106520        0.59        378.55

Maybe I didn't understand exactly what you meant, because our system does not confirm what I think you mean: with a very small SGA (buffer cache size at 134 MB) we do approx. the same number of DB FILE SEQUENTIAL READs as we did with a much larger SGA (buffer cache size at 3.7GB).

Can you please clarify what is the number of DB FILE SEQUENTIAL READs proportional to?

Thanks a lot for your help on this case.

Best regards.

Carl

Tom Kyte
May 29, 2013 - 4:59 pm UTC

but that doesn't tell us if the undo was read in or not. How big are your undo segments?


if you are curious, you can enable sql trace with waits (dbms_monitor.session_trace_enable( waits=> true )) and review the trace data to see exactly what file/block you are doing the single block IO's on. It could be your data file (extent sizing would affect that, if you have extents that are not nice multiples of your multi block IO size). It could be an affected index. It could be undo.

it will be very illuminating to discover which one it is.
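
A sketch of tracing another session that way (the sid and serial# values are placeholders you would look up in v$session first):

exec dbms_monitor.session_trace_enable( session_id => 123, serial_num => 456, waits => true );
-- run the UPDATE in that session, then:
exec dbms_monitor.session_trace_disable( session_id => 123, serial_num => 456 );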

db file sequential read vs db file scattered read

Carl, May 30, 2013 - 7:57 pm UTC

Hi Tom,

The updated table has no indexes. The table size is 3.3GB.

Almost all of the single block reads were done on the UNDO tablespace:

- 1st test memory target 8G buffer cache 3.7G:

Only 40 out of the 143347 db file sequential reads were on the tablespace containing the updated
table, 143307 were done in the UNDO.

- 2nd test memory target 1G buffer cache 134M:

Only 51 out of the 145603 db file sequential reads were on the tablespace containing the updated
table, 145552 were done in the UNDO.

Our UNDO tablespace (automatic UNDO management) has two datafiles: one is currently 4GB and the other is currently 2GB.

Here an example of what we have seen in the trace files:

WAIT #486568376: nam='db file sequential read' ela= 384 file#=8 block#=78834 blocks=1 obj#=0 tim=3970354243771
WAIT #486568376: nam='db file sequential read' ela= 341 file#=8 block#=78835 blocks=1 obj#=0 tim=3970354244792

File #8 is one of the undo's datafiles.
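
(For anyone following along, a sketch of how to confirm which tablespace a file# from the trace belongs to:

select file_id, tablespace_name, file_name
  from dba_data_files
 where file_id = 8;
)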

Best regards.

Carl
Tom Kyte
May 30, 2013 - 8:16 pm UTC

it just means it was reading those blocks out of the undo segment on disk. They could have been flushed by DBWR (to keep the cache clean). They might never have been in the cache. They might not fit into the cache (your undo tablespace is 6gb after all, your table is 3.3gb - that is much larger than 3.7gb).

this all looks normal. Your undo segments do not fit into cache along with all of the data being modified. Your table would be sufficient to fill up the entire buffer cache almost by itself.

think about it - if you update every row in a table, it is 100% sure that every single block will have to be in the buffer cache. If every single block must be in the cache at some time - and if that fills your buffer cache - your undo just isn't going to be in there (since the undo will be pretty large too).


It is what I guessed the first time around. Your needed undo blocks, needed to be modified, are not in the cache when we need them. The size of your table pretty much assures that.



You should consider a create table as select - drop old table - rename new table. This operation can bypass redo and will absolutely bypass undo generation.
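
A sketch of that approach, using the hypothetical t / y names from the UPDATE above (every column of t must be listed in the select, and indexes, constraints and grants have to be recreated by hand):

create table t_new
as
select x, 1 as y    -- x stands in for all the other columns of t
  from t;

drop table t;
rename t_new to t;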

DB FILE SEQUENTIAL READ vs DB FILE SCATTERED READ

A reader, June 01, 2013 - 12:34 am UTC

Hi Tom,

after having raised the size of the buffer cache to 10GB, doing the same UPDATE operation led to these waits:

Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
Disk file operations I/O                        3        0.00          0.00
db file sequential read                         3        0.01          0.04
db file scattered read                      53726        0.88        289.10

Where have these DB FILE SEQUENTIAL READs on the UNDO gone?

I assume you will tell me it is normal behavior... :(

Best regards.

Carl
Tom Kyte
June 01, 2013 - 2:33 pm UTC

I'm not sure what is surprising here?


You have 3+ gb of UNDO at least (you are modifying 3+ gb of data...).

You have 3+ gb of data


when you had a buffer cache of 3+ gb - you saw lots of reads against the undo tablespace - given the fact that the 3gb table AND the 3 gb of undo could not possibly fit into cache - it should be obvious why it happened that way. There wasn't enough room to have 3+ gb of UNDO and 3+ gb of table data in cache.


Now you have 10gb of cache, and you are presumably the only thing in town. Hence, 3+ gb of UNDO plus 3+ gb of DATA <= 10gb of cache and it all fits, so we are not forced to push the undo to disk.


Yes, I will say this is normal, this is normal behavior of cached data when it can be cached.


I'm not really sure where the misunderstanding is here?
