satheesh nagaraj, December 10, 2014 - 5:57 pm UTC
Thanks Tom for the excellent info.
"put it into the cache, access and process it from the cache - read another bit and do the same - and flush out some of the earlier bits"
Could you please elaborate more on this statement?
After processing the first set of data blocks, does Oracle send the results to the user and then process the remaining blocks in the cache as they are retrieved from disk? Or is the required data stored somewhere internally and sent to the application/user only after the entire set is processed?
December 11, 2014 - 7:28 am UTC
if you have access to Expert Oracle Database Architecture, I go into great detail on this.
basically - instead of caching all of the blocks that we can from a large table full table scan - we only let a small portion of the table reside in the buffer cache. In that fashion, a large table full table scan won't have the side effect of "flushing the buffer cache" - removing all of the other tables' cached information. We know the large table would consume a lot of the cache - so we do not let it.
the blocks are stored in the cache - they are accessed by any user process that wants them - but they are marked to age out fast since they came in from a full scan.
conventional path read using the buffer cache - but only using a small portion of it
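A toy model may help picture the aging. The sketch below is a hypothetical Python LRU cache (not Oracle's actual touch-count algorithm): full-scan blocks enter at the cold end of the list, so they are the first to be evicted and the other cached blocks survive the scan.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU cache: full-scan blocks enter at the cold end so they
    age out first, instead of flushing the rest of the cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # cold end = first, hot end = last

    def read(self, block, full_scan=False):
        if block in self.buffers:
            self.buffers.move_to_end(block)      # promote on re-use
            return "cache hit"
        if len(self.buffers) >= self.capacity:
            self.buffers.popitem(last=False)     # evict coldest buffer
        self.buffers[block] = True
        if full_scan:
            # mark to age out fast: park at the cold end
            self.buffers.move_to_end(block, last=False)
        return "disk read"

cache = BufferCache(capacity=4)
for b in ("idx1", "idx2", "tab1"):   # small, frequently used blocks
    cache.read(b)
for b in range(100):                 # large table full table scan
    cache.read(f"big{b}", full_scan=True)

# the frequently used blocks survive the 100-block scan
print("idx1" in cache.buffers, "tab1" in cache.buffers)  # True True
```

Each full-scan block evicts the previous full-scan block, so the scan only ever occupies one slot of the toy cache - the "small portion" idea in miniature.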
Rajeshwaran Jeyabal, December 17, 2014 - 6:17 am UTC
Tom,
We were able to understand direct path reads and to create a testcase for them:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3931724600346905308
Can you help us understand:
#1) In which scenario, or for which type of queries, will Oracle use "conventional path read using the buffer cache - but only using a small portion of it"? (Is that by doing single-block IOs?)
#2) Which wait events in Tkprof/AWR denote that a table was read using "conventional path read using the buffer cache - but only using a small portion of it"? (Is that db file sequential read?)
December 17, 2014 - 7:03 pm UTC
#1) that happens for all large table full table scans.
https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:255215154182 unless you alter that behavior with alter table t cache
#2) db file scattered read is typically what you will see.
db file sequential read waits are single block IOs. db file scattered read waits are the multi-block reads we use in full scans.
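The difference in IO call counts is easy to see with a small sketch. This hypothetical Python snippet (not Oracle code) just counts how many read calls it takes to pull a table into the cache one block at a time versus in multi-block chunks, the way db file sequential read and db file scattered read waits differ.

```python
def read_table(total_blocks, multiblock_read_count=1):
    """Count the IO calls needed to read `total_blocks` blocks.
    multiblock_read_count=1 models single-block IO (db file sequential
    read); larger values model multi-block IO (db file scattered read)."""
    io_calls = 0
    block = 0
    while block < total_blocks:
        chunk = min(multiblock_read_count, total_blocks - block)
        block += chunk
        io_calls += 1
    return io_calls

print(read_table(1000, 1))    # 1000 single-block IO calls
print(read_table(1000, 128))  # 8 multi-block IO calls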
A reader, December 19, 2014 - 2:53 pm UTC
I was interviewed on sequential reads and scattered reads. Can you please explain
db file sequential read and db file scattered read?
December 22, 2014 - 7:42 pm UTC
db file sequential read is a wait event caused by single block IOs. The name comes from the fact that it most often happens by reading a single block from an index sequentially followed by a single block from a table (then index then table then index then table and so on - in sequence)
db file scattered read is a wait event caused by multi-block IO, where we read a set of blocks at a time (typically up to 1MB), not singly. The name comes from the fact that we read a bunch of blocks and then scatter them in the buffer cache by hashing their dba (data block address - file.block).
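The "scatter" can be illustrated with a made-up hash function (Oracle's internal hash is not this): eight adjacent blocks arriving in one multi-block read land in non-adjacent cache buckets.

```python
def dba_hash(file_no, block_no, n_buckets):
    """Hash a data block address (file#, block#) to a cache bucket.
    Illustration only - Oracle's internal hash function differs."""
    return hash((file_no, block_no)) % n_buckets

# one multi-block read of 8 adjacent blocks from file 4...
buckets = [dba_hash(4, b, n_buckets=1024) for b in range(100, 108)]
# ...ends up scattered across the cache buckets
print(buckets)
```

Even though the blocks are contiguous on disk, their cache positions are driven by the hash of each dba, hence "scattered" read.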
A reader, February 26, 2015 - 4:13 pm UTC
Thanks. I am back in DBA work again after doing development work for a short period of time.
Where is data stored after its deletion from the buffer cache?
Tom, March 04, 2015 - 10:06 am UTC
Hi Tom,
I'd like to ask the same question as the first reviewer. I understand that the table would be copied to (and removed from) the buffer cache in small chunks, but what we would like to know is what happens to that processed data before it's sent back to the user.
Is it stored somewhere else, or is it returned to the user by chunks too?
Thanks
forms9i
A reader, March 12, 2015 - 3:49 pm UTC
Is forms9i with patchsets compatible with an Oracle 11g database? Appreciate your help.
A reader, April 08, 2015 - 2:35 pm UTC
Can you please reply to my question above? Thanks.
Db blocks in db buffer
Sagar, January 16, 2024 - 11:10 am UTC
When db blocks are brought into the buffer cache in small chunks from a large table, how is the data delivered to the user? In chunks, or only as fully processed data?
January 16, 2024 - 3:12 pm UTC
I'm not sure what you're asking here - please clarify.
After processing in chunks, where does the dirty block go?
nsiva, March 11, 2025 - 8:56 am UTC
After processing in chunks, where does the dirty block go when the buffer cache is not large enough to grow?
March 20, 2025 - 2:02 am UTC
Not sure what you mean, but a dirty buffer is not too dissimilar from any other buffer in the buffer cache, i.e., it has to "earn" its right to stay there.
For a clean buffer, if it gets too "old", then it becomes a candidate for being "thrown out" as new buffers come in.
For a dirty buffer, if it gets too "old", it too becomes a candidate for being "thrown out" as new buffers come in, the difference being that "thrown out" also means we will write the changes from that dirty buffer back to the data files.
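A minimal sketch of that difference, with a hypothetical write-back step (the real database writer's behavior is more involved - it writes dirty buffers in batches, typically ahead of eviction):

```python
class Buffer:
    def __init__(self, block, dirty=False):
        self.block, self.dirty = block, dirty

def evict(buffer, datafile):
    """Throw a buffer out of the cache. A dirty buffer must first have
    its changes written back to the data file; a clean one is simply
    discarded, since the data file already holds the same content."""
    if buffer.dirty:
        datafile[buffer.block] = "changed data"  # write back first
        buffer.dirty = False
    return buffer.block  # this cache slot is now free for a new buffer

datafile = {}
freed = evict(Buffer(42, dirty=True), datafile)
print(freed, datafile)  # 42 {42: 'changed data'}
```

Either way the buffer leaves the cache; the dirty one just pays for a write on the way out.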