satheesh nagaraj, December 10, 2014 - 5:57 pm UTC
Thanks Tom for the excellent info.
"put it into the cache, access and process it from the cache - read another bit and do the same - and flush out some of the earlier bits"
Could you please elaborate on this statement?
After processing the first set of data blocks, will Oracle send the results to the user and then process the remaining blocks in the cache after retrieving them from disk? Or is the required data stored internally and sent to the application/user only after the entire set has been processed?
December 11, 2014 - 7:28 am UTC
If you have access to Expert Oracle Database Architecture, I go into great detail on this.
Basically - instead of caching all of the blocks that we can from a large table full table scan - we only let a small portion of the table reside in the buffer cache. In that fashion, a large table full table scan won't have the side effect of "flushing the buffer cache" - removing all of the other tables' cached information. We know the large table would consume a lot of the cache - so we do not let it.
the blocks are stored in the cache - they are accessed by any user process that wants them - but they are marked to age out fast since they came in from a full scan.
conventional path read using the buffer cache - but only using a small portion of it
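One way to see this behavior in practice - a sketch only, assuming DBA access to V$BH and a hypothetical large table BIG_T owned by SCOTT - is to full scan the table conventionally and then compare how many of its blocks actually remain cached against its total block count:

```sql
-- After a conventional-path full table scan of BIG_T, count its cached blocks.
-- BIG_T/SCOTT are placeholder names; run these as a suitably privileged user.
select count(*) as cached_blocks
from   v$bh
where  objd = ( select data_object_id
                from   dba_objects
                where  owner = 'SCOTT'
                and    object_name = 'BIG_T' );

-- Compare with the table's total size in blocks (statistics must be current).
select blocks as total_blocks
from   dba_tables
where  owner = 'SCOTT'
and    table_name = 'BIG_T';
```

On a large table you would typically find only a small fraction of the total blocks still cached, which is the "small portion" behavior described above.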
Rajeshwaran Jeyabal, December 17, 2014 - 6:17 am UTC
Tom,
We are able to understand direct path reads and were able to create a test case for them:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3931724600346905308
Can you help us understand:
#1) In which scenarios, or for which types of queries, will Oracle use "conventional path read using the buffer cache - but only using a small portion of it"? (Is that by doing single-block IOs?)
#2) Which wait events in tkprof/AWR denote that a table was read using "conventional path read using the buffer cache - but only using a small portion of it"? (Is that db file sequential read?)
December 17, 2014 - 7:03 pm UTC
#1) that happens for all large table full table scans.
https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:255215154182 unless you alter that behavior with alter table t cache
#2) db file scattered read is typically what you will see.
db file sequential read waits are single-block IOs; db file scattered read waits are the multi-block reads we use in full scans.
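To see these events for your own session - a sketch, assuming you have been granted access to V$SESSION_EVENT:

```sql
-- Show the single-block and multi-block read waits accumulated by this session.
select event, total_waits, time_waited
from   v$session_event
where  sid = sys_context('userenv', 'sid')
and    event in ('db file sequential read', 'db file scattered read');
```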
A reader, December 19, 2014 - 2:53 pm UTC
I was interviewed on sequential reads and scattered reads. Can you please explain the db file sequential read and db file scattered read wait events?
December 22, 2014 - 7:42 pm UTC
db file sequential read is a wait event caused by single-block IOs. The name comes from the fact that it most often happens when reading a single block from an index followed by a single block from a table (then index, then table, then index, then table, and so on - in sequence).
db file scattered read is a wait event caused by multi-block IO, where we read a set of blocks at a time (typically up to 1MB), not singly. The name comes from the fact that we read a bunch of blocks and then scatter them in the buffer cache by hashing their dba (data block address - file.block).
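A small sketch of the access patterns that produce each event (BIG_T and its indexed column ID are hypothetical names):

```sql
-- Indexed lookup: root -> branch -> leaf -> table block, read one block
-- at a time, showing up as "db file sequential read" waits.
select * from big_t where id = 42;

-- Full table scan: blocks read in multi-block chunks, showing up as
-- "db file scattered read" waits.
select count(*) from big_t;

-- The upper bound on each multi-block read (in blocks), in SQL*Plus:
show parameter db_file_multiblock_read_count
```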
A reader, February 26, 2015 - 4:13 pm UTC
thanks. I am back into DBA work again after doing development work for short period of time
Where is the data stored after its deletion from the buffer cache?
Tom, March 04, 2015 - 10:06 am UTC
Hi Tom,
I'd like to ask the same question as the first reviewer. I understand that the table would be copied to (and aged out of) the buffer cache in small chunks, but what we would like to know is what happens to the processed data before it is sent back to the user.
Is it stored somewhere else, or is it returned to the user in chunks too?
Thanks
forms9i
A reader, March 12, 2015 - 3:49 pm UTC
Is Forms 9i with patchsets compatible with an Oracle 11g database? Appreciate your help.
A reader, April 08, 2015 - 2:35 pm UTC
Can you please reply to my question above? Thanks.
Db blocks in db buffer
Sagar, January 16, 2024 - 11:10 am UTC
When database blocks are brought into the buffer cache in small chunks from a large table, how is the data delivered to the user? In chunks, or only after it has been fully processed?
January 16, 2024 - 3:12 pm UTC
I'm not sure what you're asking here - please clarify.
After processing in chunks, where does the dirty block go?
nsiva, March 11, 2025 - 8:56 am UTC
After processing in chunks, where does the dirty block go when the buffer cache is not large enough to grow?
March 20, 2025 - 2:02 am UTC
Not sure what you mean, but a dirty buffer is not too dissimilar from any other buffer in the buffer cache, ie, it has to "earn" its right to stay there.
For a clean buffer, if it gets too "old", then it becomes a candidate for being "thrown out" as new buffers come in.
For a dirty buffer, if it gets too "old", it too becomes a candidate for being "thrown out" as new buffers come in, the difference being that "thrown out" also means we will first write the changes from that dirty buffer back to the data files.
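You can observe the mix of clean and dirty buffers directly - a sketch, assuming DBA access to V$BH:

```sql
-- V$BH exposes one row per cached buffer; DIRTY = 'Y' marks buffers whose
-- changes have not yet been written back to the data files by DBWR.
select dirty, status, count(*) as buffers
from   v$bh
group  by dirty, status
order  by dirty, status;
```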
What about versions of the same block?
Christopher Curzon, June 24, 2025 - 6:54 pm UTC
I would like to understand better how Oracle manages different >versions< of the same block (reflecting different SCNs).
But first, just to confirm my understanding:
1) Data Structures:
The SGA contains the buffer cache. x$bh is a fixed array that has a 1-to-1 relationship to the buffers in the cache. So, if there are 1 million buffers, then x$bh will have 1 million entries in it. Additionally, there is a hash table that converts a data block address into an index into x$bh. (the hash table will have considerably fewer entries than x$bh.)
Is this a correct understanding of the data structures?
2) Now suppose transaction A updates a row at dba(25, 407). The dba is hashed, the correct entry in x$bh is latched, and the block is read from disk into the buffer cache. Now the row is updated. Is the original block updated in place (making it "dirty")? Or is it cloned and then the clone is updated?
In the former case, the cache buffers chain would have just the original entry. In the latter case, there would be two entries in the chain - one pointing to the original version of the block; and one pointing to the updated version.
This question will help me get a sense of how long the cache buffer chain can grow. Only 5 to 10 entries, maybe? Or possibly hundreds?
Thanks for helping me to understand.
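For what it's worth, the 1-to-1 relationship described in point 1 can be checked directly - a sketch, assuming you can connect as SYS (the X$ tables are visible only to SYS):

```sql
-- One X$BH row per buffer in the cache: these two counts should match.
select count(*) from x$bh;

select sum(buffers) from v$buffer_pool;
```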
Christopher Curzon, June 25, 2025 - 6:30 pm UTC
I think my question can be succinctly stated this way: How does Oracle handle multiple versions of the same data block?
Consider a data block -- let's say (file 23, block 555). If transaction A wants to update row 1 in that block, then the block will be read from disk and placed into the buffer cache. Then row 1 is updated (but not committed).
Now transaction B comes along and wants to select row 1 from that same data block. Since transaction A has not committed, the original version of the data block must be reconstructed from rollback data. Where is that reconstructed version of the block stored? Is it also in the buffer cache? Or somewhere else like the PGA? Are there two versions of that block in the buffer cache?
Each version of a data block has an entry in x$bh. Would all of these blocks be present in the cache buffer chain? If there are multiple versions of a block present because of multiple transactions, when does Oracle fold all of those changes into a single block that gets written to disk?
Thanks!
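One way to observe multiple versions of a block - a sketch only, assuming DBA access to V$BH and a hypothetical table T: update a row of T in session 1 (do not commit), query T in session 2, then look at the buffer cache. You will typically see one "xcur" (current) buffer for the block plus one or more "cr" (consistent read) clones built for the reader:

```sql
-- List the cached copies of each block of T by status.
-- STATUS = 'xcur' is the current version; 'cr' rows are consistent-read clones.
select file#, block#, status, dirty
from   v$bh
where  objd = ( select data_object_id
                from   dba_objects
                where  object_name = 'T' )
order  by file#, block#, status;
```

Note that only the current ("xcur") version of a block is ever written to disk; the CR clones are throwaway read-consistent copies and are never folded back into the block or written out.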