Sreenivas.M, May 07, 2010 - 6:34 am UTC
Nice explanation, thanks Tom.
After Block is loaded into buffer cache
A reader, November 12, 2010 - 4:35 pm UTC
Hello Sir,
After a block is loaded into the buffer cache, how is it copied to the user process?
For example, when I select a compressed block, it moves into the buffer cache still compressed. But where is it decompressed (PGA/UGA/CGA)?
Also, for a direct read, Oracle reads this block into user memory; which of PGA/UGA/CGA?
When I dump the buffer cache at level 10, I can view the block as compressed. But where is it decompressed and kept?
So, basically, how does a block travel from the buffer cache to user memory (the screen)?
November 12, 2010 - 4:57 pm UTC
.. How it's copied into user process ? ...
the block is copied "as is".
the act of decompressing is done as you access the rows; it is just dereferencing a pointer in the row to data in the block header. It is not compressed the way a ZIP file would be. It is a rather easy "decompression".
direct reads go into pga memory.
We do not decompress the blocks anywhere really - we process them "as is"
blocks do not get back to clients - clients get result sets, answers - not raw inputs.
So, what stats or counters?
A reader, November 13, 2010 - 4:42 pm UTC
Thank you Sir, that makes my understanding clear. But one more thing: is this act of decompression recorded somewhere in the stats? For example, if we do some kind of redo activity, the "redo size" counter in v$mystat increases/decreases. Can we monitor this for compression?
November 15, 2010 - 8:59 am UTC
there are no stats for compression. It is just a process that takes place.
You insert data using compression - we compress it.
You retrieve data from a compressed block - we just retrieve the data - the act of decompressing is so trivial that you won't be able to measure it (except in improved response times - since you scan less data overall)
The only way to "measure it" would be to measure the table before and after compressing.
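A minimal sketch of that before/after comparison, assuming a table named T (the table name, and the use of basic table compression via ALTER TABLE ... MOVE, are assumptions for illustration):

```sql
-- Measure the segment size before compressing (table name T is hypothetical):
select blocks, bytes from user_segments where segment_name = 'T';

-- Rebuild the table compressed (basic table compression):
alter table t move compress;

-- Note: the MOVE invalidates the table's indexes; rebuild them afterwards.

-- Measure again and compare blocks/bytes:
select blocks, bytes from user_segments where segment_name = 'T';
```

The difference in BLOCKS between the two queries is the space (and hence the I/O on a full scan) that compression saved.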
Regarding DB Cache
Asheesh, December 26, 2012 - 6:46 am UTC
Hi,
Thanks for your nice replies and knowledge sharing.
I have a question regarding the DB Cache. You said that in the case of a Parallel Query there is direct I/O bypassing the DB Cache. How do we know whether the DB Cache is used or not in the case of a FULL TABLE SCAN?
January 04, 2013 - 10:59 am UTC
typically you would trace it, and the wait event you'd see would be direct path reads instead of db file scattered reads.
you don't have to trace, you can use ASH data retrospectively as well.
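A hedged sketch of that retrospective ASH check (requires the Diagnostics Pack license; the SQL_ID value below is a placeholder, substitute your own):

```sql
-- Count the full-scan-related wait events recorded for one statement.
-- 'direct path read' => the scan bypassed the buffer cache;
-- 'db file scattered read' => the scan went through the buffer cache.
select event, count(*)
  from v$active_session_history
 where sql_id = 'abcd1234efgh5'          -- hypothetical SQL_ID
   and event in ('direct path read', 'db file scattered read')
 group by event;
```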
Tom, just one clarification.
A reader, January 05, 2013 - 12:29 am UTC
Hi Tom,
I have just one question: when using Oracle parallel update and delete, is the buffer cache being used?
My own understanding is that for direct path operations the SGA is not used, so multi-versioning does not come into play, which is logical, like an insert with the append hint. But for update and delete, our DBA is claiming that Oracle reads the data into the SGA and then updates/deletes it.
My opinion is that since parallel update/delete are direct load operations, they just generate undo vectors without involving the SGA; that is, the data is not read into the SGA, because if it were, the SGA hit ratio would decrease.
Please clarify.
January 14, 2013 - 10:28 am UTC
direct path doesn't bypass multi-versioning in any way.
insert with the append hint - the select uses multiversioning; the insert component never does (the insert is as of here and now, in current mode)
update uses multiversioning always to read the rows - and then does the update in current mode.
same with delete.
update and deletes are never direct path operations. they are always conventional path.
You can do a direct path load using insert /*+ append */ but that only speaks to the WRITES, the query component may well be using the buffer cache.
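A minimal sketch of that direct path load (table names are hypothetical):

```sql
-- Direct path INSERT: the WRITES go above the high-water mark,
-- bypassing the buffer cache, but the SELECT side may well still
-- read source_table through the buffer cache.
insert /*+ append */ into target_table
select * from source_table;

-- A commit is required before this session can read target_table again;
-- until then, queries against it raise ORA-12838.
commit;
```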
Subhadeep Banerjee, April 12, 2013 - 11:07 pm UTC
Hi Tom,
How is data read from the buffer cache? Say we are executing a new SQL query (i.e., the query plan is not cached in the library cache), but the data it's going to fetch is present in the buffer cache and the query is using an index as well. Then what will the process be? Will it use the index? As far as I understood (maybe wrongly), an index fetches the data from disk itself with the help of the rowid. So, how will the index fetch it from the buffer cache, and what will the whole process of this fetch be?
April 22, 2013 - 6:52 pm UTC
the plan is independent of the state of the cache, the optimizer receives a query and decides "this is what I'll do - I'll use this index to read this table".
Once we have the plan, we'll run it. and the logic used is exactly that in the original answer I gave above.
to use the index, for example, we'd have to read the root block of the index. The root block has a DBA (data block address) - a file.block address. We take that file.block and look it up in the cache - if not there, we read it into the cache and return it (if it was there, we just return it, basically)
and that root will point to a branch (gives us another DBA) and we do the same to that and so on until we get to the leaf block and find a rowid - which has the DBA of a block in a table and we read that... and so on.
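You can see the file.block address embedded in a rowid yourself with the DBMS_ROWID package (the table name T is an assumption for illustration):

```sql
-- Decode the DBA components out of a few rowids: the relative file
-- number and the block number within that file.
select rowid,
       dbms_rowid.rowid_relative_fno(rowid) as file#,
       dbms_rowid.rowid_block_number(rowid) as block#
  from t
 where rownum <= 5;
```

That file#/block# pair is exactly what the leaf-block rowid hands to the buffer cache lookup described above.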
How will it work in 12c
Dhruva, July 18, 2013 - 10:13 am UTC
Hi Tom,
In version 12c multitenant architecture, the SGA (and hence the DB Buffer cache) will be shared amongst the various instances running on the server.
So there is an OLTP application that has a lot of MRU blocks in the cache (from reference data lookups, etc.), but then suddenly a BI user runs a mammoth report performing full scans on large tables and wipes out those blocks from the cache, causing the OLTP application to perform file I/Os all over again.
In a scheme of things where the SGAs are separate, one application will not impact another in this manner.
The speakers at the 12c launch confirmed that no boundaries or limits will be imposed on the SGA for the various instances.
Is this a cause for concern?
Thanks
July 18, 2013 - 5:30 pm UTC
full scans do not wipe out the buffer cache; they never have. Blocks read from large-table full table scans are processed differently in the cache than single-block I/Os are.
we age out the full table scan blocks rapidly so they do not flood the buffer cache.
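One hedged way to observe that aging behavior yourself (assuming a large table named T that you full-scan): count its blocks in the cache via V$BH before and shortly after the scan.

```sql
-- Count how many of table T's blocks are currently in the buffer cache.
-- Run before a full scan, immediately after, and again a minute later:
-- the count should not come close to the total cache size, and it
-- drops again quickly as the scanned blocks age out.
select count(*)
  from v$bh b, user_objects o
 where b.objd = o.data_object_id
   and o.object_name = 'T';            -- hypothetical table name
```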