Thanks for the quick response
A reader, April 23, 2006 - 10:48 am UTC
Why 100 gets?
Anastasia Grishelenok, November 29, 2008 - 2:53 pm UTC
Hi, Tom,
I wonder why the reader supposes that it would take about 100 gets from undo to find this row. Indeed, there will be 100 blocks in undo, but only one of them will have SCN 123456, and only that one is needed for our fetch - so it should be only one get from undo.
Am I right?
December 01, 2008 - 6:11 am UTC
Ok, so we know we want scn 123456.
In the buffer cache, the current version of the block is there - and that block is as of at least SCN 123456+100.
So, let us call that block "as of 123600"
Now, we process your query that is "as of 123456", we find the block in the cache and it is as of 123600.
We need to put it back the way it was.
So, we get the last change made to the block (UNDO) and apply it. Now we have a version of the block that is, say, as of 123597. 123597 is greater than 123456. So, we get the UNDO of the next to last change and roll it back - now we have the block as of 123595. Still greater than 123456.
We do this 98 more times - NOW we have the version of the block as of the correct point in time.
The problem is - the block as of scn 123456 probably doesn't exist anymore, we have to reconstruct it, we do that by applying UNDO and rolling the block back - and we have to do that over and over to get it back in time.
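To make that concrete, here is a small sketch (the table name t, the data, and SCN 123456 are made up for illustration); the "consistent gets" statistic reported by autotrace in SQL*Plus includes the undo blocks visited while rolling the buffered block image back:

set autotrace traceonly statistics

-- hypothetical table; imagine ~100 committed single-row updates have
-- hit its block since scn 123456
select * from t as of scn 123456;

-- the consistent gets for this query climb with the number of undo
-- records that had to be applied to rebuild the block as of scn 123456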
David Aldridge, December 01, 2008 - 10:18 am UTC
In terms of predicting the performance of a flashback (AS OF) query, which presumably works along the same lines, is it safe to say that the overhead of the flashback clause would be related to the number of _individual changes_ that each block requiring reconstruction has been subjected to?
How efficient is it to find the last change made to a block in UNDO? Is there some kind of index maintained on UNDO that is used for that, or is there a certain amount of scanning of multiple blocks involved to find that single change?
December 01, 2008 - 10:57 am UTC
absolutely - I like to say "flashback query based on UNDO is inherently non-scalable, the further back in time you go, the longer it takes, as the odds that you have to process more and more changes go up over time"
The undo is found by looking at the block header - it points to the transactions that modified this block. As we roll back the block, we are rolling back this transaction header too - so as we roll back, we create the data we need to roll back further. Think of it like a linked list - it is very linear - undo a change, see if we are rolled back far enough; if not, roll back a change, see if we are done; if not, roll back a change, and so on.
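If you want to see that chain of changes from SQL, a flashback versions query walks the same undo and returns one row per version it can reconstruct (a sketch only - table t and column x are hypothetical):

select versions_startscn, versions_endscn, versions_operation, x
  from t versions between scn minvalue and maxvalue
 where x = 1
 order by versions_startscn nulls first;

-- each row returned corresponds to one change that undo could roll the
-- row back through, from the oldest reconstructable version to the current one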
Sometimes we can find the version of the block we need in the cache straight off and skip the work entirely (we store multiple versions of blocks in the cache), or we might find a version that is 'old' (not current) but not old enough and hence skip some of the work (we'd start at a point back in time, just not far enough back in time).
But the further back in time a query goes - in general - the longer it will take.
Until you add the flashback data archive to the mix of course :) In that case - we achieve scalable flashback times because we maintain old versions of rows and can go directly back to a point in time, rather than rolling back. So, using the flashback data archive to go two years back will almost certainly be more efficient than using undo-based flashback query to go two days back.
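One way to see that non-scalability for yourself (a sketch - table t and the intervals are hypothetical) is to run the same flashback query at increasing ages against a table that is modified continuously and compare the statistics:

set autotrace traceonly statistics
select count(*) from t as of timestamp systimestamp - interval '5' minute;
select count(*) from t as of timestamp systimestamp - interval '1' hour;
select count(*) from t as of timestamp systimestamp - interval '8' hour;

-- on a busy table, consistent gets and elapsed time generally climb as the
-- flashback point moves further into the past - until the undo has aged out
-- entirely and you hit ORA-01555 (snapshot too old)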
Thomas, December 01, 2008 - 1:02 pm UTC
Can you elaborate on how the flashback data archive works?
How does Oracle find the data it needs for flashback queries?
Will the flashback data archive be used automatically for flashback queries or do we have to enable this?
December 02, 2008 - 5:46 am UTC
you put tables into a flashback data archive and you specify a retention period (30 days, 3 years, whatever)....
Then, as UNDO is generated for that segment (generated naturally as part of transactions), a new background process, FBDA (the flashback data archiver), 'mines' the undo for these segments and recreates the old record. This old record (and other bookkeeping information) is inserted into the flashback data archive table for this table (just like a history table).
When you use flashback query (it is tied right into that feature, you just use flashback query) - it can go right to the old record by 'date' (the history has the timestamps) rather than processing undo record after undo record.
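A sketch of the setup (the archive name, tablespace, quota, retention period and table are all hypothetical); note that the query at the end is ordinary flashback query syntax, unchanged:

create flashback archive emp_history
  tablespace users
  quota 1g
  retention 3 year;

alter table emp flashback archive emp_history;

-- same flashback query syntax as before - but now the old rows come from
-- the archive's history table rather than from rolling back undo
select * from emp as of timestamp systimestamp - interval '2' year;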
David Aldridge, December 01, 2008 - 2:48 pm UTC
>> "flashback query based on UNDO is inherently non-scalable, the further back in time you go, the longer it takes as the odds you have to process more and more changes goes up over time"
My recent experiences agree ... "use with caution".
December 02, 2008 - 7:18 am UTC
No, better said as:
"use with knowledge"
No need to be 'cautious', just understand what it is, what it does, how it does it and you'll inherently understand the implications, the ramifications, the side effects - and what it can and cannot reasonably do...