
Question and Answer

Tom Kyte

Thanks for the question, Reader.

Asked: April 23, 2006 - 9:08 am UTC

Last updated: December 02, 2008 - 7:18 am UTC

Version: 10.2.0

Viewed 1000+ times

You Asked

Hi Tom,

With the ROWDEPENDENCIES feature we can store the SCN at the row level.
Suppose we have a table created with ROWDEPENDENCIES that has 2 rows.

create table t rowdependencies as select rownum r from dict where
rownum < 3;

-- these 2 rows would have the same SCN, say 123456

We open a ref cursor for r = 1 but do not print the result.

----
var x refcursor
exec open :x for select r from t where r = 1 ;
----

We then update the second row of the table (where r = 2) 100 times.

----
begin
    for i in 1..100 loop
        update t set r = r where r = 2;
        commit;
    end loop;
end;
/
----

Now rows 1 and 2 will have different SCNs: row 1 would still be 123456 and row 2 will be some higher SCN, x.
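
(One way to verify the per-row SCNs, assuming 10g or later: the ORA_ROWSCN pseudocolumn is row-accurate on a table created with ROWDEPENDENCIES.)

----
select r, ora_rowscn from t;
----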

If I print the x ref cursor now, it still takes more than 100 gets
to find this row.

When the row we want has the same SCN in the current block, why is
Oracle reading all the undo blocks?

Regards,


and Tom said...

Because read consistency is done at the BLOCK level, not the row level.

Queries process blocks, not rows (true, they find rows, but they process blocks).
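
A minimal sketch of how one could observe those gets, assuming SQL*Plus and select privileges on v$mystat/v$statname (statistic name as in 10g):

----
-- snapshot the session's "consistent gets" before fetching
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'consistent gets';

print x

-- run the same query again; the delta (100+) is the work done
-- rebuilding the block back to the cursor's snapshot SCN
----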




Comments

Thanks for quick response

A reader, April 23, 2006 - 10:48 am UTC


Why 100 gets?

Anastasia Grishelenok, November 29, 2008 - 2:53 pm UTC

Hi, Tom,
I wonder why the Reader supposes that it will take about 100 gets from undo to find this row. Indeed, there will be 100 blocks in undo, but only one of them will have SCN 123456, and only that one is needed for our fetch, so it should be only one get from undo.
Am I right?
Tom Kyte
December 01, 2008 - 6:11 am UTC

Ok, so we know we want scn 123456.

In the buffer cache, the current version of the block is there - the block is as of at least 123456+100.

So, let us call that block "as of 123600"

Now, we process your query that is "as of 123456", we find the block in the cache and it is as of 123600.

We need to put it back the way it was.

So, we get the last change made to the block (UNDO) and apply it. Now we have a version of the block that is, say, as of 123597. 123597 is greater than 123456. So, we get the UNDO of the next-to-last change - roll back - and now we have the block as of 123595. Still greater than 123456.

We do this 98 more times - NOW we have the version of the block as of the correct point in time.

The problem is - the block as of scn 123456 probably doesn't exist anymore, we have to reconstruct it, we do that by applying UNDO and rolling the block back - and we have to do that over and over to get it back in time.
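
A sketch of how to see this rollback work directly, assuming access to v$mystat/v$statname - each application of an undo record is counted in a session statistic:

----
-- measure before and after printing the ref cursor; the delta
-- should be about 100 undo records applied for this example
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'data blocks consistent reads - undo records applied';
----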

David Aldridge, December 01, 2008 - 10:18 am UTC

In terms of predicting the performance of a flashback (AS OF) query, which presumably works along the same lines, is it safe to say that the overhead of the flashback clause would be related to the number of _individual changes_ that each block requiring modification has been subjected to?

How efficient is it to find the last change made to a block in UNDO? Is there some kind of index maintained on UNDO that is used for that, or is there a certain amount of scanning of multiple blocks involved to find that single change?
Tom Kyte
December 01, 2008 - 10:57 am UTC

absolutely - I like to say "flashback query based on UNDO is inherently non-scalable; the further back in time you go, the longer it takes, as the odds that you have to process more and more changes go up over time"

The undo is found by looking at the block header - it points to the transactions that modified this block. As we roll back the block, we are rolling back this transaction header too - so as we roll back, we create the data we need to roll back further. Think of it like a linked list - it is very linear: undo a change, see if we are rolled back far enough; if not, roll back another change, see if we are done; if not, roll back another change, and so on.

Sometimes we can find the version of the block we need in the cache straight off and skip the work (we store multiple versions), or we might find a version that is 'old' (not current) but not old enough, and would hence skip some work (we'd start at a point back in time, just not far enough back in time).

But the further back in time a query goes - in general - the longer it will take.

Until you add the flashback data archive to the mix, of course :) In that case, we achieve scalable flashback times because we maintain old versions of rows and can go directly back to a point in time, rather than rolling back. So, using the flashback data archive to go two years back will almost certainly be more efficient than using undo-based flashback query to go 2 days back.
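
For reference, an undo-based flashback query is just the AS OF clause; a sketch (assuming sufficient undo retention to go back that far):

----
select * from t as of timestamp (systimestamp - interval '2' day);

-- or, by SCN:
select * from t as of scn 123456;
----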

Thomas, December 01, 2008 - 1:02 pm UTC

Can you elaborate on how the flashback data archive works?
How does Oracle find the data it needs for flashback queries?

Will the flashback data archive be used automatically for flashback queries or do we have to enable this?
Tom Kyte
December 02, 2008 - 5:46 am UTC

you put tables into a flashback data archive and you specify a retention period (30 days, 3 years, whatever)....

Then, as UNDO is generated for that segment (generated naturally as part of transactions), a new background process FBDA (flashback data archive) 'mines' the undo for these segments and recreates the old record. This old record (and other bookkeeping information) is inserted into the flashback data archive table for this table (just like a history table).

When you use flashback query (it is tied right into that feature, you just use flashback query) - it can go right to the old record by 'date' (the history has the timestamps) rather than processing undo record after undo record.
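
A sketch of the 11g flashback data archive setup (the archive and tablespace names here are made up for illustration):

----
create flashback archive fba1
    tablespace users quota 1g retention 3 year;

alter table t flashback archive fba1;

-- flashback query is unchanged - it transparently reads the
-- archive instead of rolling back undo:
select * from t as of timestamp (systimestamp - interval '2' year);
----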


David Aldridge, December 01, 2008 - 2:48 pm UTC

>> "flashback query based on UNDO is inherently non-scalable, the further back in time you go, the longer it takes as the odds you have to process more and more changes goes up over time"

My recent experiences agree ... "use with caution".
Tom Kyte
December 02, 2008 - 7:18 am UTC

No, better said as:

"use with knowledge"


No need to be 'cautious'; just understand what it is, what it does, and how it does it, and you'll inherently understand the implications, the ramifications, the side effects - and what it can and cannot reasonably do...