UNDO_RETENTION -- Increasing Value of it

Tom Kyte

Thanks for the question, Robert.

Asked: October 27, 2009 - 12:26 pm UTC

Last updated: October 29, 2009 - 1:50 pm UTC

Version: 10.2.0.4

You Asked

Hi Tom,

If we do not have a *guaranteed* undo_retention...
And if we have a fixed amount of undo disk space

Is there any benefit to increasing the value of UNDO_RETENTION?

Whether the value of UNDO_RETENTION is 100 or 1,000,000, won't Oracle keep the unused undo for the same length of time?

Thanks,

Robert.

and Tom said...

well, if you have a fixed amount of undo tablespace allocated (say 10gb) but the undo segments are only 1gb in size, we won't grow them unless and until we have to (the undo retention would be one of the things that would make us have to).

And we remove undo segments after a while if we don't need them anymore, move extents around and so on.

You would want to set your undo retention to whatever you need it to be in all cases, and then you can monitor whether you are expiring extents prematurely due to not having sufficient space available.

But do not think that just because your undo tablespace is 10gb we'll fill it up - we won't. We'll reuse existing undo whenever we can prior to extending a segment.
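As a rough sizing sketch, the space a given retention implies follows the usual rule of thumb: undo space ≈ retention (seconds) × undo generation rate (blocks/second) × block size. This is only a back-of-the-envelope estimate, and the workload numbers below are made up for illustration:

```python
def undo_space_needed(undo_retention_s, undo_blocks_per_sec, db_block_size=8192):
    # rough undo tablespace size (in bytes) needed to honor a retention target:
    # retention (seconds) x undo generation rate (blocks/sec) x block size
    return undo_retention_s * undo_blocks_per_sec * db_block_size

# hypothetical workload: 100 undo blocks/sec, 8 KB blocks, 900 second retention
print(undo_space_needed(900, 100) / 1024 ** 3)  # roughly 0.69 GB
```

Doubling the retention target doubles the estimate, which is why setting retention far beyond what you actually need directly inflates the undo footprint.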

Rating

  (8 ratings)


Comments

Then why not max it out?

Robert, October 28, 2009 - 6:22 pm UTC

Tom,

Exactly what I needed to know.


But *if* we have a fixed amount of disk space for undo
*And* we have some interest in using the undo (flashback query, avoid ora-1555, etc.)
*Then* why not just set undo_retention to 100000000 to make sure we ALWAYS KEEP as much undo as possible for flashback query etc.?

Do I have a good point?

Thanks,

Robert.
Tom Kyte
October 29, 2009 - 8:16 am UTC

too much of a good thing is a bad thing.


Configure more UNDO space than you really need, and we'll probably end up doing more physical writes to disk (dbwr)....

You see, undo is cached in the buffer cache and treated like most any other block. When the cache is near full, dbwr will have to flush out some blocks to disk - when a checkpoint happens, dbwr will have to flush out some blocks. The more blocks that can be dirty at a single point in time, the more work dbwr has to do.

Hence, if you have (for example) 1,000 undo blocks - and that is the limit on them - in a period of time we might use each one 10 times over before a checkpoint. At checkpoint time, we'd flush 1,000 undo blocks.

Now, say you have 10,000 undo blocks. In that same period of time, we would use each one ONCE resulting in 10,000 unique dirty blocks that have to be flushed to disk at the checkpoint (assuming your buffer cache can hold them all, if not, we'd have been flushing them to disk during the time between those official checkpoints...)
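The arithmetic above can be reduced to a one-line toy model (illustrative only, not Oracle internals): the number of distinct dirty undo blocks at checkpoint time is capped by the number of undo blocks allocated, because a small allocation forces the same blocks to be rewritten in place.

```python
def dirty_undo_at_checkpoint(allocated_undo_blocks, undo_block_writes):
    # every distinct undo block touched since the last checkpoint is dirty;
    # a small allocation forces rewrites of the same blocks, so at most
    # 'allocated_undo_blocks' distinct blocks need flushing
    return min(allocated_undo_blocks, undo_block_writes)

print(dirty_undo_at_checkpoint(1_000, 10_000))   # 1000  - each block reused ~10 times
print(dirty_undo_at_checkpoint(10_000, 10_000))  # 10000 - each block used once
```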


I don't want to alarm anyone here - you SHOULD set your undo retention to the period of time YOU NEED.

Not less (that would be bad - ora-1555's and inability to flashback query)
Not way too much more (as that could lead to increased writes by dbwr)

A reader, October 29, 2009 - 9:09 am UTC

Hello Tom,

Nice explanation!!!

So before reusing an undo block, dbwr has to write it back to the undo segment in between official checkpoints - is that correct?

Please confirm.

Thanks


Tom Kyte
October 29, 2009 - 10:18 am UTC

No, that isn't it at all (it is not the case that "before reusing an undo block, dbwr has to write it back to the undo segment in between official checkpoints")


If you have few total undo blocks, we can reuse them over and over and over in the cache without having to flush them (assuming the cache can contain them)

eg: you have 1,000 undo blocks allocated in total. In period X of time (during which dbwr is NOT asked to flush anything to disk), you generate 10,000 undo blocks. We'll use each of the 1,000 blocks about 10 times without doing any physical IO.

If you have a lot of undo blocks - we won't tend to reuse them in that period X of time (because you have a lot of them, we don't need to, so we won't). So, if in period X of time you generate 10,000 undo blocks but can only cache say 5,000 of them (remember also, undo isn't the only thing in the cache) - then we'll have to write out at least the first 5,000 in order to allow for the last 5,000 to fit.


We *don't have to* write the block out before reusing - if we did - then there would be no difference. It is the fact that we could reuse a smaller set of blocks totally in cache (perhaps) that makes this interesting at all.

A reader, October 29, 2009 - 10:37 am UTC

Thanks Tom,

But one point bugs me: say Oracle reuses undo blocks over and over again in the cache, and then some transaction rolls back, or a query needs undo to support multiversioning - how does Oracle find the undo?


Please explain



Tom Kyte
October 29, 2009 - 10:51 am UTC

you cannot reuse an undo block that is used by a non-committed transaction of course - but that doesn't really have anything to do with the prior discussion. The assumption would be that the block in question was available to be reused.

whether it gets to disk or not - we can only reuse the undo block if the transaction that generated it has committed.



If you need it for multiversioning - then we are into the realm of "set your undo retention properly (or let us autotune based on what we observe in current releases) and you'll have

a) not too little
b) not too much

undo allocated"



A reader, October 29, 2009 - 11:11 am UTC

Thanks Sir for your reply


<quote>

whether it gets to disk or not - we can only reuse the undo block if the transaction that generated it has committed.

</quote>

Say the transaction committed and there is no free undo block for subsequent transactions - before reusing the undo block, doesn't Oracle have to write it out, just as it would a data block?


Appreciate your reply

Thanks
Tom Kyte
October 29, 2009 - 12:02 pm UTC

... has to write it out just like it did with data block?
...

no, and it would not have to do it with the data block either.


We only have to write the blocks out of the cache when we

a) run out of room and need more space - then we'll flush dirty blocks to disk, or
b) are doing some sort of checkpoint activity and the block in question needs to be checkpointed.

A data block can stay in the cache for a long long long time and be used/reused over and over again (insert, delete, insert, delete, ..... ) without ever hitting the disk.

A reader, October 29, 2009 - 12:33 pm UTC

Thanks for clarification


so let me summarize this:

(1) if we set undo_retention such that we have 1,000 undo blocks in total, and all of them fit in the cache, we can reuse them without writing them out

(2) if we set undo_retention to a high value such that we have 10,000 undo blocks in total, and they do not fit in the cache, we have to write some of them to disk before reusing them



Please confirm

Many thanks
Tom Kyte
October 29, 2009 - 1:50 pm UTC

no, you are missing the gist here.


we are not discussing undo retention right now, we are discussing "what happens if I have a ton of undo segments configured" (you can have too much configured either by setting undo_retention way too high for what you need OR by using manual rollback segments)

forget undo retention - just pretend you have

1,000 undo blocks to be used (allocated in segments) in one case

10,000 undo blocks to be used (allocated in segments) in another case

and you have a limited resource (buffer cache blocks) that will be used to manage each.


If you generate 1,000 blocks of undo in a given unit of time - then both cases will work the same.

It only gets interesting when you generate lots of undo and you have the ability to overwrite, to reuse the blocks many times in the buffer cache before being forced to write them out.


I'll try once more - same situation, unit X of time, you generate 10,000 blocks of undo in both situations - both the "we have 1,000 blocks allocated" and the "we have 10,000 blocks allocated".

Assume both situations are after a cold start, NO blocks in cache initially. Just to make the math easy.

In case one, you have only 1,000 blocks to work with for undo. And it happens that the 1,000 blocks fit in the buffer cache and dbwr is NEVER asked to flush any out to free up space. You will use each block on average 10 times in the cache without writing it - you will read each one once of course to get it from disk into the cache, but only once. Then, at the end of unit X of time, a checkpoint comes and you flush the dirty blocks. Total undo written to disk in unit X of time: 1,000 blocks. Total undo read from disk in unit X of time: 1,000 blocks.

In case two, you have 10,000 blocks to work with for undo. Suppose you only have room for about 2,000 of them in the buffer cache (given that other blocks are in there). So, you will read the first 2,000 into the cache over time as you generate your undo. When we get about 2,000 in there - we'll have to move some out to make room for more. So (for ease of math) let's say dbwr flushes out 500 of them (500 writes). We can now do 500 more reads to get blocks 2001-2500 into the cache. Then we fill and flush and fill and flush. Then we get to the end of unit X of time and we checkpoint.

total blocks written during unit X of time = 8,000 (the first 8,000 undo blocks we filled and flushed) plus 2,000 blocks currently in the cache.

total blocks read during unit X of time = 10,000 - the 10,000 unique blocks.
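The two cases can be replayed with a toy cache model. The 2,000-block cache and the 500-block flush batch are the same for-ease-of-math numbers used above; the round-robin reuse and oldest-first flushing are illustrative assumptions, not real dbwr behavior:

```python
def simulate(pool_blocks, undo_writes, cache_capacity, flush_batch):
    # toy model: undo blocks are reused round-robin; a block already in the
    # cache is rewritten with no I/O, otherwise it is read in from disk, and
    # when the cache is full the oldest batch is flushed to disk first
    cache, reads, writes = [], 0, 0
    for i in range(undo_writes):
        block = i % pool_blocks
        if block in cache:
            continue                      # rewritten in place, no physical I/O
        if len(cache) >= cache_capacity:
            del cache[:flush_batch]       # dbwr flushes the oldest batch
            writes += flush_batch
        cache.append(block)
        reads += 1                        # first touch: read block into cache
    writes += len(cache)                  # final checkpoint flushes the rest
    return reads, writes

print(simulate(1_000, 10_000, 2_000, 500))   # (1000, 1000)   - case one
print(simulate(10_000, 10_000, 2_000, 500))  # (10000, 10000) - case two
```

Same workload, same unit of time: the small allocation does 1,000 reads and 1,000 writes, while the large allocation does 10,000 of each (8,000 flushed mid-period plus the 2,000 left in cache at the checkpoint).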



Excellent Explanation! Thank You

Robert, October 29, 2009 - 2:08 pm UTC


A reader, October 29, 2009 - 2:35 pm UTC

great explanation !!!

Thanks for your time

Superb explanation!

Fernando Mendes, June 28, 2013 - 6:39 pm UTC

Thanks