Snapshot too old X read only tablespaces
Question and Answer

Tom Kyte

Thanks for the question, Andre.

Asked: July 02, 2001 - 3:20 pm UTC

Last updated: October 20, 2011 - 9:48 pm UTC

Version: 816

Viewed 1000+ times

You Asked

Hi Tom.
If I have a query that frequently raises the error ORA-1555, I can get the
tablespace names of all the tables that the query uses and put them in READ ONLY mode. Is it guaranteed that the error ORA-1555 will not be generated?

Another question: is it a good idea to perform a
select * from ... on all tables involved in the process before executing the query?

Regards,
André.

and Tom said...

No, it is not assured. Relevant facts for the following test:

o db_block_buffers 300
o block size 8k


We'll show how a query against a table in a READ ONLY tablespace can hit the ORA-1555 quite easily in an active system (which yours must be, or you would not be getting the error).


tkyte@TKYTE816> create table t tablespace users unrecoverable as select * from all_objects;

Table created.

tkyte@TKYTE816> create table small( x int, y char(500) ) tablespace tools;
Table created.

tkyte@TKYTE816> insert into small values ( 0, null );
1 row created.

tkyte@TKYTE816> commit;
Commit complete.


tkyte@TKYTE816> create rollback segment rbs_small
2 tablespace tools
3 storage ( initial 1k next 1k minextents 2 maxextents unlimited )
4 /
Rollback segment created.

tkyte@TKYTE816> alter rollback segment rbs_small online
2 /
Rollback segment altered.

tkyte@TKYTE816> begin
  2    commit;
  3    set transaction use rollback segment rbs_small;
  4    update t
  5       set object_type = lower(object_type);
  6    commit;
  7    execute immediate 'alter tablespace users read only';
  8
  9    for x in ( select * from t )
 10    loop
 11        for i in 1 .. 20
 12        loop
 13            set transaction use rollback segment rbs_small;
 14            update small set x = x+1, y = x;
 15            commit;
 16        end loop;
 17    end loop;
 18  end;
 19  /
begin
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 10 with name "RBS_SMALL" too small
ORA-06512: at line 9

(the above is a side effect of delayed block cleanout)...

The solution is to have rollback segments that are sized correctly for your entire system -- NOT just for the write components of the system.

You need to size RBS given the system -- read and write, not just write.

You should use equi-sized rollback segments (do NOT have a "big one" and then other small ones). The probability of the ORA-1555 is in direct proportion to the size of your SMALLEST rbs. The largest only helps writes -- it does nothing for the reads.

It would not be a good idea to "select * from all tables" -- it would be better to size rollback properly for the system as a whole.


In almost 15 years, I've never once hit this in a properly sized system. I've fixed lots of systems that hit this (by sizing RBS correctly for them). Rollback is not something to skimp on or undersize.


Comments

Sagi, October 25, 2001 - 6:12 am UTC

Hi Tom !

I agree with you in terms of sizing the RBS properly. But then how do we determine the exact size of the RBS? If you could explain it with an example, it would be helpful for many of us.

Thanks in advance.

Tom Kyte
October 25, 2001 - 7:58 am UTC

You need to determine the longest running query you have.

You need to determine the length of the longest running transaction you do.


You need to ensure you have sufficient rollback permanently allocated (not that it can GROW, but rather that it is permanently allocated) so that you do not wrap in a period of time larger than the above two numbers.


Most people size for their insert/update/delete transactions. They forget about sizing for the read transactions. You can monitor the v$ tables to see when and how fast your RBS wrap. You need to determine how much rollback you generate in a period of time. Say you generate X blocks of undo in 1 minute. You have a long running query that takes 15 minutes. You need to have more than 16 * X blocks of undo configured.

In 9i, this is simplified, as you may use an UNDO tablespace instead of rollback segments. There you configure a retention period (how long you want to keep undo) and it sizes itself.
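Tom's rule of thumb above can be sketched as simple arithmetic. The following is a hypothetical Python helper (nothing Oracle provides -- the function name and the one-minute slack are illustrative) that computes the floor on permanently allocated undo:

```python
def min_undo_blocks(undo_blocks_per_minute, longest_query_minutes, slack_minutes=1):
    """Floor on permanently allocated undo blocks: the generation rate
    times a bit more than the longest query's duration. You need MORE
    than this so the segments do not wrap over undo the query still needs."""
    return undo_blocks_per_minute * (longest_query_minutes + slack_minutes)

# Tom's example: X = 100 blocks of undo per minute, a 15-minute query
# -> need more than 16 * X = 1600 blocks configured
print(min_undo_blocks(100, 15))  # 1600
```

The point of the slack term is exactly Tom's "16 being a bit larger than 15": the budget must outlast the query, not merely equal it.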

how about tablespaces that have been read only for months?

A reader, February 03, 2004 - 5:21 pm UTC

Hi

Your example is because of delayed block cleanout; that does not happen in tablespaces which have been in read only mode for a long time, right?

Tom Kyte
February 04, 2004 - 7:13 am UTC

It should not, as it will be known at that point in time that the transaction committed long ago (before your statement/transaction began) and the undo is not needed.

clarification

amit poddar, May 23, 2005 - 12:12 pm UTC

Hi,

In your response to the original question:

if you had done

select * from t (to do the delayed block cleanout)

before putting the tablespace in read only mode,
the ORA-1555 would have been avoided.

Correct?



Tom Kyte
May 23, 2005 - 3:48 pm UTC

correct.

it was an example that explains how you can get 1555 on read only tablespaces...

exporting the table...
selecting the records...
waiting long enough even... (so we know the scn's we encounter are "old")

Very great help

vijay, August 17, 2005 - 1:58 pm UTC

I was always wondering if there is a formula to size rollback segments correctly in order to avoid ORA-1555 errors. As per Tom's advice, I believe I can take the following approach.

1) Run statspack every hour. Note down the rollback size (bytes) written from the statspack report for every hour. Choose the maximum value from the list of 24. Say, for example, it is 6000K. So, the rate of rollback generation is 100K per minute.

2) Set SQL_TRACE to true with TIMED_STATISTICS=true for the whole system. Run tkprof with statements ordered by ELAPSED TIME desc. Say the longest running query takes 3 hours.

3) So, all my rollback segments should be of the same size, which is at least 180*100*1024 = 18 MB.

Thanks Tom
vijay


Tom Kyte
August 17, 2005 - 2:07 pm UTC

or go to 9i and let it all be very simply automated for you with undo_retention and v$undostat.

that would be my preference, so much easier.
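vijay's arithmetic above can be checked with a short sketch (a hypothetical Python helper -- the name is mine; statspack and tkprof only supply the two inputs):

```python
KB = 1024

def min_rbs_bytes(undo_kb_per_minute, longest_query_minutes):
    """Total rollback, in bytes, that must stay allocated: the peak
    undo generation rate times the longest query's duration."""
    return undo_kb_per_minute * KB * longest_query_minutes

# 100K of undo per minute, longest query runs 3 hours (180 minutes)
size = min_rbs_bytes(100, 180)
print(size)                        # 18432000 bytes
print(round(size / (KB * KB), 1))  # 17.6 MB -- vijay's "18 MB" rounds this up
```

As Tom notes above, on 9i the same inputs are better expressed as an undo_retention setting and left to automatic undo management.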

Size of Smallest RBS and 1555

VLS, May 27, 2006 - 4:10 pm UTC

Hi Tom,

Greetings !

In reply to the first question in this forum, you said

"You should use equi-sized rollback segments (do NOT have a "big one" and then other small ones). the probability of the ora-1555 is in direct proportion to the size of your SMALLEST rbs. The largest only helps writes -- does nothing for the reads."

I totally agree with you that we should have equi-sized rollback segments, but I would appreciate it if you could elaborate on how the ORA-1555 is directly proportional to the size of the SMALLEST rbs.

Thanks and Regards
VLS


Tom Kyte
May 27, 2006 - 9:36 pm UTC

Because if I need undo from the small one: it is being used, I need that rollback data, it is small, it cycles and gets overwritten -- I get the 1555.



Clarification needed

karthik, August 20, 2007 - 12:07 pm UTC

Tom,
You quoted in your reply to sagi's question:
"You need to have more than 16 * X blocks of undo configured." for accommodating both read and write transactions. Can you explain what that "more than" is -- what factors/attributes do you think we need for the "more than"?
Is it not enough to have the minimum accommodation for read and write transactions?


Tom Kyte
August 22, 2007 - 10:35 am UTC

you need to take that IN CONTEXT

...

Say you generate X blocks of undo in 1 minute. You have a long running query that takes 15 minutes. You need to have more than 16 * X blocks of undo configured.

.....


facts I presumed (because I wrote above that I did):

a) you generate X blocks of undo in 1 minute
b) you have a query that runs for 15 minutes
c) hence, you need 16 (16 being a bit larger than 15) times X blocks of undo allocated, to be able to run for 16 minutes without wiping out any undo that the 15-minute query might need.

Clarification

karthik, August 21, 2007 - 3:03 pm UTC

Tom,
Sorry if I was not clear in my previous question.
I know the average amount of undo generated per minute by
my application.
I know the length of the longest running query in my system

So I use a formula, with reference to your example:
Size = average undo generated per minute (x = 100K) * length of my longest running query (15 minutes) = 16*100*1024

I agree that the RBS should not be undersized for only the write part; at the same time, why should I overshoot to a big size
when I know exactly what the requirement is (16*100*1024)?

Just curious to know why you consider this calculation a minimum value.
Tom Kyte
August 22, 2007 - 11:56 am UTC

because 16 is just a little larger than 15

15 minutes was my query duration. 16 is the next number just a little longer than that.

Vincent Wu, November 12, 2007 - 3:49 am UTC

Hi Tom,

Your answers have always been clear and helpful, and I appreciate the effort you've put into this site.

From your previous posts I understand that:
a) the solution to 1555 is to size the RBS properly, and
b) if a row is not revisited, the cleanout would not occur.

But I think I am missing something here, because I cannot work out how the following situation can be resolved.

If a row is updated and not revisited for a long time, say 1 year, does it still need the original rollback data from one year ago? Does that mean the RBS needs to hold data for a year? Otherwise, how does Oracle perform the delayed cleanout?

Thanks
Tom Kyte
November 16, 2007 - 12:34 pm UTC

cleanouts happen at the block level, so b) is not correct.

and after some duration of time, the block will be known to be "old enough" so no, the undo need not persist for a year.

and undo isn't needed to clean out the block, only to restore the block to some prior image.

some confusion about "old enough"

jian huang zheng, June 09, 2009 - 1:04 pm UTC

Hi Tom

"after some duration of time, the block will be known to be "old enough" so no, the undo need not persist for a year."

Do you mean that under this circumstance Oracle would not use the undo to restore the image of the block, and would just use the current data? Could read consistency be compromised?

By the way, normally how old is "old enough"?

thanks!


Tom Kyte
June 10, 2009 - 8:40 am UTC

under that circumstance - we would know we don't need to use undo - that the block image is both "consistent and current"

the answer to "old enough" is typically either "right after it failed" - the second run of the query would now know - or "it depends" (on activity)

Thanks, very clear!

jian huang zheng, June 10, 2009 - 12:52 pm UTC


delayed block cleanout

aliyar, June 26, 2010 - 4:51 am UTC

Dear Tom ,

1) Consider finding a 1555 error in the alert log file along with the SQL. How can I find out whether it is due to a delayed block cleanout issue?


2) I read on one website some tips to avoid the 1555 in 9i/10g,

importantly: A) increase the undo tablespace size, B) increase the undo segment size.

Option A is OK. What about option B? In AUM, do we also need to increase the undo segment size? Will Oracle not take care of the undo segment sizes?


3) Another article says that sometimes Oracle will not allocate space if you use the autoextend feature on undo datafiles, and the SQL will fail with ORA-1555,

so you need to increase the size of the undo manually and run the SQL again.


While handling 1555 errors, I am often confused by the above questions. Kindly help me get a clear idea about this.


I appreciate your help; I have learnt a lot from your site. Please continue your service.

Thanks
Aliyar

Tom Kyte
July 06, 2010 - 10:59 am UTC

1) it is not recorded, it is just a 1555

2) if you have an undo tablespace, (a) makes sense, (b) makes no sense - since you would be using automatic undo management.

You would have to couple (a) with increasing the undo retention period OR using automatic undo tuning.


3) that is not true. If you use autoextend data files and we can autoextend them in order to save undo for your undo retention period - we will.

snapshot too old and readonly tablespace

Jim, March 12, 2011 - 11:35 pm UTC

Hi, Tom,

This is regarding the original example you gave above.

Assume that we made a tablespace read only without using a select count(0) to clean out all blocks in that tablespace; then sooner or later we will hit the snapshot too old error, is that right? (Because after some time, all of the undo tablespace will be overwritten, and when we visit some block in that tablespace, it needs its "uncommitted" header cleaned out to "committed", but since the tablespace is read only, we can't do so and hence get snapshot too old.)

If so, we HAVE TO make sure all blocks are cleaned out before turning a tablespace read only. But the Oracle doc and your site only mention that it is GOOD to do so; they don't mention that it is necessary.

Did I miss anything?
Tom Kyte
March 14, 2011 - 7:52 am UTC

Assume that we made a tablespace readonly without using select count(0) to cleanout all blocks in that tablespace, then sooner or later, we will hit snapshot too old error, is that right?


no, as time goes on - we would "know" implicitly that "every transaction before X has committed". That is why it works the second time around (and every time after that).

If I would have started the "select * from t" sometime later (without ever having run it) - we likely would not hit the ora-1555.


Continuing with my previous question

Jim, March 13, 2011 - 12:24 am UTC

Hi, Tom,

I don't quite understand why we get the ORA-1555 in the following example you gave.
Say at line 6 the SCN is X. At line 15, the earliest SCN possible would be X+1. Hence, at line 9, Oracle only needs to change the header from "uncommitted" to "committed"; it doesn't really need to "roll back" anything, so it shouldn't care whether that undo entry is there or not. Why does it raise snapshot too old in this case?


tkyte@TKYTE816> begin
  2    commit;
  3    set transaction use rollback segment rbs_small;
  4    update t
  5       set object_type = lower(object_type);
  6    commit;
  7    execute immediate 'alter tablespace users read only';
  8
  9    for x in ( select * from t )
 10    loop
 11        for i in 1 .. 20
 12        loop
 13            set transaction use rollback segment rbs_small;
 14            update small set x = x+1, y = x;
 15            commit;
 16        end loop;
 17    end loop;
 18  end;
 19  /

Tom Kyte
March 14, 2011 - 7:56 am UTC

And how do you know that at line 15 the earliest SCN possible was X+1 in general?

I mean - your reading the code and understanding the flow of things makes that possible FOR YOU, but in general - how would you know that?

All you know is that you have a block (forget the updates and commits happening in the example - they are not relevant to the query "select * from t" that is executing).

On this block, there is some transaction information (remember, the concept of an SCN being on a block is more conceptual than anything else - we have the transaction information). We do not know if this transaction is

a) still in progress
b) committed - and committed way before we began
c) committed - and committed after we began

So, we follow the trail from the block header to the undo - only to discover the undo is gone. So, now we can rule out (a) - but we cannot determine between (b) and (c).

So, we ora-1555, the system now bumps up that "X" value I mentioned above and we don't have to worry about it again.

bharathy, October 20, 2011 - 9:39 pm UTC

hi,
We are getting the "ORA-01555: snapshot too old: rollback segment number" error when we try to fetch data from another database and insert it into ours. We run this as a batch and, moreover, we are not using any cursors in this.

example

delete our_db_table;

begin
  insert into our_db_table
    (col1, col2, col3)
  select col1, col2, col3 from other_db_table;

  update db_erro_table;
  commit;
end;

We think that if, instead of hitting the other database every time, we use a cursor or a buffer, it will reduce the execution time and get rid of the above error.

kindly suggest


Tom Kyte
October 20, 2011 - 9:48 pm UTC

using procedural code will be slower than using a single insert as select.


You are hitting the ora-1555 simply because the remote database you are fetching from has not configured enough undo to be preserved - period.

Say your insert as select takes 30 minutes. The other site, the remote site is configured to preserve undo for 20 minutes (for example, I'm just trying to explain what is happening). When you are 25 minutes into your insert as select - you read a block from the remote site that was modified 23 minutes ago (shortly after you started). You need to roll back that block for read consistency - however, the undo you need is no longer available (they overwrote it - they only keep 20 minutes worth). Therefore, your select gets the dreaded ora-1555.

The solution:

o make the query go faster (but rewriting as procedural code will only make it go slower)

o have the remote site configure a more reasonable undo_retention - setting it to be longer than it takes to run your query.
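The failure Tom walks through above reduces to a single comparison, sketched here as a hypothetical Python helper (the name and signature are mine; the real fix is the undo_retention setting at the remote site):

```python
def query_at_risk(query_minutes, remote_undo_retention_minutes):
    """True when a distributed query can outlive the undo preserved at
    the remote site, making ORA-01555 possible during read consistency."""
    return query_minutes > remote_undo_retention_minutes

# Tom's example: a 30-minute insert-as-select, remote site keeps 20 minutes of undo
print(query_at_risk(30, 20))   # True  -> the ORA-01555 can occur
print(query_at_risk(30, 45))   # False -> undo outlives the query
```

Either side of the inequality works as a fix, which is exactly the two-bullet solution above: shorten the query, or lengthen the retention.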