questions about rollbacks/snapshot too old
Question and Answer

Tom Kyte

Thanks for the question, Geoffrey.

Asked: March 13, 2001 - 7:07 pm UTC

Last updated: July 09, 2010 - 7:10 pm UTC

Version: 7.3.4

Viewed 1000+ times

You Asked

Tom,

I am looking into ORA-1555 that we are getting and need clarification on a few points.

1/. Does Oracle try to recreate the snapshot based on the SCN of the whole block, or of a row? That is, if row A is updated and committed, but then a running cursor needs to access row B in the same data block, can we get ora-1555 if the rollback info for the row A change has been overwritten, even though we do not want row A?

2/. If you update a column in one row, is just the previous column value stored in the rollback segment, or the whole row, or the whole block?

I am confused as to why Delayed Logging Block Cleanouts get such
a big mention in all the texts I am reading re snapshot too old.

My understanding is that when a transaction is committed, the rollback segment header status goes to committed and a new SCN is assigned, but the data block status remains uncommitted until the next operation on the block.
3/. Is this possibly minutes, hours, or days away?

If the next operation finds the block status is uncommitted, it must determine whether or not it still is by querying the rollback segments. If the rollback info has been overwritten already, I guess the assumption is that the block has committed. I have read that Oracle can use an algorithm for determining a block's SCN during block cleanout even when the rollback segment has been overwritten, so I assume it can clean up the data block with the new SCN and status.

This would then give us the latest SCN for the block, but no rollback data, and we would then get ora-1555 depending upon whether this new SCN is before or after the current query SCN.

4/. I cannot see how this could be any different if there was an immediate cleanout.

In anticipation that you have corrected me on the above, here is my next question.
5/. Some of the books suggest avoiding the above by doing a full table scan before launching into your big cursor. Won't this just mean that the full table scan will get the ora-1555 instead?

Fetching across commits also gets the finger pointed at it, but isn't this just an example of the generic problem of running long queries against lots of update activity? Is it particularly relevant whether your session or another one is doing the updates?

Thanks for your help,

Geoff


and Tom said...

ahh, one of my favorite topics ;)

See http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:275215756923

1) yes, delayed logging block cleanout can cause this without any updates to the block in fact (just lots of other transactions happening). Also, we rollback the block, not just a row in the block.

2) changed bytes only. An insert stuffs a rowid (to be deleted upon rollback), an update just the changed columns, a delete the whole row.
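If you want to see that for yourself, here is a minimal sketch (it assumes a scratch table T you are allowed to modify and SELECT access on v$transaction, v$session and v$mystat; the column name and row count are arbitrary). v$transaction reports the undo blocks and undo records your current, uncommitted transaction has generated, so you can compare an update of one column against a delete of the same rows:

-- update a single column in a handful of rows, then look at the undo generated
update t set some_column = some_column where rownum <= 100;

select used_ublk, used_urec
  from v$transaction
 where addr = ( select taddr
                  from v$session
                 where sid = ( select sid from v$mystat where rownum = 1 ) );

rollback;

-- repeat with "delete from t where rownum <= 100" -- used_ublk/used_urec will be
-- noticeably larger, since the whole row image goes into the rollback segment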

as for:

"I am confused as to why Delayed Logging Block Cleanouts get such
a big mention in all the texts I am reading re snapshot too old.

My understanding is that when a transaction is committed,the
rollback segment header status goes to committed and a new SCN
is assigned, but the data block status remains uncommitted until
the next operation on the block. "

that is answered in great part by the link above. It is the very ACT of cleaning out a block that did not get cleaned out that causes the 1555. I can get an ora-1555 against a table EVEN THOUGH there have been no modifications to the table since my query began (the last modification to that table committed before my query began and NO ONE else has touched that table).


3) depends on your system but yes.

that is correct.

4) the immediate cleanout happens AFTER the transaction commits, but before the block is flushed from the buffer cache. It is as if the transaction never happened. The next guy to come along to do a READ (not a write, a write will still do a cleanout since the immediate cleanout is a fast cleanout that generates no redo log) will see the block "clean". It will not need to do the read aside to the rbs and hence cannot get the 1555.


5) see the link above, it suggests courses of action.

"fetching across commits" -- yes and no. With the fetching across commits what is happening is something like this:

for x in ( select * from t where x = some_value )
loop
    update t set y = f(y) where t.pk = x.pk;
    commit;
end loop;

here we are doing lots of tiny updates to the table we are reading. We are creating an artificial situation that should not happen. Normally, if I just ran:

for x in ( select * from t where x = some_value )
loop
    .... do something but don't update t ....
end loop;

we would not get the 1555 because the volume of updates to T is small. By doing the update to the table we are reading, we cause an artificial -- not normal -- situation whereby the table we are reading is getting updated by tons of tiny transactions.

If you think you are SAVING space by committing in a cursor for loop, you are only deluding yourself. You NEED that space in order to keep the query going. You aren't saving a byte of storage, and you shouldn't commit inside of the cursor for loop like that (how the HECK do you restart something like that???). If it is restartable -- you can then break the big query into a bunch of little queries and run the queries+updates in a single transaction -- avoiding the fetch across commit again.
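A minimal sketch of the alternative (t, x, y, f and some_value are just the placeholders from the loop above, and f is assumed to be callable from SQL): do the work as one statement in one transaction, so there is no fetching across commits at all:

-- one set-based statement, one transaction -- no fetching across commits
update t
   set y = f(y)
 where x = some_value;

commit;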



My advice on this topic, because of what I typically find in most cases, is that people are trying to "save space" and they are using as small as possible RBS's as they can. You need to be generous with them. You need to size them so that they do not wrap faster than your longest query. You do not size them so that your concurrent transactions are OK -- that is only half of the picture. Let's say your longest query is 5 minutes. On that system I would want to have at least "10 minutes" of rollback -- at least enough rollback permanently allocated (minextents set out that high) so that during peak it would take 10 minutes to wrap around.
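A rough way to put numbers on that -- a sketch assuming manual rollback segments and SELECT access on v$rollstat, where WRITES is the cumulative number of bytes written to each rollback segment since startup. Sample the total twice over a known peak interval; the delta divided by the interval is your undo generation rate, and the permanently allocated rollback should be at least that rate times your longest query's runtime (with headroom):

-- sample 1
select sum(writes) total_undo_bytes from v$rollstat;

-- wait a representative peak interval (say 10 minutes), then sample 2
select sum(writes) total_undo_bytes from v$rollstat;

-- (sample2 - sample1) / interval_in_seconds = undo bytes per second at peak
-- permanently allocated rollback >= bytes_per_second * longest_query_seconds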


Comments

questions about rollbacks/snapshot too old

Geoffrey Gray, March 13, 2001 - 11:13 pm UTC

Can I first say that I am staggered that I could receive a reply to this within a matter of minutes. Thanks very much.

It has answered all my questions, but raised one more.

Getting an ora-1555 because of some updates done 3 weeks ago is intriguing. If there are 1000's of uncleaned blocks, won't it take ages to catch them one by one?

If the block is cleaned out as you get the ora-1555, is the answer to scan through the table, ignoring 1555's, so you can clean everything up?

Thanks very much
Geoff Gray (ggray@mits.com.au)

delayed cleanout

kiro, June 04, 2002 - 11:12 am UTC

Tom
How can I clean out blocks after such a big update?
Will "select count(*) from table" work or not (for Oracle 8.1.6)?
thanks
kiro.

Tom Kyte
June 04, 2002 - 9:34 pm UTC

it'll just happen over time (a select *, resulting from say an export, would do it)
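A sketch of how you could verify the cleanout actually happened (it assumes SELECT on v$mystat and v$statname, and uses BIG_TABLE as a stand-in for the table you updated). A session that performs delayed block cleanouts while reading generates redo, so the 'redo size' statistic jumps on the first full scan and stays nearly flat on the second:

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';

select count(*) from big_table;   -- first scan: cleans the blocks out, generates redo

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';      -- noticeably larger if cleanouts were done

select count(*) from big_table;   -- second scan: blocks already clean, little extra redo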

Mr. Rory B. Concepcion, June 04, 2002 - 11:20 pm UTC

Is it advisable to have a parameter of 20 minextents for the rollback? Don't the rollback segments allocate another extent when short on space? This was the advice given to us by Oracle tech support. Please explain the difference between having only 2 minextents and 20. Thanks and God bless you!

Tom Kyte
June 05, 2002 - 6:33 am UTC

There are two factors here:

o how many extents
o the size of each extent.


If the extents are 1gig apiece -- this would be a 20gig RBS, pretty sizable.
If the extents are 1m apiece, this would be 20m, too small for practical use perhaps.


I personally start with 25, 1m extents and let them grow. You want lots of extents in a RBS generally, yes. 20 is smallish. But the point here is that the direct path insert shouldn't need any rbs (well, one block maybe). We are trying to figure out why it is blowing out the rbs.
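For reference, a hedged example of what "25, 1m extents" might look like as DDL -- the segment name, tablespace name and OPTIMAL value here are made up, adjust for your own system:

create rollback segment rbs01
  tablespace rbs
  storage ( initial 1m next 1m minextents 25 optimal 26m );

alter rollback segment rbs01 online;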

does high Rollback segment sizing make database unstable?

Pallabi, October 01, 2003 - 8:18 am UTC

We have occasionally been getting the ora-1555 error, and despite increasing the size of the rollback segments the problem doesn't seem to have disappeared. We have a database of around 240gb and 6 rollback segments of 1.5gb each, amounting to a total of 9gb of rollback. The query that we run retrieves data in an iterative manner (select...insert), with about 0.75 million records per iteration from a table containing 9-10 million records.

We tried intermediate commits (fetch across commits) to solve the problem and implemented a restart mechanism to take care of process failure, and the problem did seem solved for a while, but now it has started occurring again and there seems to be no end to increasing the size of the rollback segments.

It appears we will have to increase the rollback segment size anyway, whether we retain the fetch across commit processing or not.

Question is, will a size of 4gb for each rollback segment, with a total of 8 rollback segments, be too big for the current scenario? Will it result in any instability for the database?

On top of this, if we remove the fetch across commits process, will we need to increase the size even further?

We are running an environment that is a mix of a data warehouse and OLTP.

Is there any other solution to the problem? Need urgent help.



Tom Kyte
October 01, 2003 - 9:34 am UTC

"we tried intermediate commits (fetch across commits) to solve the problem"

that CAUSES the problem!!!!


you need enough rollback configured in order to NOT wrap a rollback segment in the time it takes to run your long running query.



Independent rollback segment selection.

alvin, October 02, 2003 - 11:36 pm UTC

16:38:56 rtbetelephony@RTBE.SUNFIRE01> SET TRANSACTION USE ROLLBACK SEGMENT RBS_BATCH;

Transaction set.

Elapsed: 00:00:00.70
16:39:08 rtbetelephony@RTBE.SUNFIRE01> exec pg_cdr2br.pc_trig_cleanup ( 1064484047000,1065398399999, 20655465 );
BEGIN pg_cdr2br.pc_trig_cleanup ( 1064484047000,1065398399999, 20655465 ); END;

*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 2 with name "RBS1" too small
ORA-06512: at "RTBETELEPHONY.PG_CDR2BR", line 459
ORA-06512: at line 1


Elapsed: 04:287:17233.01

Why did it choose rollback segment 1 when I specifically specified that rbs_batch is to be used?

Am I missing some setting in the init.ora parameters? I had to comment out the 'set transaction...' in my package and specifically set it in sqlplus prior to calling the package.

I am using Oracle 8.1.7.0.0 on Solaris 5.8. And prior to running the pg_cdr2br.pc_trig_cleanup packaged procedure I successfully ran another procedure using the same session.

I have now tried running it on a fresh session.

Tom Kyte
October 03, 2003 - 8:11 am UTC

it did not.

you have a long running query in that procedure.

that query needed undo information from rbs1.

rbs1 is "too small", it wrapped around and was reused while your query was running.

the set transaction told oracle where to put the UNDO YOU GENERATED.

Your queries need undo from virtually all rbs's (where OTHER people put their undo)

The probability of a 1555 is governed by the size of the SMALLEST rbs -- the smaller your smallest rollback segment, the more likely the error.

I *hate* "use rollback segment". I firmly believe all rbs's should be equisized and sized so as to not wrap for the length of your longest running query. I like automatic undo management in 9i for helping to solve this easily.
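For completeness, a sketch of the 9i automatic undo setup referred to above -- the tablespace name, datafile path, size and retention value are placeholders:

create undo tablespace undotbs
  datafile '/u01/oradata/undotbs01.dbf' size 2000m;

-- init.ora / spfile settings
-- undo_management = AUTO
-- undo_tablespace = UNDOTBS
-- undo_retention  = 10800    -- seconds; at least as long as your longest running query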

Still confused over block cleanout.

Nick, October 21, 2003 - 3:05 pm UTC

You say:
"I can get an ora-1555 against a table EVEN THOUGH there have been no modifications to the table since my query began (the last modification to that table committed
before my query began and NO ONE else has touched that table). "

How so? It seems to me that there should be no problem generating the read-consistent view in this case. What am I missing ?

Tom Kyte
October 21, 2003 - 5:43 pm UTC

the problem is that it cannot TELL at that point. The information it needs to verify that it does not need to perform a read-consistent rollback operation is gone.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:895410916429

is an example

ORA-1555 with SCN

A reader, December 27, 2003 - 1:34 am UTC

Hi,

    On our production server we had 45 rollback segments.
    Every day at a particular time we get the following error, sometimes with a SELECT statement and sometimes with an INSERT..SELECT statement:
    ORA-01555 caused by SQL statement below (SCN: 0x0001.8bbac31a):
    Thu Dec 25 22:30:20 2003

    In the error, we get an SCN number but not any rollback segment number.

    During that period we ran a statspack report for about 45 minutes. In the report, Pct Waits is 0 (zero) and Avg Active size is also not crossing Optimal Size. But we still received the error, so we decided to increase the number of rollback segments to 75 with INITIAL 10M and NEXT 5M. The problem is not solved.
    We tried to check whether it is happening because of the block cleanout process by executing individual selects on the tables, but we got the same redo size before and after the select statement.

    We don't want to assign a particular rollback segment to the transaction.

    How do we solve this problem, and what are the things we need to look at now?

    
    ->A high value for "Pct Waits" suggests more rollback segments may be required
    ->RBS stats may not be accurate between begin and end snaps when using Auto Undo
      managment, as RBS may be dynamically created and dropped as needed
    
            Trans Table       Pct   Undo Bytes
    RBS No      Gets        Waits     Written        Wraps  Shrinks  Extends
    ------ -------------- ------- --------------- -------- -------- --------
         0           10.0    0.00               0        0        0        0
         2          256.0    0.00         655,860        0        0        0
         3           20.0    0.00             806        0        0        0
         4           44.0    0.00           7,242        0        0        0
         5           52.0    0.00           3,434        0        0        0
         6           48.0    0.00           8,248        0        0        0
         7           60.0    0.00           4,000        0        0        0
         8           24.0    0.00           1,050        0        0        0
         9           24.0    0.00           3,818        0        0        0
        10           14.0    0.00             368        0        0        0
        11           20.0    0.00           1,694        0        0        0
        12           30.0    0.00           4,724        0        0        0
        13           30.0    0.00           2,322        0        0        0
        14           22.0    0.00             888        0        0        0
        15           14.0    0.00             276        0        0        0
        16           22.0    0.00           1,108        0        0        0
        17           18.0    0.00             728        0        0        0
        18           18.0    0.00             552        0        0        0
        19           18.0    0.00             668        0        0        0
        20           31.0    0.00          19,268        0        0        0
        21           30.0    0.00           1,524        0        0        0
        22           18.0    0.00             588        0        0        0
        23           32.0    0.00           1,566        0        0        0
        24           30.0    0.00           4,958        0        0        0
        25           16.0    0.00             426        0        0        0
        26           20.0    0.00             702        0        0        0
        27           44.0    0.00           2,762        0        0        0
        28           52.0    0.00           9,528        0        0        0
        29           50.0    0.00           3,144        0        0        0
        30           62.0    0.00          12,064        0        0        0
        31           25.0    0.00           8,236        0        0        0
        32           24.0    0.00           1,114        0        0        0
        33           16.0    0.00             426        0        0        0
        34           20.0    0.00             746        0        0        0
        35           28.0    0.00           1,502        0        0        0
        36           28.0    0.00           2,840        0        0        0
        37           24.0    0.00           1,194        0        0        0
        38           16.0    0.00             642        0        0        0
        39           20.0    0.00             714        0        0        0
        40           18.0    0.00             740        0        0        0
        41           18.0    0.00             576        0        0        0
        42           18.0    0.00             716        0        0        0
        43           26.0    0.00           2,340        0        0        0
        44           30.0    0.00           2,546        0        0        0
        45           16.0    0.00             470        0        0        0
              -------------------------------------------------------------
    Rollback Segment Storage for DB: O05CSW3  Instance: O05CSW3  Snaps: 29 -30
    ->Optimal Size should be larger than Avg Active
    
    RBS No    Segment Size      Avg Active    Optimal Size    Maximum Size
    ------ --------------- --------------- --------------- ---------------
         0         401,408               0                         401,408
         2      15,720,448       5,245,490      15,728,640      47,177,728
         3      15,720,448       5,369,973      15,728,640      20,963,328
         4      15,720,448       5,303,413      15,728,640      36,691,968
         5      15,720,448       5,363,618      15,728,640      31,449,088
         6      15,720,448       6,849,381      15,728,640      57,663,488
         7      15,720,448       5,483,326      15,728,640      26,206,208
         8      15,720,448       5,532,050      15,728,640      26,206,208
         9      15,720,448       5,553,493      15,728,640      57,663,488
        10      15,720,448       5,267,081      15,728,640      31,449,088
        11      15,720,448       5,665,448      15,728,640      26,206,208
        12      15,720,448       5,268,269      15,728,640     110,092,288
        13      15,720,448       5,800,168      15,728,640      41,934,848
        14      15,720,448       6,006,413      15,728,640      47,177,728
        15      15,720,448       7,062,102      15,728,640      52,420,608
        16      15,720,448       5,705,462      15,728,640      41,934,848
        17      15,720,448       5,462,843      15,728,640      57,663,488
        18      15,720,448       5,309,849      15,728,640      31,449,088
        19      15,720,448       6,119,094      15,728,640     445,636,608
        20      15,720,448       5,268,105      15,728,640     445,636,608
        21      15,720,448       5,450,980      15,728,640      47,177,728
        22      15,720,448       7,020,310      15,728,640      47,177,728
        23      15,720,448       7,002,246      15,728,640      41,934,848
        24      15,720,448       5,346,636      15,728,640     183,492,608
        25      15,720,448       5,987,975      15,728,640      41,934,848
        26      15,720,448       5,333,102      15,728,640      31,449,088
        27      15,720,448       5,462,412      15,728,640      47,177,728
        28      15,720,448       5,247,884      15,728,640      47,177,728
        29      15,720,448       5,241,468      15,728,640      36,691,968
        30      15,720,448       5,380,344      15,728,640      31,449,088
        31      15,720,448       5,281,938      15,728,640      26,206,208
        32      15,720,448       5,395,238      15,728,640      31,449,088
        33      15,720,448       5,290,287      15,728,640      62,906,368
        34      15,720,448       5,256,007      15,728,640      41,934,848
        35      15,720,448       5,243,383      15,728,640      20,963,328
        36      15,720,448       5,266,780      15,728,640      41,934,848
        37      15,720,448       5,295,158      15,728,640      73,392,128
        38      15,720,448       5,668,529      15,728,640      52,420,608
        39      15,720,448       6,501,956      15,728,640      47,177,728
        40      15,720,448       5,570,874      15,728,640      20,963,328
        41      15,720,448       5,393,767      15,728,640      31,449,088
        42      15,720,448       5,366,338      15,728,640      36,691,968
        43      15,720,448       5,476,919      15,728,640      26,206,208
        44      15,720,448       5,973,525      15,728,640      41,934,848
        45      15,720,448       5,425,109      15,728,640      52,420,608
          -------------------------------------------------------------
          
SQL> select sum(bytes),tablespace_name from dba_data_files where tablespace_name like 'RBS_%' group by tablespace_name;

SUM(BYTES) TABLESPACE_NAME
---------- ------------------------------
4194304000 RBS_01
4194304000 RBS_02

Thanks,
 

Tom Kyte
December 27, 2003 - 9:42 am UTC

the ora 1555 means "rbs still not big enough"

I would not recommend adding MORE rbs -- I would suggest going back to your original number (or less -- depending on the average active number of transactions) and set their MINEXTENTS higher. That is, have fewer large RBS's.

delayed block cleanout and dbwr

A reader, February 05, 2004 - 5:33 am UTC

Hi

After reading some documents about delayed logging block cleanout, some doubts came to my mind.

When a block is modified and fast committed (by setting the RBS transaction slot to committed), the block won't be cleaned out until it is visited again. I wonder, if the block is never visited again, how does Oracle make the change permanent? Is it DBWR writing the dirty buffers to disk? None of the documents I have read refers to dirty blocks written by DBWR, which I think performs a key role as well. Don't you think so?

Tom Kyte
February 05, 2004 - 7:43 am UTC

if the block is cleaned out after the commit in the buffer cache, it will be written clean by dbwr when it checkpoints it naturally later on.

note 45895.1

A reader, February 05, 2004 - 10:18 am UTC

Hi

this note says this:

Delayed block cleanout on old committed updates. An update operation completes and commits; the updated blocks are not touched again until a long-running query begins. Delayed Block Cleanout (DBC) has never been done on the blocks. This can result in a scenario which happens only under specific circumstances in VLDB, causing ORA-01555 errors when NO updates or inserts are being committed on the same blocks a query is retrieving.
All of the following must be true for an ORA-01555 to occur in this case:

(i) An update completes and commits and the blocks are not touched again until...
(ii) A long query begins against the previously updated blocks.

Here it's saying a block is never cleaned until it's visited?!?! Is it true?!

Tom Kyte
February 05, 2004 - 7:20 pm UTC

if the block cleanout was delayed -- that is correct.

If you have my book "Expert One on One Oracle" -- I go into this in some depth and even set up cases to show you how to get these "delayed block cleanouts" to happen in a test scenario.

ORA-01555

atul, April 02, 2004 - 12:54 am UTC

Hi,

I'm getting this famous error (ORA-01555) while executing the following query:

set transaction use rollback segment trs;
select distinct(x.table_name) from dba_tables x , (
(select table_name,column_name,position from user_cons_columns
where constraint_name in
(select constraint_name from user_constraints x,user_snapshots y where
constraint_type='P' and x.table_name=y.name and
master_link like '%QATLO%' ) ));


Even after giving "set transaction" it is using another rollback segment.

I don't know why.
Also, why is a select statement using rollback at all?


Thanks,
Atul

Tom Kyte
April 02, 2004 - 10:02 am UTC

set transaction only counts for when you WRITE to rbs.

when you read (query), you need to read the RBS's that the others who have modified your data wrote to. You are reading their changes -- your query may need to read perhaps EVERY rollback segment in the system!

Every query might use RBS to provide the read consistent (eg: correct) answer.

The odds of a 1555 are governed by the size of your smallest RBS; your RBS's are too small for the work you do in your database.
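A quick sketch to check that on your own system (assumes SELECT on v$rollname and v$rollstat): list each rollback segment's current size and how often it has wrapped since startup -- the smallest, busiest one is the one most likely to hand you the 1555:

select n.name, s.rssize, s.wraps, s.extends, s.shrinks
  from v$rollname n, v$rollstat s
 where n.usn = s.usn
 order by s.rssize;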

commit list

A reader, July 09, 2010 - 3:30 pm UTC

Hi Tom,

What are "commit lists"? Where do they reside?
Are they just a list of pointers to the blocks that have just been modified? When the server process re-visits the modified blocks as part of commit activity, will it go to these lists to find the modified blocks to "clean" them?

Tom Kyte
July 09, 2010 - 5:07 pm UTC

are you talking of the use of the phrase "commit list" in my book Expert Oracle by Design -- or some other context?

If you mean in my book - I go over that in the book. It is a list of the last couple of blocks your transaction has modified. It is used to perform a commit cleanout of blocks.

If you mean elsewhere, I'd need a reference to see it in context to explain what they might mean.

A reader, July 09, 2010 - 5:10 pm UTC

Correct - from your book - Expert Oracle Architecture,
while explaining block cleanout.

Tom Kyte
July 09, 2010 - 7:03 pm UTC

then it is explained therein. It is a list of blocks maintained by your transaction - the list is of the last few blocks your transaction modified. When you commit, we walk that commit list of blocks specific to your transaction and, if they are still in the cache, we clean them out.

That means that for most small OLTP transactions you never have "delayed block cleanouts"; it is only for large transactions.
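A way to see that commit list at work (a sketch; T is a scratch table, the row counts are arbitrary, and the redo is measured with the same 'redo size' query shown earlier in this thread). After a small transaction the commit cleans its blocks, so a following full scan generates almost no redo; after a transaction that touches far more blocks than the commit list tracks (and than fit in the cache), the same scan generates redo as it performs the delayed cleanouts:

-- small transaction: a handful of blocks, cleaned out at commit time
update t set x = x where rownum <= 10;
commit;
select count(*) from t;    -- 'redo size' for this session barely moves

-- large transaction: many thousands of blocks, only the last few are on the commit list
update t set x = x;
commit;
select count(*) from t;    -- 'redo size' jumps: the scan is doing the delayed cleanouts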

A reader, July 09, 2010 - 7:07 pm UTC

I have clearly understood the concept of 'commit list' after reading your EXCELLENT book

but I am just interested to know -- how are they implemented internally? Are they just pointers to the blocks in the buffer cache?
Tom Kyte
July 09, 2010 - 7:10 pm UTC

just a list of DBA's (data block addresses) to blocks that may or may not be in the cache. In the book I describe how the list is walked and IF the block is still in the cache - we clean it out - else we don't.

A reader, July 09, 2010 - 7:18 pm UTC

THANKS TOM ,,,

have been watching your photos on http://www.flickr.com/photos/tkyte/show ,,,,,,,,,, VERY long list of photos ...
watching for the last two hrs :) still not finished
nice ...

A reader, July 15, 2010 - 10:23 am UTC

I like the photos too, especially the cute cat.
Now -- in 2010 -- I wonder if people post here more for interacting with Tom Kyte (as a symbol) than for asking technical questions.
That might be an answer to your concern about the lack of new questions, in contrast to the increasing number of comments.
=D