

Question and Answer

Tom Kyte

Thanks for the question, Anthony.

Asked: February 01, 2002 - 9:06 pm UTC

Last updated: January 08, 2004 - 11:20 am UTC

Version: 8.1.6

Viewed 1000+ times

You Asked

Hi tom ,

I understand the concept behind the "snapshot too old" error. My question is: can we anticipate beforehand whether we are going to encounter this error?

I tried to keep a table in the KEEP buffer pool. Where can I query whether the table I pinned is already in that buffer pool?


Tnx TOM....

and Tom said...

If your rollback segments wrap faster than your longest query runs -- you will hit an ORA-01555 at some point.

So, yes, you can anticipate this. You should be able to say how fast your RBS wraps and how long your longest running query is. If your query runs longer than the time to wrap, you are a 1555 candidate.

It doesn't matter if you buffer the table -- we need the ROLLBACK to give you the right answer. If you overwrite the rollback, it quite simply doesn't exist -- buffered or not.
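A rough way to see how fast your rollback segments wrap is to sample V$ROLLSTAT over an interval and watch the cumulative counters climb. A sketch (the join and the interpretation are illustrative, not part of Tom's answer):

```sql
-- WRAPS counts how many times writes have wrapped from one extent to
-- the next within a rollback segment. Snapshot these numbers, wait a
-- representative interval, and snapshot again: a fast-rising WRAPS on
-- a small segment means undo is being overwritten quickly.
SELECT n.name, s.rssize, s.writes, s.wraps, s.extends
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;

-- Compare the time between wraps against your longest-running query
-- (e.g. by timing the report that fails). If the query runs longer
-- than the wrap interval, it is an ORA-01555 candidate.
```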


Comments

Snapshot too old

Anthony K Velarde, February 02, 2002 - 2:48 am UTC

So what you're saying is, I can still prevent this Oracle error from happening? If so, can you give the full details, steps, or even scripts?

tnx very much.......

Tom Kyte
February 02, 2002 - 5:38 pm UTC

Mark is dead on in the following comment, all you need to do is ensure you have sufficiently sized, PERMANENTLY allocated rollback segment space.

In Oracle9i, the steps are easier:

1) determine your longest running query
2) set UNDO_RETENTION to that time plus a little more, and use an undo tablespace instead of rollback segments.
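In 9i, V$UNDOSTAT records the longest query observed in each collection interval, which makes step 1 easy to automate. A sketch (assuming automatic undo management is enabled; the retention figure is made up for illustration):

```sql
-- Step 1: longest query (in seconds) observed in the recent history.
SELECT MAX(maxquerylen) AS longest_query_secs
  FROM v$undostat;

-- Step 2: set retention to that time "plus a little more", e.g. 20%.
-- Here assuming the query above reported roughly 3600 seconds:
ALTER SYSTEM SET undo_retention = 4320;
```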

How to size rollback

Mark J. Bobak, February 02, 2002 - 11:01 am UTC

1.) Determine the length of time of your longest running query.
2.) Determine the total volume of undo your database writes in that length of time.
3.) That number should be the total size of the sum of the OPTIMAL of all your rollback segments.
4.) Determine the largest number of concurrent transactions you will encounter, and divide by 4. That's the number of rollback segments you need.
5.) Take the number from step 3 and divide by the number in step 4; that's the size of OPTIMAL for your rollback segments. You may want to put a little (say, 5%) padding in the number from step 3, to avoid resize thrashing. (If you inadvertently make OPTIMAL just slightly too small, your rollback segments will shrink/wrap themselves to death, and you'll get a flood of ORA-1555.)

The above should properly size your rollback segments to avoid ORA-1555.

6.) What's the largest peak transaction volume your database needs to support?
7.) How many concurrent peak transactions do you expect?
8.) Multiply the number from step 6 by the number from step 7.
9.) The total size of your rollback tablespace should be large enough to contain the sum of the OPTIMALs of all your rollback segments (from step 3) plus the number from step 8.

The above should allow you to properly size the rollback tablespace to avoid ORA-1650.

Hope that helps....
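Plugging hypothetical numbers into Mark's recipe above (every figure below is made up purely for illustration):

```sql
-- Step 1: longest query runs 30 minutes.
-- Step 2: the database writes ~600 MB of undo in those 30 minutes.
-- Step 3: sum of OPTIMALs = 600 MB, plus 5% padding = 630 MB.
-- Step 4: peak of 40 concurrent transactions / 4 = 10 segments.
-- Step 5: OPTIMAL per segment = 630 MB / 10 = 63 MB.

CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 63M);

-- ...repeat for rbs02 through rbs10, then size the rollback
-- tablespace to hold the 630 MB of OPTIMALs plus the peak
-- transaction volume from step 8.
```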

Just Clarify

Anil Pant, January 07, 2004 - 6:46 am UTC

In the earlier response you said:

In Oracle9i, the steps are easier:
1) determine your longest running query
2) set the undo_retention_period to that time plus a little more and use an undo tablespace instead of rollback segments.

"Use an undo tablespace instead of rollback segments" -- this line is a bit confusing. As far as my knowledge goes, we don't have anything called a rollback segment (though internally RBS is used) in Oracle 9. Please clarify.

Tom Kyte
January 07, 2004 - 8:23 am UTC

Rollback segments as they existed in 8i and before are fully supported and exist in 9i.

YOU make the choice:

a) do it yourself and create RBSs the old-fashioned way
b) use undo_management = auto and let the database do it.
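Option (b) comes down to a couple of initialization parameters plus an undo tablespace. A sketch (the tablespace name, file path, size, and retention value are all illustrative):

```sql
-- In the init.ora / spfile:
--   undo_management = AUTO
--   undo_tablespace = UNDOTBS1
--   undo_retention  = 4320    -- seconds; longest query plus a margin

CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/mydb/undotbs01.dbf' SIZE 500M;
```

With this in place, the database creates and manages the undo segments itself; there are no OPTIMAL settings to tune.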

fetch across commit

Darren, January 07, 2004 - 4:25 pm UTC

Tom,

if you have a fetch-across-commit scenario, won't that artificially create a 1555 no matter what the size of your RBS is? So you should also look to root out all the code that does this sort of thing:

open cursor
loop
    some dml;
    commit;
end loop;
close cursor;

in addition to sizing your rollback space.


Tom Kyte
January 08, 2004 - 11:20 am UTC

Not really -- if your RBS is big enough to not wrap during the time you are fetching over commits, you should be OK.

The fetch over commits allows undo to be reused (undo you've generated). Meaning if the RBSs AREN'T big enough to not wrap, YOU won't cause them to extend (since you are committing and releasing the storage).

If the RBSs started "more than large enough", they wouldn't have extended had you NOT committed in the first place -- so, it won't happen.

The thing is -- people typically COMMIT in the for loop specifically because they get the "failed to extend rbs XXX by YYY" error and figure "ah hah, the fix is 'commit' to release the rbs storage", but that just brings on the 1555 with a vengeance.

still, a horrible practice this is:

open cursor
loop
    some dml;
    commit;
end loop;
close cursor;

since most people never consider "what happens when we've done 100 out of 200 rows in the cursor and the system fails?" They cannot restart that process without double-processing the first 100 records OR skipping the last 100 (i.e. they corrupt their data). Use transactions for what they were designed for -- TO PROTECT your data integrity.
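One way to make such a process restartable is to mark each batch of rows done inside the same transaction as the DML, committing only with the cursor closed, so a crash can never leave half-processed work behind. A sketch only -- the table, the `processed` flag, and the batch size are hypothetical, not from Tom's answer:

```sql
-- Process in restartable batches: each batch opens its own cursor,
-- does its DML, marks the rows done, and commits. We never fetch
-- across the commit, and a rerun after a failure simply picks up
-- the rows still flagged 'N'.
DECLARE
    CURSOR c IS
        SELECT rowid FROM my_table
         WHERE processed = 'N' AND rownum <= 1000;
    TYPE rid_tab IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
    l_rids rid_tab;
BEGIN
    LOOP
        OPEN c;
        FETCH c BULK COLLECT INTO l_rids;
        CLOSE c;
        EXIT WHEN l_rids.COUNT = 0;
        FOR i IN 1 .. l_rids.COUNT LOOP
            -- some dml for this row ...
            UPDATE my_table SET processed = 'Y'
             WHERE rowid = l_rids(i);
        END LOOP;
        COMMIT;  -- work and markers succeed or fail together
    END LOOP;
END;
/
```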