Snapshot too old error - more explanation

Question and Answer

Tom Kyte

Thanks for the question, Ravi.

Asked: December 31, 2000 - 2:16 am UTC

Last updated: December 18, 2004 - 8:36 am UTC

Version: 815

Viewed 1000+ times

You Asked

Dear Tom,

I have a situation like this. We have a table with 30 columns and we would like to add another 7 columns to it. The table is large, roughly 400 million rows.

After adding the 7 columns we would like to update them for all rows using a tool. This tool commits after a predetermined number of rows.

At the same time, users also query the same table, but only the old 30 columns and not the new 7 columns. No user does any DML on this table. These queries also return a large number of rows and take 2 to 3 hours to complete in a data warehouse environment.

I read in a backup and recovery book that for an update of a row, only the prior image of the updated columns is stored in the rollback segment -- not the entire row. I therefore assume that, in this case, only the prior image of the 7 updated columns would be stored.

I would like to clarify the following:

At time t1, a query q1, started reading the old 30 columns

At time t2, an update occurs on the 7 newly added columns for some rows

At time t3, a commit occurs and the portion of the rollback segment used by this transaction is wrapped (or overwritten) for some other reason.

At time t4, the same query q1, tries to read the 30 non updated columns of the updated rows.

Will Oracle check the rollback segment for read consistency and give the "snapshot too old" error, or
will Oracle not go to the rollback segment at all, since the query does not try to read the updated columns?

Could you please explain how Oracle maintains read consistency.


Please give your comments at the earliest opportunity.

Thanks
Ravi





and Tom said...

We do read consistency at the block level, not the column level, so yes -- the rollback segments would be checked at that point in time. We reconstruct the block as it appeared at the time your query began. We do not check whether the particular COLUMNS you want have been modified (in fact, once the rollback has been overwritten there would be insufficient data to check). There is no way for us to do this.
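to illustrate (a made up, minimal two session example -- the table T and the columns OLD_COL/NEW_COL are just for illustration):

-- session 1: open a long running query that reads ONLY the old column
variable c refcursor
exec open :c for select old_col from t

-- session 2: update ONLY the new column and commit; repeat until the
-- before images in the rollback segment are overwritten
update t set new_col = new_col + 1;
commit;

-- session 1: every fetch must rebuild each block as of the query's
-- start time. the block was modified, so the rollback segment is
-- consulted even though NEW_COL was never selected -- and if the
-- undo is gone, the fetch fails with ora-1555
print c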

Your choices are:

o suffer the inevitable ora-1555's you will get on the long running queries.

o size a rollback segment sufficiently large to hold the rollback generated by the update WITHOUT committing, and use this rollback segment specifically. This will avoid the ora-1555's during the update (see the first sketch after this list). Drop the rbs after you no longer need it.

o size the rollback segments to be large enough to not wrap in a 3-4 hour period. Make them this big BEFORE you start updating (do not rely on them extending -- extend them yourself). This will avoid the ora-1555.

o since adding 7 columns and then updating them will undoubtedly cause MASSIVE row chaining (almost every row in this table will migrate, I am sure), consider building the table in its entirety off to the side as a combination of the OLD table and the NEW updated rows -- do this as a CREATE TABLE AS SELECT with the unrecoverable option (instead of updating these 7 columns, perform the logic in the SELECT portion of the create table as select). Index this table using parallel unrecoverable index creates. Rename the OLD table to T_OLD, rename this new table to T, and drop the old table after you are convinced all is well. In order to do the DDL to rename, you will have to find a short period where no one is querying the table. This will result in a nicely rebuilt, unchained table, no ora-1555's, and a very very short downtime window (see the second sketch after this list).
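a minimal sketch of the dedicated rollback segment approach (the names BIG_RBS and RBS_TS, and the sizes, are made up -- size yours from the undo the update actually generates):

create rollback segment big_rbs
  tablespace rbs_ts
  storage ( initial 100m next 100m minextents 20 );

alter rollback segment big_rbs online;

-- must be the first statement of the transaction.  note: if the tool
-- commits every N rows, it would have to re-issue this after every
-- commit -- hence the advice to do the update WITHOUT committing
set transaction use rollback segment big_rbs;

update t set new_c1 = 0, new_c2 = 0;   -- stands in for all 7 new columns
commit;

-- once the long running queries no longer need the before images
alter rollback segment big_rbs offline;
drop rollback segment big_rbs;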
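and a sketch of the rebuild approach (T, T_NEW, the column names and the parallel degree are made up; NOLOGGING is the newer spelling of UNRECOVERABLE):

create table t_new
  parallel ( degree 8 )
  nologging
as
select t.*,
       0 new_c1, 0 new_c2   -- the logic the update would have performed
  from t;

create index t_new_idx on t_new(c1) parallel ( degree 8 ) nologging;

-- find the quiet moment -- the renames themselves are instantaneous DDL
rename t to t_old;
rename t_new to t;

-- after you are convinced all is well
drop table t_old;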


Comments

Excellent Solution, Small question to this

Sreekanth, July 28, 2001 - 2:24 pm UTC

The solution is excellent. I guess the downtime of the table is affordable compared to the frequent failures with the 1555 error. It also avoids future performance problems on the table.

Hi Tom,

Just in case, let's assume the table is not partitioned. How about partitioning it while we rebuild the table? Will that reduce the downtime in this scenario, or does it have any other side effects? Please clarify.

Thanks


Long running queries

Js, December 17, 2004 - 2:25 pm UTC

Hi,

As you said, long running queries can be a reason for ora-1555. Could you please explain how we can find the long running queries in our database.

Actually, we are getting ora-1555 frequently on our production server. One month back everything was fine.

Is there any way to find out which long running query or heavy update is generating the ora-1555?

Thanks

Tom Kyte
December 18, 2004 - 8:36 am UTC

if you are using 9i, v$sql shows runtimes (elapsed time, cumulative across all executions of the query).
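for example, a quick sketch of the sort of query you could use (elapsed_time in v$sql is reported in microseconds):

select *
  from ( select sql_text, elapsed_time/1000000 total_secs, executions
           from v$sql
          order by elapsed_time desc )
 where rownum <= 10;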

The long running queries are not the cause of the 1555 so much as they are its victims.

What causes a 1555?

simply put: insufficient undo has been configured for the system.

What makes a 1555 for a long running query stop happening?

simply put: one of two things. A) make the query run faster or B) configure sufficient undo for your system.


the long story is, of course, above.

If you are using 9i and up, use AUM (automatic undo management). This will let you use the v$undostat view to monitor, in a very detailed fashion, how well you are doing in this area: how many queries failed with 1555's and when, and how much more undo you would actually need to support your system's workload.
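for example (v$undostat keeps one row per ten minute interval; these columns are all in the standard view):

select begin_time, end_time,
       undoblks,      -- undo blocks generated in the interval
       maxquerylen,   -- longest running query in the interval, in seconds
       ssolderrcnt    -- ora-1555's raised in the interval
  from v$undostat
 order by begin_time;

as a rough rule, undo_retention should exceed the longest running query you need to support -- e.g. "alter system set undo_retention = 14400;" for a four hour query window.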