

Question and Answer

Tom Kyte

Thanks for the question, Amit.

Asked: May 21, 2000 - 7:54 am UTC

Last updated: July 29, 2007 - 1:25 pm UTC

Version: 8.0.6.0.0

Viewed 1000+ times

You Asked

Hi Tom,

Could you please clarify the following question about rollback segments?
I am working on Oracle 8.0.6.0.0 (on NT).

Oracle Enterprise Manager help gives the following recommendations for
creating rollback segments:

1. Allocate one rollback segment for each 10 concurrent OLTP (online transaction processing)
transactions and one rollback segment for each concurrent batch job.

2. Keep extents within a rollback segment the same size by specifying STORAGE parameters
where NEXT=INITIAL.

3. It is also recommended that you set the MINEXTENTS parameter value to 20.
Because rollback segments are logically regarded as a circular queue of extents,
they are required to have MINEXTENTS value of at least 2.
The more extents the rollback segment has, the less likely it is that the rollback
segment will require an extent allocation that could have been avoided.

My questions are:

1. Is there any method for calculating the "right" NEXT=INITIAL storage
parameters for an OLTP system, or for monitoring whether existing values
are appropriate?
2. Assuming there is a lot of RAM in the system, is it recommended to put
the OLTP rollbacks in the KEEP buffer pool?
3. If batch jobs are run against the OLTP database, larger rollbacks need
to be used (one per concurrent batch job?).
In order to avoid getting "snapshot too old" errors, are there any tips
for the OLTP rollbacks?


Thanks,
Amit.

and Tom said...

1) Is there a method for calculating what is the right extent size?

The following advice is from support note 62005.1, "Creating, Optimizing, and Understanding Rollback Segments"

<quote>
To find out the size and number of rollback segments needed
to handle normal processing on the database you need to do
some testing. A good test is to start with small rollback
segments and allow your application to force them to extend.
Here are the steps to run such a test:

o Create a rollback segment tablespace.

o Select a number of rollback segments to test and create
them in the tablespace.

o Create the rollback segments so that all extents are the
same size. Choose an extent size that you suspect will
need between 10 to 30 extents when the segments grow to
full size.

o Each rollback segment should start with two extents
before the test is run. This is the minimum number of
extents any rollback segment can have.

o Only the rollback segments that you are testing and the
SYSTEM rollback segment should be online.

o Run transactions and a load typical of the application.

o Watch for rollback segment contention.

o Watch for the maximum size a rollback extends to.

The maximum size any one of the rollback segments reaches
during the test is the size you want to use when
configuring. This size we will call the "minimum coverage
size". If you see contention, adjust the number of segments
and rerun the test. Also, if the largest size requires
fewer than 10 extents, or more than 30, it is a good idea to
lower or raise the extent size, respectively, and rerun the
test. Otherwise, space may be getting wasted during the
test and this would throw the number off.

At the end of this test, you will have some good base
estimates for the size and number of rollback segments
needed for normal processing. However, handling large
transactions has not been discussed yet. Calculating the
segment size needed for these types of transactions as well
as how to meet the requests for the calculated space will be
addressed.

For sizing rollback segment extents, it is strongly
recommended that each extent be of the same size. The minimum and maximum number of extents for an individual segment should be around twenty for best performance.
</quote>

I like to create my rollback segments with around 20-25 minextents and set maxextents to 255 or so. I place each rollback segment in its own tablespace and allow each tablespace to have 1 autoextendable datafile allocated to it. In this fashion, the rollback segments will grow as needed, if space is available on the system -- up to 255 extents.

Periodically (typically before backups) I manually offline and then drop each rollback segment, shrink the datafile associated with its tablespace back to the right size for 20 extents and recreate the rollback segment. This allows the infrequent "large" transaction to dynamically grow the rollback segment without having to interject a "set transaction use rollback segment BIG_RBS" into it.
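A rough sketch of that layout in DDL (the tablespace name, datafile path, and extent sizes here are illustrative, not from the original answer):

```sql
-- One tablespace per rollback segment, single autoextendable datafile.
-- Names, paths, and sizes are placeholders.
CREATE TABLESPACE rbs01_ts
  DATAFILE '/u02/oradata/ORCL/rbs01_ts.dbf' SIZE 20M
  AUTOEXTEND ON NEXT 1M MAXSIZE 256M;

CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs01_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 MAXEXTENTS 255);

ALTER ROLLBACK SEGMENT rbs01 ONLINE;

-- Periodic maintenance (e.g. before backups): offline, drop,
-- shrink the file back, recreate.
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
DROP ROLLBACK SEGMENT rbs01;
ALTER DATABASE DATAFILE '/u02/oradata/ORCL/rbs01_ts.dbf' RESIZE 20M;
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs01_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 MAXEXTENTS 255);
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
```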


2) "is it recommended to put the OLTP rollbacks in the KEEP bufferpool"

Well, the only DDL statements that accept the buffer_pool clause are:

o CREATE TABLE table_name... STORAGE (buffer_pool_clause)
o CREATE INDEX index_name... STORAGE (buffer_pool_clause)
o CREATE CLUSTER cluster_name... STORAGE (buffer_pool_clause)
o ALTER TABLE table_name... STORAGE (buffer_pool_clause)
o ALTER INDEX index_name... STORAGE (buffer_pool_clause)
o ALTER CLUSTER cluster_name... STORAGE (buffer_pool_clause)

so, the answer to this is no. There is no syntax to support this construct.

3) "if batch jobs are run against the OLTP database, larger rollbacks need to be used. In order to avoid getting snapshot too old errors, are there any tips for the OLTP rollbacks?"

This is a tricky one to answer. There are two concepts here:

A) you need a larger rollback segment for the big batch transaction, larger than you need for the OLTP transactions in any case, just to handle the volume of updates it performs.

B) you need larger rollback segments for the OLTP transactions to use as well during the batch job to support the consistent read mechanism employed by Oracle -- not only does the batch job need to have a big rollback segment but you want to try and make it so that your other rollback segments used by the OLTP job do not wrap while the batch job is processing else it (the batch job) will get the ORA-1555 when it tries to reconstruct some data from the RBS's used by the OLTP jobs.


What that means in general is: any of your rollback segments should be large enough to accommodate your batch job (or at least allow it to grow as described above) AND your rollback segments should be large enough so that during the period of time the batch job is executing, the OLTP transactions do not cause them to "wrap" around -- overwriting data the system no longer needs to perform a rollback with, but that your batch job might need to perform a consistent read.
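One way to keep an eye on this is to watch the per-segment wrap and extend counts while the batch window is open. This query against the standard V$ROLLNAME/V$ROLLSTAT views is a monitoring sketch, not from the original answer:

```sql
-- A segment whose WRAPS count climbs steadily while a long batch job runs
-- is a candidate source of ORA-01555 for that job.
SELECT n.name, s.rssize, s.wraps, s.extends, s.shrinks, s.waits, s.gets
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn
 ORDER BY s.wraps DESC;
```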



Comments

Sagi, October 03, 2002 - 10:11 am UTC

Hi Tom !

You Said
"Periodically (typically before backups) I
manually offline and then drop each rollback segment, shrink the datafile
associated with its tablespace back to the right size for 20 extents and recreate the rollback segment."

q1) Normally the hot backups are scheduled automatically during some off-peak time. Can we still follow this method with hot backups?

q2) Secondly, if it's a 24x7 database and we drop the rollback segments while someone is trying to work, would they not encounter an error?

Regards,
Sagi


Tom Kyte
October 03, 2002 - 10:32 am UTC

q1) sure

q2) the person trying to drop it would, yes. they would have to wait or just skip it and do it next time. it is not vital to drop and recreate it.

A few clarifications (please) and guidelines

Yogeeraj, October 30, 2002 - 8:19 am UTC

Hi,
a.
Can you please clarify the following?
<quote>
I like to create my rollback segments with around 20-25 minextents and set maxextents to 255 or so. I place each rollback segment in its own tablespace and allow each tablespace to have 1 autoextendable datafile allocated to it.
</quote>
- Do you mean One Rollback segment per tablespace? (Each equi-size?)

b. I am running an OLTP and reviewing my RBS tablespace which is DMT. The current size is 1.5 GB. Can you please guide me? I need to know if it has been properly sized and how to plan for the new rollback segments using statistics from the current operating environment.

c. My OLTP system has a large number of users (~300). How will I know if any of my users are hitting ORA-01555?

Thank you in advance for your explanations.

Regards
Yogeeraj


Tom Kyte
October 31, 2002 - 4:25 pm UTC

a) yes, 1 RBS / tablespace (before 9i anyway -- in 9i, undo tablespaces are what I've been using)

b) do you get more wraps than YOU want? do you get lots of waits on undo segment headers? do you frequently get ora-1555's? those are questions you need to ask yourself.

c) they will tell you. if they don't tell you, they must not care.... Or the developers can instrument the code to capture and handle errors and log them somewhere.
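A sketch of such instrumentation, assuming a hypothetical error_log table with columns (logged_at, err_code, err_text):

```sql
-- Trap ORA-01555 in application code and log it before re-raising.
-- The error_log table is an assumption for illustration.
DECLARE
  snapshot_too_old EXCEPTION;
  PRAGMA EXCEPTION_INIT(snapshot_too_old, -1555);
BEGIN
  -- long-running query or batch work goes here
  NULL;
EXCEPTION
  WHEN snapshot_too_old THEN
    INSERT INTO error_log (logged_at, err_code, err_text)
    VALUES (SYSDATE, SQLCODE, SQLERRM);
    COMMIT;
    RAISE;  -- re-raise so the failure is still visible to the caller
END;
/
```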




rbs tablespace maxextents unlimited?

Jerry, May 12, 2003 - 11:03 pm UTC

Hi, Tom,

Follow up questions on ORA-01555.

I have a database with rbs tablespace init=next=256k, maxextents unlimited, minextents 20. I created 20 rollback segments in the rbs tablespace.

When I query dba_tablespaces, it has maxextents=2147483645 for the rbs tablespace. When I query dba_rollback_segs, each rollback segment shows maxextents=32765. 32765 x 20 = 655,300 != 2,147,483,645 -- why?

The database experiences sporadic ORA-01555 errors when running a procedure. The code is like:

declare
cursor c1 is
select a, b from table1 order by a;
begin
open c1;
loop
update table1
where xxxx;
commit;
end loop;
end;
/

Is this a 'fetch across commit'? I read from metalink that 'add an order by statement to the query will create a temp segment to hold the results of the order by and will not require consistent read'

I can't understand why I could not re-produce the ORA-01555 error. I used the same rbs segment and tested on a quiet system. If there were active transactions, it should always extend because of the rbs tablespace setting. How can I find out what caused the ORA-01555?

Thanks as always!





Tom Kyte
May 13, 2003 - 7:05 am UTC

32k is the MAX for rollback segments.


that is the CLASSIC fetch across commit -- classic cause you are updating the table you are reading and committing the changes. the UNDO you are overwriting (since you released it via commit) is the exact UNDO you yourself need. You have shot yourself in the foot.

Adding an order by *MIGHT*, MAY, can but does not have to do that. (and it does require consistent reads, they just happen before the updates is all -- but they are still all there).


committing in a cursor for loop like that

o is slow
o generates more undo/redo
o leads to total loss of data integrity
o is the second worst practice I see all of the time (not using bind
variables being the first)


In fact, I'd bet that procedural code should be a SINGLE update statement -- there should be no cursor for loop, no looping, no single row updates -- just

update table1 set..... where ....;


and the CLIENT that invoked this process should commit (I wish plsql didn't support the verbs commit and rollback, personally)



And in your case -- the rbs will *NOT* extend. only modifications cause an RBS to extend. You commit -- ending the transaction -- hence you will never cause it to extend.

could you please clarify

Jerry, May 13, 2003 - 10:12 am UTC

Thanks Tom for the explanation. That's very helpful!

Several more questions:

1. What's the definition of 'fetch across commit'? If I have the code like this, is that fetch across commit (cursor table opened being different from the table updated):

declare
cursor c1 is
select a, b from TABLE1;
begin
open c1;
loop
update TABLE2
where xxxx;
commit;
end loop;
end;
/

2. Could you please clarify:
'Adding an order by *MIGHT*, MAY, can but does not have to do that. (and it does
require consistent reads, they just happen before the updates is all -- but they
are still all there).'

3. If the application code caused the ORA-01555 error, why can't I re-produce the error with the same data? I re-ran the procedure and can't re-produce the error.

Thanks again!


Tom Kyte
May 13, 2003 - 10:33 am UTC

1) you have no fetches in there, so I'll assume you really mean

open c
loop
fetch c
exit when c%notfound
update
commit
end loop
close c


that is a fetch across commits. You open, fetch -- COMMIT -- fetch ....


2) what is to clarify?

a) an order by MIGHT cause us to materialize the result set before the first fetch. it might not.

b) an order by does not disable or affect the consistent read mechanism, we do consistent reads. It may affect WHEN they were done (all before the first fetch or row by row)

3) your rollback segments just happened to be big enough and didn't wrap around. I can reproduce 1555's at will -- if you have my book "expert one on one oracle", tons of examples in there.

Mike, May 13, 2003 - 4:54 pm UTC

Rollback Segment Stats for DB: xxxDB Instance: xxxdb Snaps: 226 -227
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
management, as RBS may be dynamically created and dropped as needed

Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
------ -------------- ------- --------------- -------- -------- --------
0 7.0 0.00 0 0 0 0
1 1,951.0 0.00 9,710,280 10 0 10
2 2,467.0 0.32 13,078,576 15 1 12
3 2,300.0 0.00 12,706,808 14 0 11
4 769.0 0.00 430,534 1 0 0
5 722.0 0.00 587,162 1 0 0
6 390,504.0 0.00 19,390,538 21 1 18
7 738.0 0.00 280,982 0 0 0
8 797.0 0.00 387,670 0 0 0
9 903.0 0.00 1,439,410 1 0 0
10 2,220.0 0.00 12,171,202 11 0 10
-------------------------------------------------------------
Rollback Segment Storage for DB: xxxDB Instance:xxxdb Snaps: 226 -227
->Optimal Size should be larger than Avg Active

RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- --------------- --------------- ---------------
0 401,408 0 401,408
1 12,705,792 4,504,095 14,802,944
2 15,851,520 3,974,749 15,851,520
3 13,754,368 4,166,232 13,754,368
4 2,220,032 2,729,481 20,045,824
5 2,220,032 1,885,905 13,754,368
6 21,094,400 5,934,144 21,094,400
7 3,268,608 3,362,346 12,705,792
8 5,365,760 2,217,066 14,802,944
9 2,220,032 3,267,504 21,094,400
10 14,802,944 3,994,321 14,802,944
-------------------------------------------------------------
Undo Segment Summary for DB: xxxDB Instance: xxxdb Snaps: 226 -227
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed

Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
---- -------------- ---------- -------- ---------- -------- ------ -------------
1 8,192 1,911 226 4 0 0 0/0/0/0/0/0
-------------------------------------------------------------


Undo Segment Stats for DB: xxxDB Instance: xxxdb Snaps: 226 -227
-> ordered by Time desc

Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
------------ ------------ -------- ------- -------- ------- ------ -------------
13-May 10:56 8,192 1,911 226 4 0 0 0/0/0/0/0/0
-------------------------------------------------------------
Hi Tom,
The above is from a statspack report (15-minute snapshot interval) for our 9iAS infrastructure DB, a web application.
I would like to know how you think the rollback segments performed, especially those that have extends/shrinks/wraps. What changes should I make?

Tom Kyte
May 13, 2003 - 5:16 pm UTC

the extends might mean they are too small; however, since there is fortunately no optimal, they'll fix themselves over time (they'll extend and stay there)

I don't see major cause for concern, you had some big transactions running -- doing "big things" during this period.

Number of rollback segments

Himnish Narang, March 12, 2004 - 8:47 am UTC

Hi Tom,

The following is the output from the database dictionary views v$waitstat and v$sysstat. According to Oracle, if any of the undo block counts is not zero then you need to add rollback segments.

We are using oracle 8i (8.1.7) on IBM AIX 4.3. Database size is 270GB and we are having 180 rollback segments. All the rollback segments are being used with optimal option and optimal is having a value of 10. The application is an OLTP and concurrent users connected at peak time is more than 800.

Please suggest whether we need to add the rollback segments or not.

Thanx for your time.


SQL> l
  1* select * from v$waitstat where class 
  2  in ('undo block','undo header','system undo block','system undo header')
SQL> /

CLASS                   COUNT       TIME
------------------ ---------- ----------
system undo header          0          0
system undo block           0          0
undo header               115         42
undo block               1930        301

SQL>
SQL> l
  1* select name,value from v$sysstat where name in ('consistent gets','db block gets')
SQL> /

NAME                         VALUE
------------------------ ----------
db block gets              31546735
consistent gets           696922744

SQL>
 

Tom Kyte
March 12, 2004 - 9:55 am UTC

impossible to say.

this is like asking "the last time I took a car ride, i stopped at 100 red lights. is that good or bad?"

no matter what you say to that, I'll prove that either the opposite is true OR that it is neither good or bad.


those stats are since the system was up. if you were up for 30 seconds, they might be "a lot". if you were up for a day or two, they are not.



Rollback segments for OLTP

Himnish Narang, March 13, 2004 - 4:08 am UTC

Hi Tom,

The statistics were taken when the database had run for 10 hours. The time they were taken was also the peak time. So according to the statistics given, the number of users logged on to the application (800), and the number of rollback segs that we currently have, please suggest how we should manage our rollback segments and also how we can monitor the performance/contention of the same.

Thanx in advance.

Tom Kyte
March 13, 2004 - 8:45 am UTC

Well, quick question to you -- are you meeting your performance goals in the first place, and if not, have you already done the application-level tuning?

But anyway, this is 301 seconds of wait over a 10 hour period. 0.15 seconds of wait on average per event. Tell me -- in your system -- is this the low hanging fruit, something to be concerned about? (do you really want more than 180 rbs's is my question).

rollback segments and cursor loops

Chris, April 28, 2004 - 10:26 am UTC

Tom,
You mentioned in one response that you wished PL/SQL didn't support commit/rollback. This was pertaining to a cursor for loop with a commit in it. You also said it generated more undo/redo and leads to a total loss of data integrity. Can you help me understand this as it pertains to my situation outlined below.

I have an archive database with a large amount of data (at least large as far as RBS's come into play). For example, the current table I am working with has 30 million rows. The table is partitioned by month from June 02 - May 04. Now I've read your article on doing the create, insert, rename, etc. when needing to update but am fighting like heck with the DBA's to get space to do that, so humor me and work within the constraint that we need to do this as an update (knowing we both disagree with this method). Due to RBS issues I tried an update a partition at a time method. Basically fetch partition names into PL/SQL table (to avoid 1555) then loop through issuing update and commit (pseudocode below).

for each partition loop
update all rows
commit
end loop

In testing I have still run into a) max number of extents reached in RBS, b) can't extend tablespace for RBS by xxx, etc.

Can you elaborate on your statements about not committing in the loop, as well as the undo/redo (using 8i by the way, so a redo issue I'd assume), and any general hints as to how to most effectively perform such an update (I was able to get the DBA to take all but one RBS offline and leave only the largest one out there).

Also, would dropping all indexes before performing the update and recreating them after help? Obviously dropping the indexes that contain columns being updated would help, but what about indexes on columns not being updated? Would dropping/recreating those improve performance for any reason?

Thanks in advance,
Chris

Tom Kyte
April 28, 2004 - 3:49 pm UTC

why not just run PDML? it would have the same effect -- each PQ slave would get its own RBS....


but -- if you cannot update a single PARTITION in a single update, then you would have the same issue with PDML.


disabling indexes on the columns being updated would definitely help the update speed and massively cut down on redo/undo generated (as long as the index was not needed to find the rows of course!).

If I were updating ALL ROWS, I would more likely:

o prevent modifications
o CREATE TABLE NEW as select <modifications>
o index new
o drop old
o rename new to old
o allow modifications

that way -- no redo, no rollback need be generated (schedule a backup if you skip redo!) at all. can be done in parallel, nologging -- all of the steps.
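A sketch of those steps, with placeholder table and column names (the 5-column update mirrors the simplified example elsewhere in this thread):

```sql
-- Placeholder names throughout. NOLOGGING skips redo for the copy;
-- schedule a backup afterwards, since NOLOGGING changes are not
-- recoverable from the redo stream.
CREATE TABLE big_table_new
  PARALLEL (DEGREE 4) NOLOGGING
AS
SELECT pk, col_a, 5 AS col_b   -- the "update" happens in the select list
  FROM big_table;

-- recreate indexes on big_table_new here (parallel/nologging if desired), then:
DROP TABLE big_table;
RENAME big_table_new TO big_table;
```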

PDML, partitions, rollback, cursors, etc.

Chris, April 29, 2004 - 9:30 am UTC

Tom, thanks for your help. I agree with the preferred method, but right now I can't gather the necessary space, so I'm doing the best I can with the update scenario in case our space issues don't work themselves out. I actually have gone to a parallel approach for the updates, but you raised an interesting point. You said each PQ slave would get its own RBS. I indicated that I had taken all but the largest RBS offline; should we not do that? What if the RBS's that go back online are small -- do I still run the risk of running out of space with a particular RBS?

Anyhow, here is my code (simplified the update for readability as the update being performed is more complicated, but not relevant), is this what you had in mind, or is there something I should change?

DECLARE
TYPE t_names IS TABLE OF VARCHAR2(30) INDEX BY BINARY_INTEGER;
l_names t_names;
CURSOR c IS
SELECT partition_name
FROM all_tab_partitions
WHERE table_name = 'MYTABLE';
l_count NUMBER := 1;
BEGIN
FOR crec IN c LOOP
l_names(l_count) := crec.partition_name;
l_count := l_count + 1;
END LOOP;
FOR l_count IN l_names.FIRST .. l_names.LAST LOOP
EXECUTE IMMEDIATE 'UPDATE /*+ parallel(mytable,4) */ mytable PARTITION(' || l_names(l_count) || ')' ||
' SET column1 = 5';
COMMIT;
END LOOP;
END;
/

Thanks,
Chris

Tom Kyte
April 29, 2004 - 10:34 am UTC

if you use PDML you'll want at least as many RBS's as PQ slaves -- so each can get its own (they'll round robin naturally)......

The probability of "unable to extend" (as is the probability of ora-1555) is always a direct function of your smallest available RBS, yes.


I had in mind

alter session enable parallel_dml;
update ....;


actually (see the server concepts guide, there is a chapter on this).

if I were updating every row in a really large table -- I would *seriously* use CTAS, drop old, rename new to old. No undo, No redo (if you want), parallel, no row migration. An update of the entire table is an "offline operation" anyhow (no one else is going to update it, you'll have locked it eventually!)

Unusual RBS's

RD, September 08, 2004 - 12:40 am UTC

Hi Tom,
This new company I have recently joined has a production database that has rollback segments like this ( oracle8i and its an OLTP database):-

ps8sys > select segment_name, tablespace_name, initial_extent, next_extent, min_extents, status
from dba_rollback_segs;

SEGMENT_NAME TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS STATUS
-------------------- -------------------- -------------- ----------- ----------- ----------------
SYSTEM SYSTEM 57344 57344 2 ONLINE
RBS_BIG RBS 536870912 65536 1 ONLINE
RBS0 RBS 2097152 65536 1 ONLINE
RBS1 RBS 2097152 65536 1 ONLINE
RBS2 RBS 2097152 65536 1 ONLINE
RBS3 RBS 2097152 65536 1 ONLINE
RBS4 RBS 2097152 65536 1 ONLINE
RBS5 RBS 2097152 65536 1 ONLINE
RBS6 RBS 2097152 65536 1 ONLINE
RBS7 RBS 2097152 65536 1 ONLINE
RBS8 RBS 2097152 65536 1 ONLINE
RBS9 RBS 2097152 65536 1 ONLINE
RBS10 RBS 2097152 65536 1 ONLINE
RBS11 RBS 2097152 65536 1 ONLINE
RBS12 RBS 2097152 65536 1 ONLINE
RBS13 RBS 2097152 65536 1 ONLINE
RBS14 RBS 2097152 65536 1 ONLINE
RBS15 RBS 2097152 65536 1 ONLINE
RBS_BIG01 RBS 268500992 65536 1 ONLINE


Here the next and initial are so different, and also I saw the DDL for rollback segment creation and there the OPTIMAL clause was NULL. These rollback segments don't look right to me. Or am I wrong?
If I am right, how can they in the present scenario cause any performance degradation, and why?
Please explain ......

Thanks as always,
RD.

Tom Kyte
September 08, 2004 - 9:22 am UTC

yes, they are a little strange.

optimal -- lack thereof -- a good thing.

rbs's normally have initial=next.

i don't like "rbs_big" concepts -- a sure-fire way to get ora-1555's


before hypothesizing on the ways something "could be causing a performance issue", one normally says "i've got a problem". do you have an identifiable performance-related issue?

Not understood

RD, September 08, 2004 - 5:19 pm UTC

Hi Tom,

I did not understand

1. Why the lack of 'optimal' would be good.
2. Also the fact that rbs_big would ever be the cause of the
ORA-01555 error. I thought that the snapshot too old error is
caused by small rollback segments.
3. No, we have no performance issues as yet, but because I'm doing the database review I have to point it out. But do I have to change them??? Or whatever you suggest..

Thanks yet again from ALL DBA's in the world.

Regards,
RD.

Tom Kyte
September 08, 2004 - 7:30 pm UTC

1) because optimal increases the odds of 1555's, optimal happens when you least want it to. the optimal size of an RBS is the size it grows to.

2) it is -- the probability of an ORA-1555 is directly proportional to the size of your SMALLEST rbs. Your large RBS's are an ill advised attempt to "save disk space".

3) don't change anything except in response to an identified problem would be my advice. If everything is working "awesome", there is nothing to do. I'd look at the standard operating procedures for recovery first and foremost and ensure that they are 100% bullet proof, tested (by you).

Thanks Tom

RD, September 08, 2004 - 7:44 pm UTC

Great advice ,
thanks again.
Regards,
RD.

Extends and wraps

RD, September 20, 2004 - 8:02 pm UTC

Hi Tom,

What exactly is the difference between extends and wraps??

ps8sys > select n.name,s.usn,s.extends ,s.wraps
2 from v$rollname n, v$rollstat s
3 where n.usn = s.usn;

NAME USN EXTENDS WRAPS
------------------------------ ---------- ---------- ----------
SYSTEM 0 0 1
RBS0 2 0 5785
RBS1 3 0 8862
RBS2 4 0 9028
RBS3 5 0 3312
RBS4 6 0 11758
RBS5 7 0 3646
RBS6 8 0 10241
RBS7 9 0 8473
RBS8 10 0 4301
RBS9 11 0 5882

NAME USN EXTENDS WRAPS
------------------------------ ---------- ---------- ----------
RBS10 12 0 4945
RBS11 13 0 4395
RBS12 14 0 5396
RBS13 15 0 5229
RBS14 16 0 4168
RBS15 17 0 6757
RBS_BIG 1 0 9719
RBS_BIG01 31 0 4461


This is a 8i database.
Wraps - lots
Extends - none.

Please clarify ....

Thanks in advance,
Regards,
RD.

Tom Kyte
September 20, 2004 - 8:56 pm UTC

wraps are when a transaction is made to span extents -- when a transaction "T1" used extent 1 and then needed to advance, or wrap, into extent 2.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1539206178475

extends would be how many times the rollback segment was made to grow to allow some transaction sufficient space.

Reader

A reader, January 14, 2005 - 2:10 pm UTC

What exactly is the definition for
RBS extends and RBS wraps in the Statspack report

wraps => number of times the RBS extends into the next
extent

then
extends => ?

Tom Kyte
January 14, 2005 - 8:39 pm UTC

wraps -- number of times a transaction spanned extents (started in extent 1, finished in extent 2)

extends -- number of times the rbs GREW, added more extents.

Reader

A reader, January 16, 2005 - 9:49 am UTC

By rbs GREW, do you mean the following?
If the RBS initially started with 30 minextents,
the rbs ring has 30 extents. If at some point in time
the rbs ring has 35 extents, then it grew by
5 more extents (value of the extends column = 5), because
it had to add extents to the ring at some point,
due to the fact that it could not use the next extent in the rbs ring because of an active TX in that extent.


Tom Kyte
January 16, 2005 - 11:40 am UTC

yes

Rollback segment extent sizes

balu, March 23, 2007 - 8:29 am UTC

I have seen in one of our databases that the initial and next extent sizes are 10 MB for the rollback segments. What do you suggest -- is having a 10 MB initial and next good or bad? Please advise.

SQL> select segment_name,INITIAL_EXTENT,NEXT_EXTENT,MIN_EXTENTS,PCT_INCREASE from dba_rollback_segs
  2   where tablespace_name='RBS';

SEGMENT_NAME                   INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS PCT_INCREA
------------------------------ -------------- ----------- ----------- ----------
RBS01                                10485760    10485760          20
RBS02                                10485760    10485760          20
RBS03                                10485760    10485760          20
RBS04                                10485760    10485760          20
RBS05                                10485760    10485760          20
RBS06                                10485760    10485760          20
RBS07                                10485760    10485760          20
RBS08                                10485760    10485760          20
RBS09                                10485760    10485760          20
RBS10                                10485760    10485760          20
RBS25                                10485760    10485760          20
RBS26                                10485760    10485760          20
RBS27                                10485760    10485760          20
RBS28                                10485760    10485760          20
RBS29                                10485760    10485760          20
RBS30                                10485760    10485760          20



2. Will delayed block cleanout cause ORA-01555? Can you please explain? This is causing me confusion.

Expecting your reply positively.

Regadrs

Balu.
Tom Kyte
March 23, 2007 - 10:07 am UTC

it is

o good
o bad
o neither good nor bad


it is fine. What you want to do is have sufficient permanently allocated rollback space in order to avoid ora-1555's



2) i just explained orally. But, you likely did not hear me.

If you would like to read what I just said, browse these articles:
http://asktom.oracle.com/pls/ask/search?p_string=delayed+block+cleanout

ORA-01562: failed to extend rollback segment number 7

A reader, July 25, 2007 - 11:28 pm UTC

Hello Tom,

We are running a huge insert .. close to a million records ....
insert as select ... from table1, table2@dblink .....

This table has a trigger based on a sequence for inserting the primary key.

Table in nologging mode and the insert has an append hint. Indexes present on this table.

In production environment we are getting the following error

ORA-01562: failed to extend rollback segment number 7
ORA-01650: unable to extend rollback segment RBS6 by 320 in tablespace RBS

To avoid this problem, we are looking at
creating a huge RBS and using that RBS for our transaction, and
disabling indexes.

DB Version: 8.1.7

My questions
1) Will query tuning in any way impact the rollback generated?
2) What other options can be considered to avoid ORA-01562.

Thank you

Tom Kyte
July 27, 2007 - 8:20 am UTC

you want to have equi-sized rollback segments, you don't want a 'huge' one (you'll just introduce ora-1555 into your vocabulary if you do).


if you use /*+ append */, we can and will skip undo generation for the TABLE
you'll always have undo generated for the indexes (but append will minimize the amount)

if you are adding a small number of records (like a million) to a really large table - then leave the indexes in place (more efficient than rebuilding them)

if you are inserting these rows into a nearly empty table (or empty table), sure - mark them unusable, load, rebuild them. if you used append, you'll generate almost no undo

Reducing rollback..

A reader, July 27, 2007 - 12:01 pm UTC

Tom,

I am loading data into an empty table...about a million records. These records are pulled from 2 databases, through a DBLink. I am looking at the following options and would like your comments on them.

DBVersion: 8.1.7

Table inserted into: InsertTable
No of records inserted : 1 Million

1) Option #1
a) Put table InsertTable in nologging mode.
b) Mark indexes unusable on InsertTable
c) Truncate table InsertTable
d) Do an insert into InsertTable (....) select .... from tabl1, table2@dblink......, with append hint
e) Rebuild indexes

2) Option #2
a) Drop table
b) Run create table as select .... from tabl1, table2@dblink......
c) Create indexes with execute immediate


Going with option #2, there would be no rollback and minimal redo generated. Would appreciate your comments.


Tom Kyte
July 27, 2007 - 3:24 pm UTC

1)

a) - won't reduce undo, will just make you have to coordinate this with the DBA, get their permission, and make sure they backup right after you are done. Not recommended.

b) see above
c) see above
d) see above

2)
a) don't like it
b) see a
c) see a


truncate, then mark unusable (as a truncate would reenable them all), insert /*+append*/, rebuild

suggest you do not use nologging unless you really truly understand what that means and take the correct actions right after.
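The suggested truncate / unusable / append / rebuild sequence might look like this sketch (the index name and column lists are placeholders, carried over from the question's ellipses):

```sql
TRUNCATE TABLE inserttable;            -- truncate re-enables indexes,
ALTER INDEX inserttable_idx1 UNUSABLE; -- so mark them unusable afterwards
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO inserttable (....)
SELECT .... FROM tabl1, table2@dblink ......;  -- direct path: minimal undo
COMMIT;

ALTER INDEX inserttable_idx1 REBUILD;
```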

Reducing rollback..

A reader, July 27, 2007 - 6:04 pm UTC

Thank you Tom for your valuable advice

separate tablespace

Pasko, July 28, 2007 - 1:30 pm UTC

Hi Tom,

What do you think about this solution regarding Data-loading with minimal redo/undo:

1--create a separate Tablespace in nologging mode
2--use only one file for that Tablespace
3--create a staging Table in that Tablespace
4--if DB Recovery is needed then we can restore/recover without this Tablespace because
it is used only for staging.

Regards



Tom Kyte
July 28, 2007 - 5:55 pm UTC

or just create a non-logging table, direct path into it.

in any tablespace
with as many files as you want

it would be the same - but your approach would work too - however, unless you needed to index this staging table, just use an external table - zero undo and redo.
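For the external-table route (available in 9i and later), a minimal sketch; the directory path, file name, and column list are invented for illustration:

```sql
-- The data stays in a flat file, so the staging step itself
-- generates zero undo and zero redo.
CREATE DIRECTORY stage_dir AS '/u01/stage';

CREATE TABLE staging_ext (
  id   NUMBER,
  name VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('staging.csv')
);
```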

Global Temporary Table

Pasko, July 29, 2007 - 9:40 am UTC

Hi Tom,

Thanks for your follow-up.
Yes, i had forgotten about External Tables :)

Regarding creating a non-logging table and direct pathing into it,
would this option not cause problems during recovery?
I mean, shouldn't we always back up the DB right after the direct-path load operation?

...and continuing the list of options with minimal undo/redo, there are also Global Temporary Tables.





Tom Kyte
July 29, 2007 - 1:25 pm UTC

if you need to recover the data - yes

but you said you didn't, so no.


only the unrecoverable blocks would not be recovered - everything else would be ok