A reader, December 13, 2002 - 9:35 am UTC
Is there a way to issue some SQL statement
to shrink the datafile for UNDO tablespace
after the batch process while the database is online?
I tried ALTER DATABASE DATAFILE ... RESIZE, but I always get:
ORA-03297: file contains used data beyond requested RESIZE value
December 13, 2002 - 9:48 am UTC
why? it is just going to grow again -- it is a waste of time. What is the point here? You will just end up growing it all over again tonight.
A reader, December 13, 2002 - 9:56 am UTC
The batch process is run probably once a year or so.
I don't want to keep a huge undo file around for
the rest of the 364 days and I don't want to go out
to buy disks.
So please just treat it as a theoretical interest
and give me a straight answer.
Thank you,
December 13, 2002 - 12:24 pm UTC
ok, as a matter of theoretical interest: read the chapter in the admin guide on managing undo. There you can find out about
ALTER SYSTEM SET UNDO_TABLESPACE = ....
whereby you can create your tiny undo, switch to it, drop the old.
straight enough?
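For reference, the sequence described above looks roughly like this (the tablespace names, file name and sizes here are made up for illustration):

```sql
-- Create a small replacement undo tablespace
CREATE UNDO TABLESPACE undotbs_small
  DATAFILE '/u01/oradata/db/undotbs_small01.dbf' SIZE 100M;

-- Switch the instance over to it
ALTER SYSTEM SET undo_tablespace = undotbs_small;

-- Once the old undo segments are no longer busy, drop the old tablespace
DROP TABLESPACE undotbs INCLUDING CONTENTS AND DATAFILES;
```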
Oh, Tom!
CJ, December 13, 2002 - 10:05 am UTC
Tom, how do you expect to get those 5-star ratings when you only give the best/right answer? You need to start giving the answers people want! :->
Steve, December 13, 2002 - 10:30 am UTC
Thank You Tom , you hit the nail on the head. WHAT IS the point!! The Undo tablespace is only going to grow again, if not tomorrow, then in 364 days when the large batch file is executed again.....
3.5 Gig :-)
Jan van Mourik, December 13, 2002 - 10:58 am UTC
hk, December 21, 2002 - 7:36 am UTC
Original questioner asked also:
"How do I know if I actually do need 3.5G undo space
or Oracle is not reusing the undo space where it should
be?"
That is a good question anyway, and it's still open here.
If the undo tablespace is 100 MB and we run some batch job that takes 95 MB of undo space, the undo tablespace is 95% full when the batch job is done. Then another transaction starts immediately and needs over 5 MB of undo space. Is undo_retention ignored, so the transaction reuses the undo space?
If undo_retention is not ignored and we have autoextend on, we may want to decrease undo_retention when we run many batch jobs. Or we have to configure a dedicated undo tablespace for batch jobs.
December 21, 2002 - 9:09 am UTC
it is not open -- they used 3.5 gig of undo space, they need 3.5 gig of undo space. sorry, thought that was clear.
As for the 95% question -- IF you said "retain undo for 5 minutes" and 5 minutes hasn't gone by -- well, we won't reuse it yet.
Or third option: you could size undo to satisfy your needs year round and be done with it?
hk, December 21, 2002 - 10:21 am UTC
OK, it's not open -- but treating it as a new question: how do we know whether the undo tablespace is used for "actual" undo, or whether it is full only because the retention time hasn't passed yet?
Thanks for answering the "95% question". I asked because I have heard that the undo_retention parameter is only a "directive", and that undo space is reused when the undo tablespace is full regardless of the value of undo_retention.
December 21, 2002 - 10:46 am UTC
it doesn't matter if it is used for "actual" undo or because of the retention period -- it is all "actual" undo.
But anyway, select sum(used_ublk) from v$transaction will tell you how much undo is being used for current, right now, transactions.
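Put together as a script, that check might look like this (the v$transaction/v$session join on the transaction address is a common way to see which sessions hold that undo):

```sql
-- Total undo blocks held by transactions that are active right now
SELECT SUM(used_ublk) AS undo_blocks_in_use
  FROM v$transaction;

-- Per-session breakdown: which sessions hold how many undo blocks/records
SELECT s.sid, s.username, t.used_ublk, t.used_urec
  FROM v$transaction t, v$session s
 WHERE s.taddr = t.addr;
```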
And -- allow me to clarify. IF the undo tablespace can grow to accommodate the undo retention period -- it will. If it cannot -- it will not. So consider this example:
ops$tkyte@ORA920> @test
<b>shows my undo tablespace is 1m right now.
The biggest it can autoextend to is 2gig and it'll grow in 1m increments (I know that because I created it that way; this report doesn't show the 1m increment)
</b>
MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS 1,024 960 64 93.8 2,088,960 .0
.....
------------ ------------ ------------
sum 2,001,920 1,551,936 449,984
13 rows selected.
ops$tkyte@ORA920>
ops$tkyte@ORA920> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 10800
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS
ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;
Table dropped.
<b> my undo retention is 3 hours -- 10,800 seconds...</b>
ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );
Table created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );
1 row created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> begin
2 for i in 1 .. 500
3 loop
4 update t set x = i, y = i, z = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
<b>now each of those transactions is 6+ kbytes of undo -- 3 * 2000 byte "before images" to save off... That should generate well over 3meg of undo by the time it is done BUT in 500 tiny transactions.
If the undo retention period is 3hours and I have 1meg of undo and that 1meg of undo can grow to 2gig -- Oracle will grow it and we can see that:</b>
ops$tkyte@ORA920> set echo off
MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS 5,120 4,608 512 90.0 2,088,960 .2
.....
13 rows selected.
<b>the RBS is now 5m with 4.6 meg "used" (well, none of the undo is really used right now, it is just going to sit there for 3 hours waiting to be reused).
Now I do this:</b>
ops$tkyte@ORA920> create undo tablespace undotbl_new datafile size 1m;
Tablespace created.
ops$tkyte@ORA920> alter system set undo_tablespace = undotbl_new scope=both;
System altered.
ops$tkyte@ORA920> drop tablespace undotbs;
Tablespace dropped.
ops$tkyte@ORA920> exec print_table( 'select * from dba_data_files where tablespace_name = ''UNDOTBL_NEW'' ' );
FILE_NAME : /usr/oracle/ora920/OraHome1/oradata/ora920/o1_mf_undotbl__z0936pcx_.dbf
FILE_ID : 2
TABLESPACE_NAME : UNDOTBL_NEW
BYTES : 1048576
BLOCKS : 128
STATUS : AVAILABLE
RELATIVE_FNO : 2<b>
AUTOEXTENSIBLE : NO
MAXBYTES : 0
MAXBLOCKS : 0
INCREMENT_BY : 0</b>
USER_BYTES : 983040
USER_BLOCKS : 120
-----------------
PL/SQL procedure successfully completed.
<b>
And I rerun the test:</b>
ops$tkyte@ORA920> drop table t;
Table dropped.
ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );
Table created.
ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );
1 row created.
ops$tkyte@ORA920> begin
2 for i in 1 .. 500
3 loop
4 update t set x = i, y = i, z = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> set echo off
% MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ------- ------
*UNDOTBL_NEW 1,024 1,024 0 100.0 0 .0
13 rows selected.
<b>and here, we can see that the undo tablespace is still 1m. Oracle could not grow the undo -- but it did not fail the transactions.
So, in that respect, yes, the undo retention can be thought of as a "desire" -- if there is no way to get more undo space AND the existing undo space can be reused, it will reuse it. If the datafiles are autoextend, or the undo tablespace is big enough all by itself, it will not reuse it.</b>
hk, December 21, 2002 - 11:34 am UTC
Thanks Tom! I think this issue is very clear now.
Undo retention and flashback queries
Logan Palanisamy, February 11, 2003 - 1:25 pm UTC
Tom,
I was under the impression that, if the undo_retention=1800, I can "always" flashback 30 minutes into the past.
Looks like that is not the case if the undo_tablespace reaches its maxsize and can not grow any further, and hence the undo space is reused even before the undo_retention time is reached.
Am I correct?
Thanks in advance
February 11, 2003 - 4:57 pm UTC
correct.
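As a sketch of what that means in practice (the table name and interval here are made up), a flashback query that reaches back within undo_retention can still fail once that undo has been overwritten:

```sql
-- Works only if the undo from 30 minutes ago is still in the undo
-- tablespace; if unexpired undo was reused because the tablespace could
-- not grow, this raises ORA-01555 even though undo_retention = 1800
SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);
```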
HELP WANTED, PLEASE.
David Jiang, June 14, 2003 - 9:40 pm UTC
Tom,
We got stuck with our import process on a huge database; one of the tables is about 7 gig. In the middle of the import, it failed on this giant table with the UNDO tablespace reaching around 32.7 gig. We created another small, new UNDO tablespace and tried to drop the original UNDO tablespace, but it gave an error message (SYSSMU7$ is in use).
I am using the following init file parameters for this instance:
undo_management=AUTO
undo_retention=180
undo_tablespace=UNDOTBS1
1. How do I shrink the original UNDO tablespace back to 300 MB? Right now it shows 32.7 GB full and it is not emptying.
2. How do I get rid of the above error message?
3. What do I need to do to import this very big table so that my UNDO tablespace size stays around 200 MB?
Also when I tried to switch new small UNDO tablespace, I got this error message in alert log:
SMON: mark undo segment 7 as available
SMON: about to recover undo segment 7
June 15, 2003 - 9:38 am UTC
1) wait for the rollback to finish, create a new undo tablespace, switch over to it and drop the old. The "busy" rbs will eventually become "unbusy".
2/3) commit=y on the import might help. It sounds like you must be importing into an indexed table -- I suggest you don't do that. Import into an UNINDEXED table. If you can, put the db in noarchivelog mode for the duration of the import (avoids redo generation on the index creates) or create the indexes yourself using NOLOGGING.
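A sketch of what that invocation might look like with the classic imp utility (the user, file and table names are made up; COMMIT=Y commits after each array insert so the transaction -- and hence the undo -- stays small, and INDEXES=N skips index maintenance during the load):

```shell
imp userid=scott/tiger file=big_table.dmp tables=big_table \
    commit=y indexes=n buffer=10485760 log=imp_big_table.log
```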
Peculiar Instance Behavior
A reader, July 09, 2003 - 9:43 am UTC
Tom,
I executed a large number of direct load inserts, and the UNDO tablespace extended to about 33gig.
My undo management is AUTO, and my UNDO_RETENTION was 10800.
I left work while it was still running, and when I came in the next day, I saw that the machine was rolling redo logs (10m) about 1 every 2 seconds. So I bumped the redo log sizes to 100m to reduce the overhead. Then I got to thinking. What is Oracle doing... The loading was done. No one was connected to the machine, and no jobs were running at all.
I looked in Top Sessions, and saw a P000 process executing 98% of the I/O, and the system was under a load of about 25% of the single CPU.
I queried v$undostat and saw the TXNCOUNT column for one segment increasing steadily.
I issued a shutdown immediate, but it hung obviously because the ongoing processing needed to finish before shutdown.
I then changed the UNDO_RETENTION to 120, thinking that Oracle was performing some sort of admin process on the vast amount of undo, and changing that value may tell it to forgo that process.
It continued to spin through redos... .. .
Can you give me an indication of what was going on?
If you need more info, just ask.
Thanks much
July 09, 2003 - 11:34 am UTC
P000 is a pq slave, someone was running something somewhere.
UNXP* Columns in v$undostat
Vivek Sharma, October 09, 2003 - 5:59 am UTC
Dear Tom,
Could you please tell us the significance of UNXPSTEALCNT, UNXPBLKRELCNT, UNXPBLKREUCNT columns in v$undostat. I am not able to get much information from anywhere.
Metalink Note : 180946.1 says that
UNXPSTEALCNT
NUMBER
The number of attempts to obtain undo space by stealing unexpired extents from other transactions.
UNXPBLKRELCNT
NUMBER
The number of unexpired blocks removed from certain undo segments so they can be used by other transactions.
UNXPBLKREUCNT
NUMBER
The number of unexpired undo blocks reused by transactions
But what is the difference between the three.
Thanks in Advance.
Regards
Vivek Sharma
October 09, 2003 - 4:53 pm UTC
first is extents (extents are groups of blocks)
the second is blocks that were made available to be reused (since each extent could have a different number of blocks...)
third is blocks actually reused
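A query that puts the three columns side by side (v$undostat keeps one row per 10-minute interval):

```sql
-- Recent undo activity, newest interval first
SELECT begin_time, end_time, undoblks, txncount,
       unxpstealcnt,   -- attempts to steal unexpired EXTENTS
       unxpblkrelcnt,  -- unexpired BLOCKS made available for reuse
       unxpblkreucnt   -- unexpired BLOCKS actually reused
  FROM v$undostat
 ORDER BY begin_time DESC;
```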
Different Number of Blocks in 2 Extents
Vivek Sharma, October 10, 2003 - 12:01 am UTC
Dear Tom,
Thanks for your response. It was very helpful.
But for your comments
"the second is blocks that were made available to be reused (since each extent
could have a different number of blocks...)"
How is it possible that 2 extents can have a different number of blocks? Since the undo tablespace is created with a standard block size, say 4k, an extent size of 1 MB will have 256 blocks.
Please clarify my doubt to help me understand this concept in detail.
Regards
Vivek Sharma
October 10, 2003 - 8:02 am UTC
but AUM uses locally managed tablespaces with system allocated extents. They are not UNIFORM.
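You can see the varying extent sizes directly in the standard dictionary view:

```sql
-- In a system-allocated locally managed tablespace, extents come in
-- several sizes, so a per-size count per segment shows the mix
SELECT segment_name, bytes AS extent_size, COUNT(*) AS extent_count
  FROM dba_undo_extents
 GROUP BY segment_name, bytes
 ORDER BY segment_name, bytes;
```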
simple undo question.
A reader, November 05, 2003 - 10:26 am UTC
case 1: Insert 1000 rows in a loop with an insert
statement in between.
case 2: Insert same 1000 rows in a single insert statement.
The undo generated in case 1 is more than in case 2.
I know that row by row is the same as "slow by slow", but can you give me the exact reason for this overhead? It has to be some undo overhead with each statement, but what exactly is it?
thanx!
November 05, 2003 - 6:10 pm UTC
each statement will generate some undo just as "part of being"
1000 statements = 1000 "parts of being" vs just 1 "parts of being"
every statement generates some amount of undo and redo. multiply that by 1,000
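One way to see this for yourself (a sketch -- the table T and the row counts are made up) is to watch the 'undo change vector size' statistic for your own session before and after each approach:

```sql
-- Undo generated by this session so far, in bytes; run before and
-- after each case and compare the deltas
SELECT b.value
  FROM v$mystat b, v$statname n
 WHERE n.statistic# = b.statistic#
   AND n.name = 'undo change vector size';

-- Case 1: 1,000 single-row statements -- the delta includes the
-- per-statement undo overhead multiplied by 1,000
BEGIN
    FOR i IN 1 .. 1000 LOOP
        INSERT INTO t VALUES ( i );
    END LOOP;
    COMMIT;
END;
/

-- Case 2: one 1,000-row statement -- the delta will be noticeably smaller
INSERT INTO t SELECT rownum FROM all_objects WHERE rownum <= 1000;
COMMIT;
```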
Cursor instead of for loop...
Steve, November 14, 2003 - 7:14 am UTC
Hi Tom,
Platform: Solaris 8 64 bit. Oracle 9.2.0.1
=========
UNDO TABLESPACE: 3 Gigs
UNDO Management -> AUTO
I have your book and read very carefully on ORA-01555.
And I understand (at least I think I do) why it occurs
when committing inside cursors.
But I'm still a little confused why we sometimes get
"Unable to extend RBS" (even when no cursors are involved).
It's not consistent and we don't see a pattern.
So I ran some tests. First of all, I'm all alone
on the server and I have the following table (T)
with 100 000 rows:
SQL> desc t
Name Type
------- -----------
A CHAR(2000)
B CHAR(2000)
C CHAR(2000)
D CHAR(2000)
E CHAR(2000)
F CHAR(2000)
G CHAR(2000)
Then I did a bunch of statements as follows:
SQL> update t set a=b, b=c,c=d,d=e,e=f;
SQL> commit;
SQL> update t set a=b, b=c,c=d,d=e,e=f;
SQL> commit;
SQL> update t set a=b, b=c,c=d,d=e,e=f;
SQL> commit;
.
.
.
While these were running, I was monitoring the RBS.
I saw the transactions go from one RBS to the next.
The active RBS would grow, sometimes as much as 800M.
After the commit, another RBS would be chosen and
grow, etc....
Eventually, it got to the point where more space was
needed for the current active transaction/RBS. That's
when I saw the other RBS start to shrink. So far so
good. In fact, I had trouble reproducing the
"Unable to Extend" error.
So I created a shell script that did the following
LOOP
print LOOP number
Take snapshot of RBS
update t set a=b, b=c,c=d,d=e,e=f where
rownum < 50000;
commit;
Take snapshot of RBS
END LOOP
I let it run all night. It took 30 iterations
before getting the first "Unable to Extend" error.
In the whole 545 iterations, it happened 41
times.
Since I'm alone on the system:
o I don't see, first of all, why it is happening
o I don't see a pattern.
Can you please help me understand this one and also
understand when Oracle decides to shrink other RBS
to allow the current one to extend.
Thanks Tom.
Here's a snapshot the first 2 times when it occurs
(on Loop # 30 and 95).
==========================================
RUN Number 29
RBS Before:
===========
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 79 589.11 469 657 792.17
_SYSSMU2$ 2 .11 663 695 1717.98
_SYSSMU3$ 2 .11 740 832 1743.98
_SYSSMU4$ 2 .11 211 212 782.86
_SYSSMU5$ 52 400.11 725 812 1733.98
_SYSSMU6$ 103 789.11 903 1011 1759.17
_SYSSMU7$ 89 671.11 918 939 1717.98
_SYSSMU8$ 43 311.11 795 1033 1721.98
_SYSSMU9$ 26 192.11 688 724 787.86
_SYSSMU10$ 6 32.11 661 662 1728.98
49999 rows updated.
Commit complete.
RBS After:
===========
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 63 461.11 469 657 792.17
_SYSSMU2$ 2 .11 663 695 1717.98
_SYSSMU3$ 199 787.86 937 1030 1743.98
_SYSSMU4$ 3 .17 212 212 782.86
_SYSSMU5$ 36 272.11 725 812 1733.98
_SYSSMU6$ 88 669.11 904 1011 1759.17
_SYSSMU7$ 71 539.11 918 939 1717.98
_SYSSMU8$ 28 191.11 795 1033 1721.98
_SYSSMU9$ 11 72.11 688 724 787.86
_SYSSMU10$ 2 .11 661 662 1728.98
==========================================
RUN Number 30
RBS Before:
===========
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 63 461.11 469 657 792.17
_SYSSMU2$ 2 .11 663 695 1717.98
_SYSSMU3$ 199 787.86 937 1030 1743.98
_SYSSMU4$ 3 .17 212 212 782.86
_SYSSMU5$ 36 272.11 725 812 1733.98
_SYSSMU6$ 88 669.11 904 1011 1759.17
_SYSSMU7$ 71 539.11 918 939 1717.98
_SYSSMU8$ 28 191.11 795 1033 1721.98
_SYSSMU9$ 11 72.11 688 724 787.86
_SYSSMU10$ 2 .11 661 662 1728.98
update t set a=b, b=c,c=d,d=e,e=f where rownum < 50000
*
ERROR at line 1:
ORA-30036: unable to extend segment by 512 in undo tablespace 'UNDOTBS'
Commit complete.
RBS After:
===========
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 81 565.17 487 737 792.17
_SYSSMU2$ 2 .11 663 695 1717.98
_SYSSMU3$ 92 715.11 937 1030 1743.98
_SYSSMU4$ 2 .11 212 212 782.86
_SYSSMU5$ 36 272.11 725 812 1733.98
_SYSSMU6$ 87 661.11 904 1011 1759.17
_SYSSMU7$ 70 531.11 919 939 1717.98
_SYSSMU8$ 27 183.11 795 1033 1721.98
_SYSSMU9$ 10 64.11 688 724 787.86
_SYSSMU10$ 2 .11 661 662 1728.98
==========================================
.
.
.
All runs up to run # 95 worked. Here's run #95:
.
.
.
RUN Number 95
RBS Before:
===========
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 2 .11 1064 1437 792.17
_SYSSMU2$ 76 513.11 1498 1834 1717.98
_SYSSMU3$ 90 672.11 1557 1945 1743.98
_SYSSMU4$ 23 168.11 819 896 785.11
_SYSSMU5$ 12 80.11 1311 1461 1733.98
_SYSSMU6$ 56 424.11 1800 2545 1759.17
_SYSSMU7$ 2 .11 1734 2211 1717.98
_SYSSMU8$ 52 348.11 1594 2044 1721.98
_SYSSMU9$ 230 787.86 1407 1452 787.86
_SYSSMU10$ 2 .11 1476 1691 1728.98
update t set a=b, b=c,c=d,d=e,e=f where rownum < 50000
*
ERROR at line 1:
ORA-30036: unable to extend segment by 512 in undo tablespace 'UNDOTBS'
Commit complete.
RBS After:
===========
Session altered.
SYSDATE
-------------------
13-11-2003 19:33:47
NAME EXTENTS RSSIZE EXTENDS WRAPS HWMSIZE
----------- -------- --------- -------- ----- ---------
SYSTEM 7 .42 0 0 .42
_SYSSMU1$ 2 .11 1065 1437 792.17
_SYSSMU2$ 68 449.11 1498 1834 1717.98
_SYSSMU3$ 80 595.11 1557 1945 1743.98
_SYSSMU4$ 14 96.11 819 896 785.11
_SYSSMU5$ 3 8.11 1311 1461 1733.98
_SYSSMU6$ 47 352.11 1800 2545 1759.17
_SYSSMU7$ 159 566.98 1891 2368 1717.98
_SYSSMU8$ 41 266.11 1594 2044 1721.98
_SYSSMU9$ 86 659.11 1407 1452 787.86
_SYSSMU10$ 2 .11 1476 1691 1728.98
ORA:- 30036
Arindam, November 14, 2003 - 3:27 pm UTC
Hi Tom,
I got this opportunity to ask you this question regarding tablespace:
Oracle version :
Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production
PL/SQL Release 9.2.0.2.0 - Production
CORE 9.2.0.2.0 Production
TNS for Solaris: Version 9.2.0.2.0 - Production
NLSRTL Version 9.2.0.2.0 - Production
UNDO Tablespace :- 10 gigs.
Now my question is we needed to update a table(200M records) . The table definition is :-
CREATE TABLE WARE_PROMO
(
CONTACT_ID NUMBER(20),
SITE_ID NUMBER(20),
CORP_ID NUMBER(20),
CONTACT_ID_PROMO NUMBER(20),
CONT_NAME_FLAG CHAR(1),
BUS_NAME_FLAG CHAR(1),
ZIP4_FLAG CHAR(1),
POBOX_FLAG CHAR(1),
HIGHRISE_FLAG CHAR(1),
BUS_RES_FLAG CHAR(1),
QUALITY_CODE CHAR(2),
STATE CHAR(2),
ZIP CHAR(5),
GENDER CHAR(1),
PREFERENCE_FLAG CHAR(1),
CAMPAIGN_ID NUMBER(6),
VERSION_ID NUMBER(8),
SEGMENT_ID NUMBER(10),
LIST_SRC_ID NUMBER(8),
ALLOC_SRC_ID NUMBER(8),
PROMO_SUB_TYPE CHAR(1),
LIST_NUM NUMBER(4),
DOMS_KEY_CODE NUMBER(5),
CO_NUM CHAR(2),
HOLDOUT_FLAG NUMBER(1),
CONT_EMAIL_FLAG CHAR(1),
CONT_REP_FLAG CHAR(1),
ASSOC_SRC_COUNT NUMBER(2),
COMPILED_SRC_COUNT NUMBER(2),
COMARKET_SRC_COUNT NUMBER(2),
DELL_SRC_COUNT NUMBER(1),
RENT_IN_SRC_COUNT NUMBER(2),
RENT_OUT_SRC_COUNT NUMBER(2),
PRIORITY_SRC_GROUP NUMBER(1),
CONT_MULTIBUYER NUMBER(2),
LAST_CONT_UPD_IN_DB DATE,
MOS_SINCE_LAST_DB_UPD NUMBER(2),
CONT_DELL_BUS_SEG CHAR(1),
CONT_DELL_DART_SEG CHAR(1),
MOS_CONT_ACTIVE NUMBER(2),
CONTS_PROMO_PER_SITE NUMBER(2),
SITE_REVENUE NUMBER(8),
CONT_LAST_PROMO_DATE DATE,
OUT_OF_BUS_FLAG CHAR(1),
CONT_DELL_UNITS NUMBER(5),
CONT_DELL_REVENUE NUMBER(8),
CONT_DELL_STATUS CHAR(1),
SITE_DELL_STATUS CHAR(1),
PENETRATION_NUM NUMBER(3),
FIRST_LOB_PURCH CHAR(3),
DECILE_SCORE_ABI CHAR(2),
DECILE_SCORE_DUNS CHAR(2),
SIC4_CODE_FINAL CHAR(5),
PCSPEND_FINAL NUMBER(8),
NUM_EMP_FINAL NUMBER(4),
NUM_EMP_FINAL_CODE CHAR(1),
PURCH_FLAG CHAR(1),
COUPON_FLAG NUMBER(1),
BACKFEED_FLAG NUMBER(1),
ALC_ORD_NUM CHAR(5),
STREET_DATE DATE,
RECENCY_NUM_MOS NUMBER(2)
)
TABLESPACE DELL_WARE_TABLE_8M
PCTUSED 0
PCTFREE 0
INITRANS 1
MAXTRANS 255
NOLOGGING
PARTITION BY HASH (CONTACT_ID_PROMO)
PARTITIONS 128
STORE IN (DELL_WARE_TABLE_8M)
NOCACHE
NOPARALLEL
ENABLE ROW MOVEMENT;
We have bitmap indexes on contact_id/site_id/corp_id.
We have to update the column holdout_flag based on 14 values of the version_id column: if it matches, update holdout_flag = 1, else 0.
We tried to do in three different ways as listed below :-
i) update ware_promo
set holdout_flag = 0;
/
update ware_promo
set holdout_flag = 1
where version_id in (134,3456,136576....14 values)
ii)Writing PL/SQL (this one written by our DBA)
DECLARE
v_version_id ware_promo.version_id%TYPE;
v_holdout_flag ware_promo.holdout_flag%TYPE;
v_counter NUMBER(10) := 1;
CURSOR c_cursor IS
SELECT version_id, holdout_flag
FROM ware_promo
FOR UPDATE OF holdout_flag;
BEGIN
FOR v_ANYTHING in c_cursor LOOP
FETCH c_cursor INTO v_version_id, v_holdout_flag;
IF v_version_id in (
3090702,
3100123,
3100124,
3100125,
3100126,
3100404,
3100405,
3100502,
3100603,
3100703,
3100704,
3110119,
3110120,
3110122)
THEN
UPDATE ware_promo
SET holdout_flag = 1
WHERE CURRENT OF c_cursor;
ELSE
UPDATE ware_promo
SET holdout_flag = 0
WHERE CURRENT OF c_cursor;
END IF;
v_counter := v_counter + 1;
IF MOD(v_counter, 10000) = 0 THEN
COMMIT;
END IF;
END LOOP;
CLOSE c_cursor;
COMMIT;
END;
/
EXIT
iii) Writing PL/SQL (written by another developer)
DECLARE
TYPE t_ref_cur IS REF CURSOR;
TYPE NumList IS TABLE OF NUMBER;
v_ref_cur t_ref_cur;
v_ver_id_list NumList;
v_hold_flag_list NumList;
BEGIN
UPDATE ware_promo
SET holdout_flag = 0;
commit;
open v_ref_cur for
select version_id,holdout_flag
from ware_promo
where version_id in (3090702,3100123,3100124,3100125,3100126,3100404,3100405,
3100502,3100603,3100703,3100704,3110119,3110120,3110122);
fetch v_ref_cur bulk collect into v_ver_id_list, v_hold_flag_list LIMIT 1000;
forall i in 1 .. v_ver_id_list.count
UPDATE ware_promo
SET holdout_flag = 1
WHERE version_id =v_ver_id_list(i);
commit;
END;
/
EXIT
ALL OF THE PROCESSES FAILED WITH ORA-30036, which means we ran out of UNDO.
i) My question is: why does it "BOMB OUT"?
ii) What could have gone wrong?
iii) Is our 10 Gigs of UNDO really too SHORT?
iv) What is your advice on how we should approach achieving the same result, given the above three examples?
v) I have suggested creating a temp table having all the version_id's and holdout_flag updated, and then using MERGE to get the BASE table updated. What is your opinion on this?
PLEASE HELP !!!
Thanks
Arindam
November 15, 2003 - 8:55 am UTC
the right way:
update ware_promo
set holdout_flag = case when version_id in ( .... ) then 1 else 0 end;
10gigs may well be way too small, especially if there are lots of other transactions going on in there using rbs as well (especially long running ones).
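When even the single update would need more undo than can be given to it, a commonly used alternative (a sketch, not part of the original answer; the column list is abbreviated) is to rebuild the table with CREATE TABLE ... AS SELECT, which generates almost no undo for the row data:

```sql
-- CTAS rebuild: minimal undo (and, with NOLOGGING, minimal redo) for the
-- new rows; indexes, constraints and grants must be recreated afterwards
CREATE TABLE ware_promo_new NOLOGGING AS
SELECT contact_id, site_id, /* ... remaining columns ... */
       CASE WHEN version_id IN ( 3090702, 3100123 /* , 12 more values */ )
            THEN 1 ELSE 0 END AS holdout_flag
  FROM ware_promo;

DROP TABLE ware_promo;
RENAME ware_promo_new TO ware_promo;
```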
Comment on previous question
Steve, November 15, 2003 - 9:27 am UTC
Hi Tom,
I was wondering if you could comment on the question before Arindam's (before it gets buried in this thread...)
Thanks Tom
Steve
November 15, 2003 - 10:01 am UTC
i've got a test case setup waiting to run overnight :)
that one takes a bit of work to reproduce. They are huge transactions. I let it run for 4 hours yesterday while I was meeting with someone (it sort of consumes my machine doing those massive updates)... didn't hit the problem. Trying with a longer test now.
Thanks!
Steve, November 15, 2003 - 12:30 pm UTC
Thanks Tom, I really appreciate it.
November 16, 2003 - 10:28 am UTC
well, i ran overnight -- i have a rather small machine, i was not able to reproduce -- but I could only do 62 iterations (not enough -- but I sort of needed my machine back :)
i'll have to try again later when I have more cycles.
Great BUT...
Arindam, November 15, 2003 - 9:58 pm UTC
This is a data warehouse and no other transaction is going on when THIS UPDATE is taking place. We queue up every update process and each one is done sequentially. Do you still think the UNDO is kind of small?
Please advise.
Thanks
Arindam
November 16, 2003 - 10:38 am UTC
in one word -- yes, apparently it is.
that 10gig is used by a couple (more than one) rbs. You can only use one rbs. 10gig for a DW seems pretty small to me. Let it grow (and use the SINGLE update, not the two-step, not the procedural code).
I'm also getting ORA-30036
Linda, November 18, 2003 - 5:16 pm UTC
This is a great site! Thanks for explaining things so well (I'm amazed at your patience!).
I'm testing a year-end update that updates lots of columns in a large table. We commit the update when the summary type and level change (part of the primary key), so the transactions have different sizes. If I use a small undo tablespace (250MB) and undo_retention is set to 10800, the program dies with ORA-30036 after less than a minute(52 transactions were committed and 61,246 records were updated). I watch v$rollstat, v$undostat and dba_free_space while the process is running and see that some unexpired undo is being reused. I've run this test several times and always get the error, although not always at the same spot.
When I reset undo_retention to 180, the program runs fine. With a larger undo tablespace and undo_retention set to 1800 it works some of the time.
Steve from Montreal's test would probably fail faster with a smaller undo tablespace and (possibly with) smaller transactions. I'm testing with a small test database and my program fails every time with a 250MB undo tablespace and undo_retention set to 10800.
Thanks!
November 21, 2003 - 11:10 am UTC
your undo tablespace is just plain "too small" for what you've asked for!
Sorry, now I'm confused
Linda, November 24, 2003 - 1:54 pm UTC
My program works with a short undo_retention (180), but fails with a longer undo_retention (10800). If the undo tablespace was too small, wouldn't it also fail with a short undo_retention?
November 24, 2003 - 2:04 pm UTC
[tkyte@tkyte-pc tkyte]$ oerr ora 30036
30036, 00000, "unable to extend segment by %s in undo tablespace '%s'"
// *Cause: the specified undo tablespace has no more space available.
// *Action: Add more space to the undo tablespace before retrying
// the operation. An alternative is to wait until active
// transactions to commit.
[tkyte@tkyte-pc tkyte]$
with a small undo retention, we didn't have to save as much, no need to extend -- we are not trying to keep undo -- we reuse it fast.
you asked to save a boatload of undo -- we grew the undo segments. Eventually -- we needed to grow some undo segment you were using but were unable to and hence you got the 30036 error.
when you said "don't save lots of undo", we never really grew or needed to grow the undo segments so we did not. hence, plenty of space for when we did need to grow them in reaction to a transaction.
Yes, I think I understand how it is supposed to work
Linda, November 25, 2003 - 10:15 am UTC
I think that my case is similar to Steve from Montreal's case, but I want to make sure that I'm not missing something. I'm alone on the system. There is only one transaction at a time. My understanding is that if there's not enough expired undo, the undo_retention 'suggestion' is ignored, and some of the unexpired undo is used.
I recreated the undo tablespace at 250M with autoextend off and left undo_retention set to 10800. I ran my test this morning and again monitored the undo info. This time the program finished, so I was wrong when I said that it always failed (I'd only run it twice before - my bad). I monitored several things while the program was running. Here is one of the selects:
SQL> select status, count(1)
2 from dba_undo_extents
3 group by status;
STATUS COUNT(1)
--------- ----------
EXPIRED 170
UNEXPIRED 26
Started the program ---
STATUS COUNT(1)
--------- ----------
ACTIVE 21
EXPIRED 14
UNEXPIRED 332
SQL> /
STATUS COUNT(1)
--------- ----------
ACTIVE 26
EXPIRED 14
UNEXPIRED 317
SQL> /
STATUS COUNT(1)
--------- ----------
ACTIVE 25
EXPIRED 15
UNEXPIRED 325
I think this shows that some of the unexpired extents were used (we didn't reach the undo_retention time during the program).
After the program finished:
STATUS COUNT(1)
--------- ----------
EXPIRED 1
UNEXPIRED 374
So, I ran the program again. This time it failed on an update of 18,800 rows. A transaction of 18,784 had completed a minute earlier and many smaller transactions had completed in between.
SQL> l
1 select status, count(1)
2 from dba_undo_extents
3* group by status
SQL> /
STATUS COUNT(1)
--------- ----------
EXPIRED 1
UNEXPIRED 415
Questions:
The error is:
Error: ORA-30036: unable to extend segment by 128 in undo tablespace 'UNDOTBS3'
1. Is this 128 bytes, blocks, or what? If there isn't room to extend the tablespace, does Oracle look for an unexpired undo segment of the size it wants to expire?
2. Should I be watching other things?
Thanks.
Linda
November 25, 2003 - 12:35 pm UTC
just remember -- you are NEVER the only one in Oracle - NEVER. backgrounds are always there too. but anyway.
we can only expire UNDO in the current rollback segment you are using, the other N rollback segments -- for your transaction -- don't count.
counting extents doesn't tell us much -- we use system allocated extents, you'd want to look at the SIZE of the rbs's -- not the number of extents they are in.
your rollback segments (you have many in there) are all of different sizes (they grew at different rates). the one you got when it failed just "was not large enough" and there was no more room to extend it. The other rbs's -- didn't even come into consideration at that point in time!
Thanks - now I understand!
Linda, November 25, 2003 - 5:43 pm UTC
Automatic RBS Shrinking?
striker, January 15, 2004 - 1:05 pm UTC
Suppose you are using Automatic Undo Management, and you have a pool of rollback segments that are managed by the database.
Is there ever a situation where the database will automatically reduce the size of an RBS (similar to ALTER ROLLBACK SEGMENT ... SHRINK)?
You've said in the past how it is beneficial to keep rollback segments similar in size when managing them manually. I wondered if the automatic approach would do something similar.
I can imagine a situation where a large daily batch transaction could make one segment grow much larger than the others. Over a period of days, if the transaction repeats, other rollback segments might get assigned to handle that transaction and result in all rollback segments growing up to the limits of retention period and undo tablespace.
Let's say one segment hasn't grown yet, and is handling the transaction on a certain day--if it is out of undo tablespace, can the database automatically shrink one or more of the other rollback segments to get the space it needs (assuming the other rollback segments have information that is older than the retention period)?
January 15, 2004 - 2:10 pm UTC
it will drop and create rbs's as needed.
need more -- creates
finished using them -- drops
if space goes out of control for whatever reason (that wild batch job) then
a) create a new small undo tablespace
b) alter system to the new one
c) after any active transactions complete in old just drop it
(that is, if shrinking the original datafiles isn't sufficient of course)
My rbs count went up to 46 on January 5th at about 10pm. We shrank our undo datafiles back down 12 hours later, after the newly added ones "went away"
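As a sketch, those a/b/c steps translate into SQL like the following (the tablespace names, file path, and sizes are made up for illustration):

```sql
-- a) create a new, small undo tablespace (name/path/sizes are hypothetical)
create undo tablespace undo_small
  datafile '/u01/oradata/mydb/undo_small01.dbf' size 100m
  autoextend on next 10m maxsize 2g;

-- b) switch the instance to it; new transactions pick it up immediately
alter system set undo_tablespace = undo_small;

-- c) after active transactions in the old tablespace complete
--    (check v$transaction), drop the old one
drop tablespace undotbs1 including contents and datafiles;
```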
Suggest the AUM
Friend, June 04, 2004 - 6:45 am UTC
Dear Tom
following is the undo setting in client DB
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 10800
undo_suppress_errors boolean FALSE
undo_tablespace string UNDO_DATA_TS
Please give inputs on setting the tablespace size and other settings to avoid ORA-01555
June 04, 2004 - 8:42 am UTC
figure out your longest running query time.
set undo_retention to that number.
then either:
a) set the undo tablespace files to autoextend so they can grow to whatever size is needed to hold the undo for that period of time
b) use OEM (Enterprise Manager) or query the v$undostat view to figure out how large you need to make the undo tablespace in order to hold that much information given your typical workload. OEM will just "tell you": you slide a little thing on a chart and it'll tell you "given your current workload, you will need XXX MB of undo space for YYY seconds of undo retention"
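The arithmetic behind option (b) is simple enough to do by hand: needed undo space ~ undo_retention x peak undo blocks generated per second (from v$undostat's UNDOBLKS over each interval) x db_block_size. A minimal sketch with illustrative numbers (not from any real system):

```python
def estimate_undo_mb(retention_secs, peak_undo_blocks_per_sec, block_size=8192):
    """Rough undo tablespace size (MB) needed to honor undo_retention.

    needed bytes = retention window * peak undo generation rate * block size
    """
    return retention_secs * peak_undo_blocks_per_sec * block_size / (1024 * 1024)

# e.g. undo_retention=10800s, a peak of 4 undo blocks/sec, 8K blocks
print(estimate_undo_mb(10800, 4))  # -> 337.5
```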
Top Sql
friend, June 05, 2004 - 2:49 am UTC
Dear Tom
Please suggest the exact/perfect way to work on top SQLs
June 05, 2004 - 8:49 am UTC
My first approach is to ask "is it reasonable?"
Could well be that your top sql will be your top sql forever -- and nothing you do will ever change that. Reason:
1) it is as efficient as it can be
2) it is mandatory that it be executed
otherwise, one of two things is true:
1) it is not as efficient as it could be and you tune it to be better. You rewrite it, you change the algorithm (eg: if your top query is a query that is executed 5,000,000 times in a loop - you would rewrite the algorithm to be set based instead of slow by slow (meaning row by row)).
2) it is not mandatory that you execute it -- hence you tune it out of existence. It is surprising how often THIS is the case! I love it when that happens.
Problem with developers
friend, June 05, 2004 - 10:30 am UTC
I have some seniors here who are developers and I am having problems convincing them. Anyway, I know you can't help with this...
Tom, thanks for your help
regards
@test scripts
Steven, September 22, 2004 - 8:47 pm UTC
Could you give me the @test scripts? I am trying to understand the context by your example.
ops$tkyte@ORA920> @test
shows my undo tablespace is 1m right now.
The biggest it can autoextend to is 2 GB and it'll grow in 1 MB increments (I know
that because I created it that way; this report doesn't show the 1 MB increment)
MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS 1,024 960 64 93.8 2,088,960 .0
.....
------------ ------------ ------------
sum 2,001,920 1,551,936 449,984
13 rows selected.
September 23, 2004 - 4:07 pm UTC
everything but free.sql is there (my tablespace report).
a search for free.sql will find it on this site.
but otherwise, the entire script *is there*, I run them with set echo on so you see what I saw.
I Want My 3.2663E+10 Bytes Back! :-)
Doug Wingate, April 05, 2005 - 6:01 pm UTC
Tom,
Thanks for applying so much of your amazing energy to helping us get answers to our questions.
Regarding resizing over-large datafiles belonging to undo tablespaces under AUM, you said, "We just shrank our undo datafiles back down 12 hours later after the newly added [undo segments] 'went away'." If I need to reclaim the space more quickly than that, is there a way to get SMON to go ahead and drop the undo segments? If not, then is the only remedy the one already mentioned, to create a new undo tablespace and then take the old one offline and drop it? I should mention that I'm using Oracle Database 10g.
Thanks!
April 05, 2005 - 7:00 pm UTC
Not that there isn't another way, just that I'm not aware of one (and cannot dig deeper right now).
create new, switch over, drop old after it is done being used -- I do not personally know of another documented method
Resize did not happen at the Operating System level
N.Venkatagiri, April 06, 2005 - 4:23 am UTC
We resized one undo tablespace a month ago.
SQL> SELECT SUM(BYTES)/1024000 FROM Dba_data_files where tablespace_name='UNDOTBS3';
SUM(BYTES)/1024000
------------------
9216
But in the Operating system level
the file size is still 16 gb.
What could be the reason for the same?
Before we recreate the same, we just want to identify the reason for this.
Thanks
April 06, 2005 - 6:43 am UTC
prove it?
show us the output from dba_datafiles showing the file name(s) and sizes
and show us the ls from the OS with the same names/sizes.
Undo Tablespace Size decreased -- Size not decreased at operating system level
N.Venkatagiri, April 07, 2005 - 2:25 am UTC
SQL> SELECT BYTES,SUBSTR(FILE_NAME,17),TABLESPACE_NAME FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='UNDOTBS03';
BYTES SUBSTR(FILE_NAME,17) TABLESPACE_NAME
---------------- ---------------------------------------- ------------------------------
9437184000 undotbs03.dbf UNDOTBS03
ls -lt undo*3*
1 16866353152 undotbs03.dbf
April 07, 2005 - 9:20 am UTC
I see no paths?
lets see *everything*
Undo tablespace filesize issue at Operating System level
N.Venkatagiri, April 07, 2005 - 2:31 am UTC
As per our policy I have hidden the folder names and changed the tablespace name.
Thanks
April 07, 2005 - 9:35 am UTC
well, I cannot verify anything -- sorry.
Very Useful solutions.
Jaywant, May 16, 2005 - 12:24 pm UTC
This thread was useful for me. All your answers are very practical.
Reason for undo tablespace space not released at OS level
N.Venkatagiri, May 19, 2005 - 5:23 am UTC
Tom,
With continuation to my previous post on this thread,
The reason for the problem reported is a bug in OCFS 1.0.11 which, as Support told us, is fixed in the next version. We have dropped and recreated the undo tablespace and regained the space.
Thanks
N.Giri
Re: Undo retention and flashback queries February 11, 2003
Joel Garry, June 02, 2005 - 2:50 pm UTC
>Reviewer: Logan Palanisamy from Santa Clara, CA, USA
>Tom,
>I was under the impression that, if the >undo_retention=1800, I can "always"
>flashback 30 minutes into the past.
>Looks like that is not the case if the undo_tablespace >reaches its maxsize and
>can not grow any further, and hence the undo space is >reused even before the
>undo_retention time is reached.
>Am I correct?
>Thanks in advance
>Followup:
>correct.
Should this be reworded "Correct if you do not have autoextend on"?
From Note 268870.1:
NOTE: The value for undo_retention also has a role in growth of undo tablespace. If there is no way to get the undo space for a new transaction, then the undo space (retention) will be reused. But, if the datafiles for undo tablespace are set to auto extensible, it will not reuse the space. In such scenarios new transaction will allocate a space and your undo tablespace will start growing.
My (9206 hp-ux) experience was Oracle wasn't smart enough to figure out the datafile had reached the maximum size of the file (which was much larger than the table), threw a 30036 "warning" and crashed the app, rather than reuse the space. Simply because of the autoextend. Not so sure I like the idea of an unlimited autoextend.
June 02, 2005 - 6:04 pm UTC
no, not really.
"IF it cannot grow any further" seems to cover it all.
My experience was different. That error gets thrown not because an autoextend failed -- but because perhaps you actually did NEED MORE space than could be allocated.
ops$tkyte@ORA9IR2> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 48844532 22163324 69% /
/dev/hda1 102454 14932 82232 16% /boot
none 1030804 0 1030804 0% /dev/shm
<b>/boot is small, not much room</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create undo tablespace testing datafile '/boot/test/testing.dbf'
2 size 1m
3 autoextend on
4 next 1m
5 /
Tablespace created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter system set undo_tablespace = testing;
System altered.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 10800
undo_suppress_errors boolean FALSE
undo_tablespace string TESTING
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
ops$tkyte@ORA9IR2> create table t ( x char(2000) );
Table created.
ops$tkyte@ORA9IR2> insert into t values ( 1 );
1 row created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
2 for i in 1 .. 30000
3 loop
4 update t set x = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> !df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 74807888 48844648 22163208 69% /
/dev/hda1 102454 96155 1009 99% /boot
none 1030804 0 1030804 0% /dev/shm
<b>that filled up boot</b>
ops$tkyte@ORA9IR2> begin
2 for i in 1 .. 30000
3 loop
4 update t set x = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
<b>that should have failed if a failed autoextend would return the 30036. that is not a "warning" you got, you really did run out of undo for your concurrently active transactions</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter system set undo_tablespace = undotbs;
System altered.
ops$tkyte@ORA9IR2> drop tablespace testing including contents and datafiles;
Tablespace dropped.
Joel Garry, June 03, 2005 - 1:28 pm UTC
Very convincing! I must've got mud in my eyes in the trenches.
June 03, 2005 - 1:37 pm UTC
no worries, they are somewhat hard to diagnose because by the time they are over - well, the evidence is generally "gone"....
It is like an ora-4030 or 4031
"I got it, but there was lots of free memory when I looked"
Of course there was, you were gobbling up memory -- failed, stopped (freeing memory) and then looked :)
Same thing with these sorts of errors. by the time you look -- everything looks very "available" -- your application got the error, failed (releases all undo it had) and it is hard to say "oh look, it really did run out of undo space"
I want straight answers, but I can't ask straight questions..!
Dave Martin, June 10, 2005 - 10:45 am UTC
Tom,
I want to create an undo tablespace and balance my parameters so that I can see that it stays about 80% full. I have colleagues that hate autoextend. If I have both empty space and expired inactive transactions (according to undo_retention), will new active transactions use the empty space or overwrite the expired inactive transactions?
How often do the expired inactive transactions get cleared out?
June 10, 2005 - 11:07 am UTC
why??? there is no such thing as "80% full" for an undo tablespace. The concept doesn't even really make sense.
undo segments and their extents are designed to be allocated -- up to 100% of the available space -- growing the files IF NEEDED to satisfy your undo retention.
undo space will go to 100% in order to satisfy your undo retention and then and only then will it start stealing expired or unexpired extents from itself and moving them around.
nothing is "cleared out", it is just "known to be reusable".
yeah okay,but..
Dave Martin, June 13, 2005 - 4:53 am UTC
..the issue with undo (or rbs, for that matter) is always that you can't tell by looking at it whether it needs more space. It's ALWAYS 100% full!
The answer I suppose is to make your datafiles autoextend to a limit and use that to monitor. If it can't extend anymore then you need to look at increasing your UNDO or investigating your longer transactions. This presupposes that Oracle will only resort to extending the tablespace if it cannot re-use the old undo.
For all I know, it might even say, "Hey, it's autoextend, let's just increase it 'til we run out of disk space; after that we'll look to see if we can re-use the old undo"! I guess that's not the case?
June 13, 2005 - 11:03 am UTC
you can see how much is used? v$undostat, v$rollstat, v$transaction - many things show you information about them. Just like temp has its set of v$ views.
it would only autoextend in order to fulfill your desire to have undo retained for a period of time. That is, it only autoextends when NOT autoextending would cause it to overwrite undo you asked to have "kept".
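As a sketch of the monitoring described here, a query like this against v$undostat shows, per 10-minute window, how much undo was generated and whether space pressure forced extent stealing (the column names are from the 9i/10g reference; verify against your version):

```sql
select begin_time, end_time,
       undoblks,        -- undo blocks consumed in the window
       txncount,        -- transactions in the window
       maxquerylen,     -- longest running query (seconds)
       unxpstealcnt,    -- times unexpired extents were stolen
       ssolderrcnt,     -- ORA-01555 count in the window
       nospaceerrcnt    -- ORA-30036 count in the window
  from v$undostat
 order by begin_time;
```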
More V$UNDOSTAT questions...
John Baughman, July 27, 2005 - 7:58 pm UTC
Tom,
I've been trying to figure out how to monitor our UNDO space and our UNDO_RETENTION on a 17GB UNDO tablespace with no file system space left to grow until we can add more disk space mid-month next month. This is an ongoing issue that periodically will come up again and again because of spending constraints. I am in UNDO_RETENTION monitor mode since we get ORA-01555 and ORA-30036 errors depending on the length of the load and the retention. We generally have small data loads during the week and our larger ones during the weekend. Right now we are juggling the larger ones so that they don't conflict and the system has time to clean up the undo as needed. But sometimes, we still run into errors.
What I am trying to understand right now is the several columns in V$UNDOSTAT. Here's one of the columns I am currently having trouble with:
UNDOBLKS - Oracle definition: "Represents the total number of undo blocks consumed. You can use this column to obtain the consumption rate of undo blocks, and thereby estimate the size of the undo tablespace needed to handle the workload on your system."
Is this the total blocks currently consumed at that moment in time, or is it the number of undo blocks consumed by the TXNCOUNT?
The other question I have is, how long does the data "normally" stay in V$UNDOSTAT? I am currently seeing up to 8 days worth of data. I read somewhere, I'm not sure about where, that the data is only good for 24 hours. This seems suspicious to me since every instance I've looked at has about up to 8 days worth. (I guess I might have answered that myself :) One of the reasons I am asking this is that I have one MAXQUERYLEN of about 38K seconds about 2 days ago that is throwing my average MAXQUERYLEN way off. The next longest one is about 14K seconds.
Are there any other columns in V$UNDOSTAT that I can utilize for "point in time" undo tablespace usage that will give me a clear view of overall usage at that moment? (kind of a duh question since it does give me data about usage, but just actual FULL undo tablespace usage is what I want)
Here is what I am trying to do: possibly compare the data in V$UNDOSTAT with the data in DBA_UNDO_EXTENTS to get a clear picture of how much space is being used and how it is being used. Is it a long query or something else? I am trying to create a small VB application to alert me when the undo tablespace is over a certain percentage rate and if I am going to see an ORA-* error due to the retention rate and the amount of undo being generated. Does that make sense?
Just for info's sake, I have recommended that when we get the additional disk space (200GB), we double, maybe even triple our current undo tablespace and set a retention of about 18K seconds. From what I have read, we shouldn't run into any problems then. Also, I have run the following query on our database and found the "optimal" retention to be MUCH lower than I would have expected.
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM ( SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
( SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
My results are:
ACTUAL UNDO SIZE [MByte]
------------------------
17000
UNDO RETENTION [Sec]
-------------------------
18000
OPTIMAL UNDO RETENTION [Sec]
----------------------------
3338
The 18000 retention parameter is set so that the longer queries don't get ORA-01555 errors. If one of our people does an "unscheduled" data load during the week, we run this. And sometimes, we fill the undo tablespace...
Thanks for any help! (Boy can I write a lot... Does it make sense though...)
July 27, 2005 - 8:49 pm UTC
v$undostat should be counts of things between begin and end time. How much you used in that period of time. so it would be blocks used independent of transactions.
so, in a way, it is the number of blocks consumed by txncount too -- since txncount is the count of txn's in that window of time.
this view is like a histogram, a time plot.
the view documentation says:
<quote>
Each row in the view keeps statistics collected in the instance for a 10-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 1008 rows, spanning a 7 day cycle.
</quote>
v$transaction (used_ublk) will be "point in time", v$undostat is 10 minute windows over time.
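A sketch of that "point in time" view, joining v$transaction to v$session (the 8K block size in the arithmetic is an assumption; substitute your db_block_size):

```sql
select s.sid, s.username,
       t.used_ublk,                          -- undo blocks held right now
       t.used_urec,                          -- undo records held right now
       t.used_ublk * 8192 / 1024 as undo_kb  -- assumes an 8K block size
  from v$transaction t, v$session s
 where t.addr = s.taddr;
```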
before dropping...!
Reader, July 28, 2005 - 8:56 am UTC
tom,
How can I know whether my old undo tablespace contains ACTIVE transactions?
If the old undo TS has active transactions, will Oracle allow me to drop it? Or
do we need to check prior to dropping?
July 28, 2005 - 10:23 am UTC
it won't let you drop it, but you can peek in v$transaction.
Don't forget, it is needed for read consistency as well - if you drop it right after you stop using it, you might inflict 1555's on yourself (eg: no harm in waiting for a while)
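A sketch of that pre-drop check -- which undo segments in the old tablespace still carry active transactions (the tablespace name 'UNDO_OLD' is a placeholder):

```sql
select t.xidusn, r.segment_name, r.tablespace_name, t.status
  from v$transaction t, dba_rollback_segs r
 where t.xidusn = r.segment_id
   and r.tablespace_name = 'UNDO_OLD';  -- placeholder name
-- no rows returned => no active transactions in that tablespace
```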
Thanks!
John Baughman, July 28, 2005 - 11:00 am UTC
That was what I was looking for!
Excellent thread on UNDO management
A reader, April 13, 2006 - 5:27 am UTC
why not to keep undo large
Dean, May 17, 2006 - 10:13 pm UTC
Very early in the thread you noted that since you will need large undo, don't bother to shrink it.
Some users will have batch jobs, or have runaway jobs that inflate the undo beyond normal uses and will want to reduce it.
But, regardless of why you may want to reduce it, isn't not having to back up a potentially huge data file (if using cold backups or image copies) a great reason to want to shrink it?
May 18, 2006 - 10:49 am UTC
You would have to quantify that - how much time do you feel you would be able to "save" and is the amount of work you have to expend to save that much time "worth it".
Quantify it, then you will likely be able to decide if it means anything to you.
Reuse option in datafile.
NR, June 22, 2006 - 12:06 pm UTC
I have got this legacy system where I have several partitioned tables for Hourly, Daily, Weekly and Monthly data.
Now all the hourly partitioned indexes are stored in one tablespace with one datafile
Create tablespace hourly_ind_ts datafile 'index_ts_ind01.dbf' size 1000M reuse;
Now I am getting an "unable to extend" error on index segments. As far as I understand, the reuse option will drop the previous contents of the file and create a new one. The system is adding data every hour and I keep getting this error.
Question: Will resizing the file to a bigger size work?
Question: If I add an additional file with reuse, will it be used or not?
e.g. alter tablespace hourly_ind_ts add datafile 'hourly_ts_ind02.dbf' size 4096M reuse;
Is my understanding of the reuse option right, or is there more to it?
June 22, 2006 - 2:42 pm UTC
reuse will reuse (overwrite) an existing file.
you are simply running out of space.
yes, resizing larger will allow you to allocate more extents.
if you add a file (reuse or not, doesn't really MATTER), you will be able to allocate more extents.
reuse only lets oracle overwrite an existing file, nothing after that.
UNDO_TABLESPACE
Hitesh, July 02, 2006 - 8:46 am UTC
Hi Tom,
During index creation on a 1 billion row table, the UNDO_TABLESPACE UNDO_T1 crossed 85G, so I decided to create another tablespace, UNDO_T2, of 16G for our normal day-to-day operations.
I switched from UNDO_T1 to UNDO_T2 so that new transactions would go to the new undo tablespace.
Alter system set undo_tablespace=UNDO_T2 scope=BOTH;
After seeing in the V$TRANSACTION view that there were no transactions happening in the database, I shut the database down, changed the init.ora to reflect undo_tablespace=UNDO_T2, and bounced the database once again.
I deliberately took the tablespace UNDO_T1 offline, so that no transactions could register in UNDO_T1, but very surprisingly the transactions kept looking for UNDO_T1 and failed saying 'file 2 cannot be read at this time'.
Luckily, I brought the tablespace back ONLINE and the transactions went through.
Why couldn't the transactions get extents from UNDO_T2, and why were they still looking at UNDO_T1?
My undo_retention is 36000 seconds, but there were no select statements needing a read-consistent view of the data, and I had even bounced the database to avoid any snapshot-too-old errors.
Even at this stage I could see :
SQL>Select tablespace_name , status, count(*) from dba_undo_extents
2 group by tablespace_name , status ;
TABLESPACE_NAME STATUS COUNT(*)
------------------------------ --------- ----------
UNDO_T1 EXPIRED 2262
UNDO_T2 ACTIVE 1
UNDO_T2 EXPIRED 3803
UNDO_T2 UNEXPIRED 337
All the extents allocated to UNDO_T1 are EXPIRED, but I am not sure whether I can safely drop it.
Even the space acquired by the UNDO_T1 is
SIZE USED %USED
85716 3457 4.03
Why is it showing 4% of the space being used? I am actually quite stunned by this behaviour.
Can you please let me know how safely it can be dropped.
Thanks in advance,
Hitesh
July 07, 2006 - 3:00 pm UTC
Read consistency will still look to that undo segment - when it hits blocks that haven't been cleaned out
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:275215756923
You can likely drop it by now - as time has passed and we will just "know" the transaction that dirtied the block committed (long ago) and won't need to look at the undo segment header to verify the timing of it anymore.
Undo segments are still in there, still allocated - they are the "4% used", nothing remarkable about it.
Switching Undo tablespace
Hitesh, July 03, 2006 - 7:21 am UTC
Hi Tom,
Can you please suggest some tips on the above posting.
Thanks
Hitesh
July 07, 2006 - 3:29 pm UTC
sorry, I took a vacation :)
undo_retention
F. Walton, August 02, 2006 - 12:14 am UTC
Tom,
You mentioned somewhere that making undo_retention unreasonably large will have a negative performance impact. Could you explain that?
August 02, 2006 - 11:33 am UTC
I doubt I said "will have", I probably said "might" or "could" or "under some circumstances will"
say you have a very very small undo tablespace (forget the retention for a moment) and it is fixed size.
It consumes X number of blocks at most.
X fits in the cache nicely.
We wrap around in the undo segments 5 times (using 5X blocks) before a checkpoint occurs. We checkpoint the X blocks.
Ok, now say the undo tablespace is 5X.
We never wrap around. But because 5X blocks won't fit into the buffer cache, we have to prematurely checkpoint some of them to make their buffer cache space available for another block. At the checkpoint, we checkpoint any remaining blocks in the cache.
We write 5X blocks of undo this time.
An unnecessarily large set of undo segments MAY potentially increase the amount of WRITE IO your system does.
Undo tablespace is growing with retention=0
Jagjeet Singh, October 07, 2006 - 6:39 am UTC
Hi Tom,
Can you please look at this test case?
o I have created 1 table with 1 record.( a int)
o created a new undo tablespace with 4m size autoextend on.
o set undo retention=0 and update same table and commit.
after 300 seconds my undo tablespace's size was 251mb
o set undo_retention=99999 and did the same thing
and undo tbs was 400+ mb as a result.
Can you please take a look? In the first case it should not grow
to 251 MB.
10g on linux with LMT manual SSM.
SQL>
SQL> r
1 select file_name,round(bytes/1048576) size_mb , autoextensible
2* from dba_Data_files where tablespace_name = 'UNDO'
FILE_NAME SIZE_MB AUT
-------------------------------------------------------- ---------- ---
/TEST/datafile/o1_mf_undo_2lgqkhgv_.dbf 4 YES
SQL>
SQL> create table t as select rownum a from dual;
Table created.
SQL> get und_test
1 declare
2 v_time int;
3 v_undo int;
4 begin
5 select sum(bytes)/1048576 into v_undo from dba_segments where segment_type like '%UNDO%';
6 ---
7 dbms_output.put_line(' -- Before Starting Undo Size : '||v_undo );
8 execute immediate ' alter system set undo_retention=0 ';
9 ----
10 v_time := dbms_utility.get_time ;
11 while ( ((dbms_utility.get_time - v_time)/100) < 300 )
12 loop
13 update t set a=a;
14 commit;
15 end loop;
16 ----
17 select sum(bytes)/1048576 into v_undo from dba_segments where segment_type like '%UNDO%';
18 dbms_output.put_line(' -- Undo Size with undo_retention=0 : '||v_undo );
19 ----
20 ----- second time with undo_retention = 9000
21 execute immediate ' alter system set undo_retention=9000 ';
22 v_time := dbms_utility.get_time ;
23 while ( ((dbms_utility.get_time - v_time)/100) < 300 )
24 loop
25 update t set a=a;
26 commit;
27 end loop;
28 ----
29 select sum(bytes)/1048576 into v_undo from dba_segments where segment_type like '%UNDO%';
30 dbms_output.put_line(' -- Undo Size with undo_retention=9000 : '||v_undo );
31 ----
32* end;
33 /
-- Before Starting Undo Size : 1
-- Undo Size with undo_retention=0 : 251
-- Undo Size with undo_retention=9000 : 476
PL/SQL procedure successfully completed.
what about 9i
Jagjeet Singh, October 07, 2006 - 1:58 pm UTC
OK, is it because of 10g's automatically tuned retention?
Would I get the same result with 9i?
-- Actually, I do not have 9i for testing purposes.
October 07, 2006 - 6:25 pm UTC
31 ----
32 end;
33 /
-- Before Starting Undo Size : 1
-- Undo Size with undo_retention=0 : 1
-- Undo Size with undo_retention=9000 : 213
PL/SQL procedure successfully completed.
ops$tkyte%ORA9IR2>
undo of insert statemement
Sven, December 01, 2006 - 7:17 am UTC
Hi Tom,
I have 3 short questions:
1. What kind of information is stored in undo segments for an insert statement?
2. What kind of information is stored as redo for the undo generated by an insert statement?
3. What must be read from the undo segments for a block modified by an insert statement in order to construct a consistent read on the modified block?
Thanks,
Sven
December 01, 2006 - 7:44 am UTC
1) delete + rowid
2) update set row = old row where rowid = the rowid (the before image)
3) the delete + rowid - put the block back the way it was
undo of insert statemement
Sven, December 01, 2006 - 10:56 am UTC
I have tested a couple of cases from your 9i/10g architecture book, and if I update a field in a table defined as varchar2(2000), the amount of redo for insert/update/delete is about the same.
If I repeat the same test but the field is defined as char(2000), the difference is obvious... the redo generated by the update is the biggest.
Analogously, for an update will the undo generated consist of an 'anti-update' statement + the before-update block image?
create table t(x int ,y char(2000),z date); -- case 1
create table t(x int ,y varchar(2000),z date); -- case 2
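For anyone trying to reproduce this: the script relies on a redo_size helper view (the one used throughout Tom's books). If you don't have it, it can be created roughly like this:

```sql
-- current session's cumulative redo generation, as a one-row view
create or replace view redo_size
as
select value
  from v$mystat, v$statname
 where v$mystat.statistic# = v$statname.statistic#
   and v$statname.name = 'redo size';
```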
declare
l_redo_size number;
l_cnt number := 200;
procedure report
is
begin
select value-l_redo_size into l_redo_size from redo_size;
dbms_output.put_line( 'redo size = ' || l_redo_size ||
' rows = ' || l_cnt || ' ' ||
to_char(l_redo_size/l_cnt,'99,999.9') ||
' bytes/row' );
end;
begin
select value into l_redo_size from redo_size;
for x in ( select object_id, object_name, created
from all_objects
where rownum <= l_cnt )
loop
insert into t values
( x.object_id, x.object_name, x.created );
commit;
end loop;
report;
select value into l_redo_size from redo_size;
for x in ( select rowid rid from t )
loop
update t set y = lower(y) where rowid = x.rid;
commit;
end loop;
report;
select value into l_redo_size from redo_size;
for x in ( select rowid rid from t )
loop
delete from t where rowid = x.rid;
commit;
end loop;
report;
end;
/
output case 1:
redo size = 105308 rows = 200 526.5 bytes/row
redo size = 104948 rows = 200 524.7 bytes/row
redo size = 105800 rows = 200 529.0 bytes/row
output case 2:
redo size = 563600 rows = 200 2,818.0 bytes/row
redo size = 901080 rows = 200 4,505.4 bytes/row
redo size = 522108 rows = 200 2,610.5 bytes/row
If for an insert statement we save in undo only delete+rowid, how is it possible that the redo generated (2,818.0 bytes/row) is approximately the size of the row?
Does this mean that Oracle 'saves' the inserted row image in undo (or somewhere else)???
Thanks.
Sven
December 01, 2006 - 12:42 pm UTC
umm, the data in the table is recorded in redo, as well as the entry in undo recorded in redo.
hence, when you insert 2000 bytes, you
a) put 2000 bytes on the table block that needs to be logged
b) put delete+rowid into undo (small) which needs to be logged.
so, undo - small,
redo - well, you need to redo the insert, you best have the data no?
Resize Undo datafiles
Deepak, April 08, 2007 - 9:00 am UTC
Hi Tom,
Have the following scenario...
> Have set UNDO_MANAGEMENT = AUTO
> Ran a long transaction.
> The transaction was commited.
> The UNDO tablespace grew a lot.
> Shutdown the instance in normal mode.
> Started the instance.
> Tried to resize the data files of UNDO Tablespace.
> It says data exists beyond the resized limit. Though I tried to resize only by reducing 1MB from the current size.
> No other transactions are running...
My query is...
The UNDO tablespace is supposed to hold the rollback data. As I had committed the transaction and a checkpoint had happened, the UNDO tablespace is supposed to have no data from my previous transaction(s). (Do you agree?)
But when I tried to resize, what prevented me from doing so? Does Oracle not shrink the undo segments automatically? If it does, when?
If I want to resize the UNDO TBS because of space constraints, then which approach should I take...
I need your expertise to clear my doubt.
April 09, 2007 - 10:41 am UTC
I do not agree.
UNDO is used by read transactions as well as write. Every select statement you execute may need to access the undo tablespace in order to provide read consistent results.
In addition to setting undo management = auto, you specify an undo retention period. Oracle will try to keep undo around AT LEAST that long after you commit in order to let SELECT (read) statements that need it, get it.
Also, undo extents will be deallocated in a lazy fashion in the background, when it feels like it. In general, if the undo tablespace has grown to be X units of storage in size, it is going to *stay that size*. You might see free space appear in it over time, but that won't necessarily let you shrink the datafile or anything (you can only shrink a file down to the point where the last extent is - if you have a file:
XffffffffffffffffffffffffffffffX
where the X represents a used extent and the f represents a free extent - the datafile is already as small as it can be - because of that last allocated extent)
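To see how far a given datafile could actually be shrunk, a query along these lines can be used (a sketch - the file_id of 7 and the 8192-byte block size are assumptions; substitute your own values):

```sql
-- Position of the last allocated extent in a datafile: the file cannot
-- be resized below this point (file_id and block size are examples)
SELECT MAX(block_id + blocks - 1) * 8192 AS smallest_possible_bytes
  FROM dba_extents
 WHERE file_id = 7;
```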
You could (but probably should NOT)
a) create a small undo tablespace
b) alter system to point to small undo tablespace
c) wait at least your "undo retention period" after the last transaction that used the old undo tablespace commits
d) drop the old undo tablespace
but, your undo will likely just grow again and again.
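The a) through d) steps above might look like this (the tablespace names, datafile path and sizes are made-up examples):

```sql
-- a) create a small replacement undo tablespace
CREATE UNDO TABLESPACE undo_small
  DATAFILE '/u01/oradata/undo_small01.dbf' SIZE 100M;

-- b) point the instance at the new tablespace
ALTER SYSTEM SET undo_tablespace = undo_small;

-- c) wait at least UNDO_RETENTION seconds after the last transaction
--    that used the old tablespace has committed, then...

-- d) drop the old undo tablespace
DROP TABLESPACE old_undo INCLUDING CONTENTS AND DATAFILES;
```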
UNDO after TRUNCATE...
Kevin T., May 11, 2007 - 2:29 pm UTC
Tom:
We're running 10.2.0.3 and we came upon a funny occurrence the other day.
We have a 10GB UNDO segment, with a AUTO undo management, and 1800 sec retention period.
The table we were working with is fairly big, at 16GB in size. We needed to reload the table so we TRUNCATED it, and then proceeded with our INSERT statement.
However, it wouldn't complete, since it ran out of UNDO. Now, the *best* that I can come up with is that we had a dirty-block scenario, in which Oracle was attempting to save the blocks prior to the truncate into UNDO for read consistency purposes, for sessions started prior to the TRUNCATE.
However, I can't seem to find any documented explanation for this scenario. I see explanations with regard to deletes and other types of transactions, but not a truncate.
I was under the impression that a TRUNCATE included an implicit commit, and that unless the system had an active cursor/query on the table prior to the TRUNCATE, we wouldn't see UNDO since the INSERT would be applying to a blank table.
Could you explain/elaborate as to what might have been occurring?
May 11, 2007 - 3:43 pm UTC
... Now, the *best* that I can come up with, is that we had a dirty-block scenario, ...
nope, doesn't work that way, truncate does not generate undo for the table data you just truncated.
I would guess "indexes" as they tend to produce gobs of undo if you do things slow by slow - but you don't tell us how it works here, you don't tell us what you are doing (how you load)
Undo generation during INSERT after TRUNCATE
Kevin T., May 11, 2007 - 4:30 pm UTC
It's possible that it may have been the indexes. From what I see from the user's statements, they may have set the indexes unusable PRIOR to issuing the TRUNCATE, which I know has caused trouble in the past, by having the side effect of re-enabling the indexes.
The table is 16GB in size, the UNDO was 10GB, and the indexes took about 2GB. Since this is a new system there were no other user sessions, active or otherwise, other than system sessions.
I wouldn't think that 2GB of indexes would push over 10GB in UNDO, regardless of retention time... hence my puzzlement. It seemed by the behaviour and UNDO generation rate that it was generating UNDO on the records, not the indices.
We were just puzzled. The temporary solution was to just drop it all, then do CREATE AS SELECT instead.
May 14, 2007 - 12:35 pm UTC
if they set them unusable AND THEN truncate - they will all become "usable" again. Yes.
It would be easy for 2gb of index to generate gobs of UNDO if you do slow by slow processing.
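That side effect is easy to see for yourself; a sketch (T and T_IDX are assumed names in a scratch schema):

```sql
ALTER INDEX t_idx UNUSABLE;

SELECT status FROM user_indexes WHERE index_name = 'T_IDX';
-- would show UNUSABLE at this point

TRUNCATE TABLE t;

SELECT status FROM user_indexes WHERE index_name = 'T_IDX';
-- would show VALID - the truncate quietly re-enabled the index,
-- so the subsequent load maintains it row by row (and generates
-- index undo while doing so)
```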
Longest running query and UNDO
Naresh, August 14, 2007 - 9:41 am UTC
"For an AUTOEXTEND undo tablespace, the database tunes the undo retention period to be slightly longer than the longest-running query, if space allows."
Above is from the 10G Concepts manual.
Just to be sure, does it mean that, given
- undo_retention=1800 seconds (30 minutes)
- a query in the database is running for 2 hours
then the UNDO will have more than 2 hours "data" at that point? (assuming of course there is space)
August 20, 2007 - 11:07 am UTC
UNDO
Mahalingesh Daddi, August 22, 2007 - 2:01 pm UTC
I am trying to shrink the undo tablespace by
Create a new UNDO tablespace.
CREATE UNDO TABLESPACE undo_ts2
DATAFILE '/directory/undo_ts2_01.dbf' SIZE xxxM;
Modify the database parameter to use the new UNDO tablespace.
ALTER SYSTEM SET undo_tablespace=undo_ts2 SCOPE=BOTH;
Drop the old undo tablespace;
Can you tell me when I can drop the old undo tablespace... ?
August 22, 2007 - 2:45 pm UTC
wait at least until your undo_retention period has expired (eg: if you have 2 hours of undo retention, wait that long)
if there are existing transactions in there, it won't let you drop it and will tell you about that when you try, you just have to wait for them to complete then.
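One way to check whether the old undo tablespace still has anything pending before attempting the drop (a sketch - OLD_UNDO is a placeholder for your old tablespace name):

```sql
-- Any rollback segments in the old tablespace still tied to an
-- open transaction? If this returns rows, keep waiting.
SELECT r.segment_name, r.status, t.start_time
  FROM dba_rollback_segs r, v$transaction t
 WHERE t.xidusn = r.segment_id
   AND r.tablespace_name = 'OLD_UNDO';
```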
UNDO
Mahalingesh Daddi, August 22, 2007 - 4:53 pm UTC
After switching I am getting message in alert log
Undo Tablespace 76 moved to Pending Switch-Out state.
Do I have to kill the sessions using the rollback segments?
Please help.
August 23, 2007 - 10:48 am UTC
just wait, why kill anything, you obviously have enough space right now for both the old and the new (they are there), just be patient.
you may read note 341372.1 on metalink if you really want to kill them (but that would be rude :)
Difference between undo tablespace in 9i and 10g
Jayavelan, September 11, 2007 - 4:14 am UTC
Hi Tom,
Our application had no issues with a 500 MB datafile for the undo tablespace when it was running on a 9i database. We recently migrated our database from 9i to 10g. Now it uses more than 1 GB of undo space for the same application. Please let me know if Oracle is updating more DD base tables in 10g, or if there is any other reason for this issue.
Thanks & Regards,
Jayavelan
September 15, 2007 - 3:20 pm UTC
DD? what the heck is DD
No, the amount of undo did not double between 9i and 10g in general
ops$tkyte%ORA9IR2> create table t as select * from all_objects where rownum <= 20000;
Table created.
ops$tkyte%ORA9IR2> create index t_idx on t(object_name);
Index created.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> update t set object_name = lower(object_name);
20000 rows updated.
ops$tkyte%ORA9IR2> select used_ublk from v$transaction;
USED_UBLK
----------
816
ops$tkyte%ORA9IR2> delete from t;
20000 rows deleted.
ops$tkyte%ORA9IR2> select used_ublk from v$transaction;
USED_UBLK
----------
1663
ops$tkyte%ORA9IR2> insert into t select * from all_objects where rownum <= 20000;
20000 rows created.
ops$tkyte%ORA9IR2> select used_ublk from v$transaction;
USED_UBLK
----------
1774
and in 10g:
ops$tkyte%ORA10GR2> create table t as select * from all_objects where rownum <= 20000;
Table created.
ops$tkyte%ORA10GR2> create index t_idx on t(object_name);
Index created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> update t set object_name = lower(object_name);
20000 rows updated.
ops$tkyte%ORA10GR2> select used_ublk from v$transaction;
USED_UBLK
----------
790
ops$tkyte%ORA10GR2> delete from t;
20000 rows deleted.
ops$tkyte%ORA10GR2> select used_ublk from v$transaction;
USED_UBLK
----------
1619
ops$tkyte%ORA10GR2> insert into t select * from all_objects where rownum <= 20000;
20000 rows created.
ops$tkyte%ORA10GR2> select used_ublk from v$transaction;
USED_UBLK
----------
1807
something else has changed - undoubtedly when you upgraded, you made application changes, or database configuration changes (eg: undo retention, or some other parameter)
1gb of undo is pretty small these days.
OK
Raju, September 30, 2007 - 10:24 am UTC
Hi Tom,
Is there any manual way to flush the contents of undo segments?
I am the only user of the development database, and even if I log out and reconnect it shows a temp space usage of 96%.
I even dropped the temp tables created for intermediate processing, but no use.
Is there any way to reset it to zero?
I am doing a huge insert of about 10 million rows and it fails midway through saying unable to extend space in the temp tablespace.
October 03, 2007 - 2:12 pm UTC
you are confusing me
undo or temp?
and undo and temp do not tend to shrink. We allocate space and then keep using it and reusing it; we do not inefficiently drop and reallocate it over and over and over.
temp will never shrink, but it is all FREE when you log out.
UNEXPIRED/EXPIRED Undo segments
surjam, November 26, 2007 - 1:39 pm UTC
The situation is that there is one segment of 100 GB that is ACTIVE and 25 other EXPIRED/UNEXPIRED segments. And Oracle is not reusing these EXPIRED/UNEXPIRED segments; it errors out with an 'unable to extend' error. This forces allocation of more and more space. Adding more and more space does not make sense, but it looks like there is no other way. Why does this happen?
November 27, 2007 - 2:53 pm UTC
segments are not "expired/unexpired"
extents are - NOT segments.
so, I have no idea what you "measured" and what you are reporting here.
do you have long running transactions? They will prevent an undo segment from wrapping around on itself - it only takes one tiny insert to lock up a bit of an undo segment.
v$transaction would be useful to query - to see how long ago your oldest transaction began.
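For example, something along these lines (a sketch; note that v$transaction.start_time is stored as a string in MM/DD/RR HH24:MI:SS format):

```sql
-- Who has the oldest open transaction, and how much undo is it holding?
SELECT s.sid, s.username, t.start_time, t.used_ublk
  FROM v$transaction t, v$session s
 WHERE s.taddr = t.addr
 ORDER BY TO_DATE(t.start_time, 'MM/DD/RR HH24:MI:SS');
```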
Undo space usage used by session (v$transaction.addr and v$session.taddr)
Charles, June 12, 2008 - 7:32 pm UTC
Tom,
Undo tablespace usage is 93% in production.
Currently, when I looked at v$transaction, I see 14 GB of undo being used... but the undo tablespace usage still shows almost 73 GB used.
%
Tablespace Name KBytes Used Free Used Largest
---------------- ------------ ------------ ------------ ------ ------------
UNDOTBS1 79,564,864 73,992,384 5,572,480 93.0 2,386,816
SQL> select sid, serial#, osuser, taddr, logon_time, a.status, machine, addr, SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
2 from v$session a, v$transaction b
3 where b.addr = a.taddr
4 /
SID SERIAL# OSUSER TADDR LOGON_TIM STATUS MACHINE ADDR SES_ADDR USED_UBLK UNDO_USAGE_MB
---- ---------- ------------------------------ ---------------- --------- ---------- --------------- ---------------- ---------------- ---------- -------------
161 32259 dsadm 00000003F42C93E8 07-JUN-08 ACTIVE dbr0et01 00000003F42C93E8 00000003F0C97DC8 3 .046875
110 16169 dsadm 00000003F73FBF30 13-MAY-08 ACTIVE dbr0et01 00000003F73FBF30 00000003F1E72170 4 .0625
SQL> select addr, SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB from v$transaction
2 /
ADDR SES_ADDR USED_UBLK UNDO_USAGE_MB
---------------- ---------------- ---------- -------------
00000003F73FB8D8 00000003F1E72170 923575 14430.8594
00000003F73FBF30 00000003F1E72170 4 .0625
00000003F42C93E8 00000003F0C97DC8 3 .046875
SQL> select * from v$session where taddr='00000003F73FB8D8' ;
SQL>
SQL> set feedb on
SQL> /
no rows selected
When I join v$session and v$transaction above, I do not find the session (addr=00000003F73FB8D8), but the ses_addr is the same. Can one ses_addr have two transaction addresses?
I do not know why 14 GB of undo is still showing in v$transaction as being used, with no session associated with it.
Here are the SQLs behind the sessions :
INR(110,16169) ospid = 2046 program = uvsh@dbr0et01 (TNS V1-V3)
Tuesday 00:00 Thursday 20:25 last et = 797
UPDATE ADL_MLN_LIST SET MAST_BILL_ID = :2, MAST_TRUNK_GRP_ID = :3, MAIN_LISTED_NUM = :4 WHERE LOC_ID = :1
--------------------
INR(161,32259) ospid = 11116 program = uvsh@dbr0et01 (TNS V1-V3)
Saturday 16:05 Thursday 20:25 last et = 432453
DELETE FROM ADL_MLN_LIST WHERE LOC_ID = :1 AND MAIN_LISTED_NUM = :4
My undo parameters are.
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 25000
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS1
June 12, 2008 - 8:24 pm UTC
could it be an autonomous transaction?
I just ran in one session:
set echo on
declare
l_n number;
begin
loop
select count(*) into l_n from v$session where client_info = 'inserted';
exit when l_n > 0;
dbms_lock.sleep( 2 );
end loop;
end;
/
select sid, serial#, osuser, taddr, logon_time, a.status, machine, addr,
SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
from v$session a, v$transaction b
where b.addr = a.taddr
/
select addr, SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
from v$transaction
/
and in another:
drop table t;
set echo on
exec dbms_application_info.set_client_info( '' );
create table t ( x int, data varchar2(80) );
insert into t
select level, rpad('*',80,'*') from dual connect by level <= 1000;
commit;
delete from t where x > 0;
declare
pragma autonomous_transaction;
begin
dbms_application_info.set_client_info( 'starting' );
insert into t values ( 0, '' );
dbms_application_info.set_client_info( 'inserted' );
dbms_lock.sleep( 10 );
commit;
end;
/
when the insert was done, before the commit, you would see in the first window:
ops$tkyte%ORA10GR2> declare
2 l_n number;
3 begin
4 loop
5 select count(*) into l_n from v$session where client_info = 'inserted';
6 exit when l_n > 0;
7 dbms_lock.sleep( 2 );
8 end loop;
9 end;
10 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select sid, serial#, osuser, taddr, logon_time, a.status, machine, addr,
2 SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
3 from v$session a, v$transaction b
4 where b.addr = a.taddr
5 /
SID SERIAL# OSUSER TADDR LOGON_TIM
---------- ---------- ------------------------------ -------- ---------
STATUS MACHINE
-------- ----------------------------------------------------------------
ADDR SES_ADDR USED_UBLK UNDO_USAGE_MB
-------- -------- ---------- -------------
27 387 tkyte 3F2D7FE0 12-JUN-08
ACTIVE dellpe
3F2D7FE0 3FD531A0 1 .015625
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select addr, SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
2 from v$transaction
3 /
ADDR SES_ADDR USED_UBLK UNDO_USAGE_MB
-------- -------- ---------- -------------
3F2D2AC0 3FD531A0 24 .375
3F2D7FE0 3FD531A0 1 .015625
the big "outer" transaction would not show in the join from v$session to v$transaction because v$session can only show the 'current' transaction - so the join says "that session is using 0.01mb" but in reality it is much more.
but in reality it is much more
Deepak, June 13, 2008 - 12:59 am UTC
Hi Tom,
I could not understand the following part from your response...
*but in reality it is much more*
It will be great if you can please elaborate a bit.
June 13, 2008 - 7:33 am UTC
ops$tkyte%ORA10GR2> select addr, SES_ADDR, USED_UBLK, USED_UBLK*16384/1024/1024 UNDO_USAGE_MB
2 from v$transaction
3 /
ADDR SES_ADDR USED_UBLK UNDO_USAGE_MB
-------- -------- ---------- -------------
3F2D2AC0 3FD531A0 24 .375
3F2D7FE0 3FD531A0 1 .015625
the big "outer" transaction would not show in the join from v$session to v$transaction because v$session can only show the 'current' transaction - so the join says "that session is using 0.01mb" but in reality it is much more.
In the above, when you join v$session to v$transaction by taddr to addr, only one row from v$transaction will come back - so the join of v$session to v$transaction reported that session as using 0.015625mb of undo.
In reality, that session is using 0.375+0.015625mb - that is "much more" - I was demonstrating how a single session could have more than one entry in v$transaction
undo tablepsaces keep growing
Rohit Gupta, June 13, 2008 - 1:11 am UTC
Tom,
Our database has 6 UNDO tablespaces, and for the last few days a team has been doing some batch jobs on this database. We are observing continuous "unable to extend" errors in some UNDO tablespace daily. The specific transaction causing this problem is a "drop column" statement. I am not sure what could be the reason. I have already added autoextensible datafiles to each of these 6 tablespaces, but still the error resurfaces daily.
--> Could you please point out what could be the reason, or what should I check?
--> And is it necessary for all UNDO tablespaces in a database to be of equal size?
I am still able to see a lot of free space (around 10 GB) in every UNDO tablespace, probably because it is a no-transaction period of the day. But during the daytime, when users start doing their testing on this db, the problem starts.
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 14400
undo_tablespace string undo01
Thanks in advance
Rohit
June 13, 2008 - 7:45 am UTC
no, you have 5 inactive undo tablespaces and one active one (undo01)
I'm confused by your description - you say "some undo tablespace", but only one is used.
A reader, October 27, 2008 - 1:12 pm UTC
Hi,
Apologies for asking something basic. Even if you can just direct me to a link, it will be great.
I have a database using undo. I am to update 0.15M rows in a table having 18M rows.
How do I know if the undo size is big enough to carry out the update? It is assumed that there will not be any other activity going on.
Thanks.
October 27, 2008 - 2:14 pm UTC
"it depends"
will you update an indexed column? You'll need a LOT more than if not.
are you updating a 4000 character field to a new 4000 character value? If you are, you'll need a LOT more than if you update a date column.
are you updating 100 columns? If you are, you'll need a LOT more than if you update just 1 of them.
and so on.
2.7 million is pretty small these days, I would suspect any reasonably configured database would be able to deal with it. You could make your undo tablespace datafiles autoextensible with a reasonable (to you reasonable) upper bound on what they can grow to.
Do some test updates and get a feel for how your session statistic "undo change vector size" reacts - that's the measure of the undo generated.
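That statistic can be read for your own session before and after the test update, for example:

```sql
-- Undo generated by the current session so far; run this before and
-- after the test update and subtract the two values
SELECT b.name, a.value
  FROM v$mystat a, v$statname b
 WHERE a.statistic# = b.statistic#
   AND b.name = 'undo change vector size';
```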
How far we can flashback
A reader, January 29, 2010 - 1:44 am UTC
Hi Tom,
My question is: I want to flash back a table as far back as I can go. But I am not able to get any timestamp or SCN that would let me flash the table back to that point.
Given below are the parameters I'm using in my database.
NAME TYPE VALUE
------------------------------------ ----------- ---------
undo_management string AUTO
undo_retention integer 1200
undo_tablespace string UNDOTS
There is a view FLASHBACK_TRANSACTION_QUERY which shows me some information like:
select XID , START_TIMESTAMP , START_SCN , START_SCN ,COMMIT_SCN , COMMIT_TIMESTAMP , TABLE_NAME from FLASHBACK_TRANSACTION_QUERY where TABLE_OWNER = 'SCOTT' order by 2
2 /
XID START_TIMESTAMP START_SCN START_SCN COMMIT_SCN COMMIT_TIMESTAMP TABLE_NAME
---------------- ----------------- ---------------------- ---------------------- ---------------------- ----------------- ------------------------------
0002001B00000BBA 29-JAN-2010 05:19 3623105806745 3623105806745 3623105806751 29-JAN-2010 05:19 RESET_TEST
0006001900000BC6 29-JAN-2010 05:21 3623105806885 3623105806885 3623105806887 29-JAN-2010 05:21 RESET_TEST
As per this view's information I can flash back my table to SCN 3623105806745, but when I do the flashback it actually allows me to go beyond this point as well.
SQL> FLASHBACK TABLE RESET_TEST to scn 3623105806745 - 100;
/
Flashback complete.
Note: I don't want to Enable Flashback
January 29, 2010 - 4:16 pm UTC
the answer is "it totally depends and the answer changes every second the database is up". Some table might have undo in segment ABC and another in segment DEF - and DEF might have just had some extents expired/stolen from it - so the table in ABC could go back further than the one covered by DEF - perhaps.
Try to go back 5 days (theoretical maximum using undo). If that fails, try less than 5 days.
But seriously, look at your undo retention setting, it would give you a good idea how far back is safe to go - you might *might* be able to go further, but it (undo retention) is all you can expect and even then - if and only if you don't have any prematurely expired extents...
Flashback
Aseem Chowdhery, January 30, 2010 - 10:10 am UTC
Given below is the point where it gives me error ORA-01555, meaning the undo image is no longer available.
SQL> FLASHBACK TABLE RESET_TEST to scn 3623105806751 - 63995;
Flashback complete.
SQL> FLASHBACK TABLE RESET_TEST to scn 3623105806751 - 63996;
FLASHBACK TABLE RESET_TEST to scn 3623105806751 - 63996
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12801: error signaled in parallel query server P000
ORA-01555: snapshot too old: rollback segment number 6 with name "_SYSSMU6$" too small
I'm just curious to know: do we have any v$ or maybe x$ view to query this SCN, which keeps changing every second? I'm pretty sure Oracle is maintaining these values somewhere...
February 01, 2010 - 9:56 am UTC
no, we do not maintain these values anywhere. And it isn't a fixed point in time - it isn't until "SCN x".
Ok, lets say you have tables T1 and T2 and you'd like to flash them back to scn 'A' (scn's in my example go from A to Z in order).
In the undo tablespace we have 4 undo segments.
undo segment 1 goes from A-Z
undo segment 2 has transactions from F-Z (we did some big transaction and rolled around in this undo segment)
undo segment 3 has transactions A-Z
undo segment 4 has transactions Q-Z (we did a whopper of a transaction and prematurely expired extents in this one)
Now, we try to flashback table T1 to A. The last time a transaction was performed against table T1 was during SCN 'C'. That transaction was assigned to undo segment 1. The flashback succeeds.
Now, we try to flashback table T2 to A. The last time a transaction was performed against table T2 was during SCN 'M' (much much later than C, hours later!). That transaction was assigned to undo segment 4.
Guess what happens with table T2? We get a 1555.
It is NOT a fixed point in time, it is simply whether the undo for a given chain of transactions still exists - or not.
To get the current SCN...
Pasko, January 31, 2010 - 11:03 am UTC
SELECT dbms_flashback.get_system_change_number FROM dual;
February 01, 2010 - 10:21 am UTC
so???
what were you trying to say with that?
Flashback
Aseem Chowdhery, February 02, 2010 - 9:25 am UTC
Hi Tom,
I hope this is what you were trying to say.. Please confirm if i got it right...
======================== SCN Values A-Z ================>
########################################################################################
# A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Undo1 #
# T1 related transactions are here Flashback table T1 to scn A --- Succeed
########################################################################################
# F G H I J K L M N O P Q R S T U V W X Y Z
Undo2 #
########################################################################################
# A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Undo3 #
########################################################################################
# Q R S T U V W X Y Z
#
Undo4 # T2 related transactions are here Flashback table T2 to scn A --- Ora-01555
########################################################################################
February 02, 2010 - 12:31 pm UTC
I don't know, your picture wasn't worth a thousand words to me.
I wrote:
undo segment 1 goes from A-Z
undo segment 2 has transactions from F-Z (we did some big transaction and rolled around in this undo segment)
undo segment 3 has transactions A-Z
undo segment 4 has transactions Q-Z (we did a whopper of a transaction and prematurely expired extents in this one)
so, if you try to get back to point A in time, it will work if you only need undo from undo segments 1 and 3. It will fail if you need undo from segment 2 or 4.
The furthest back in time you can go for sure with the above would be Q. But you might be able to get back to A, or somewhere in between A and Q.
That is all.
Used undo blocks
Saptarshi, February 13, 2010 - 9:54 am UTC
Hi Tom,
The following two pieces of code perform the same activity. I was always under the impression that the cursor would be less efficient; however, here the cursor is using a much smaller number of undo blocks.
declare
cursor cur_tab_half_million is
select rowid from tab_half_million;
begin
for cur_tab_half_million_rec in cur_tab_half_million
loop
update tab_half_million set owner='SAP' where rowid=cur_tab_half_million_rec.rowid;
end loop;
end;
--5598 undo blocks used
update tab_half_million set owner='SAP'
--9399 undo blocks used
The table contains half million rows and the number of undo blocks is obtained from v$transaction.used_ublk.
No other user is using the database.
Please can you comment? It therefore seems the anonymous block is more efficient in this case.
February 16, 2010 - 12:45 pm UTC
which one ran considerably faster than the other? and used lots less cpu? and ......
Meaning, which one in reality would you want to use? Trace it and tell me which approach you think is "more efficient"
ops$tkyte%ORA10GR2> create table t as select * from all_objects;
Table created.
ops$tkyte%ORA10GR2> insert into t select * from t;
50054 rows created.
ops$tkyte%ORA10GR2> insert into t select * from t;
100108 rows created.
ops$tkyte%ORA10GR2> insert into t select * from t;
200216 rows created.
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2> select count(*) from t;
COUNT(*)
----------
400432
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
2 cursor cur_tab_half_million is
3 select rowid from t tab_half_million;
4 begin
5 for cur_tab_half_million_rec in cur_tab_half_million
6 loop
7 update t tab_half_million set owner='SAP' where
8 rowid=cur_tab_half_million_rec.rowid;
9 end loop;
10 end;
11 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> select used_ublk from v$transaction;
USED_UBLK
----------
4660
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly statistics
ops$tkyte%ORA10GR2> select * from t;
400432 rows selected.
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
33857 consistent gets
22 physical reads
139480 redo size
19047942 bytes sent via SQL*Net to client
294045 bytes received via SQL*Net from client
26697 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
400432 rows processed
ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> update t set owner = 'XYZ';
400432 rows updated.
ops$tkyte%ORA10GR2> select used_ublk from v$transaction;
USED_UBLK
----------
5652
ops$tkyte%ORA10GR2> commit;
Commit complete.
a tkprof would show:
declare
cursor cur_tab_half_million is
select rowid from t tab_half_million;
begin
for cur_tab_half_million_rec in cur_tab_half_million
loop
update t tab_half_million set owner='SAP' where
rowid=cur_tab_half_million_rec.rowid;
end loop;
end;
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 21.15 22.34 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 21.16 22.35 0 0 0 1
********************************************************************************
SELECT ROWID FROM T TAB_HALF_MILLION
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4005 1.13 1.16 0 154365 0 400432
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4007 1.13 1.16 0 154366 0 400432
********************************************************************************
UPDATE T TAB_HALF_MILLION SET OWNER='SAP' WHERE ROWID=:B1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 400432 61.39 61.57 0 400468 410186 400432
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 400433 61.39 61.57 0 400468 410186 400432
********************************************************************************
versus
update t set owner = 'XYZ'
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 6.41 7.57 2 24516 417928 400432
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6.41 7.57 2 24517 417928 400432
Now, you tell me, which is more efficient?
Saptarshi, February 16, 2010 - 2:37 pm UTC
Yes, got your point. Thanks!
Basically, that significant difference in undo is something that I didn't expect.
estimating size of required UNDO tablespace
VS, March 09, 2010 - 3:36 am UTC
Dear Tom,
good day to you. We are deleting data from a table which has roughly 20 million records; we would be deleting approximately half the data. There is a job written to do the cleanup from time to time, but it had some issues due to which the data was not deleted and the data kept growing. Finally we have to do this with a one-time script while the job is being fixed to run on a daily basis.
While performing the cleanup with the one-time script, which is a single delete statement, we are getting ORA-30036. I have been told to delete the data in chunks, which I believe is not the best way to perform the task.
The UNDO tablespace stands at 9 GB and I have requested that it be extended. The DBA has asked for the required UNDO tablespace size and, looking at the table size which comes out to 25 GB (using dba_segments and dba_lobs),
I have requested the DBA to make the UNDO tablespace at least 15 GB. I wanted to know: is this the correct way to get a rough estimate of the required UNDO tablespace size for such activities? If not, can you please share how to estimate the size of the UNDO tablespace?
Thanks a lot for your help on this.
March 09, 2010 - 11:49 am UTC
for a purge, if you can live with having the data disappear in pieces (the people querying won't be bothered by that), deleting by piece is likely ok.
You could use create table as select/index/grant/constrain/drop old/rename new instead of delete perhaps?
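A sketch of that approach (all object names and the WHERE clause are hypothetical); since CREATE TABLE AS SELECT generates almost no undo for the copied rows, it sidesteps the ORA-30036 entirely:

```sql
-- keep only the rows you want to retain
CREATE TABLE big_table_new AS
  SELECT * FROM big_table
   WHERE created >= ADD_MONTHS(SYSDATE, -12);

-- recreate indexes, grants and constraints on BIG_TABLE_NEW here...

DROP TABLE big_table;
RENAME big_table_new TO big_table;
```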
thanks for your response
VS, March 10, 2010 - 12:13 am UTC
Dear Tom,
good day to you. First of all, thanks a lot for taking the time to look into my query. I thought of using create table as select, but the data to be retained and to be removed is in the ratio of 50:50.
I will move forward with deletes in chunks, but could you please comment on how to get a rough estimate of the UNDO tablespace size that might be required for these kinds of activities?
Once again thanks for your time and sharing your knowledge.
March 10, 2010 - 8:43 am UTC
... but then the data to be retained and
to be removed is in the ratio of 50:50. ...
so why did you discount it? that seems like a point to give serious consideration to it. doesn't it???
... if you can please comment on the
way of getting estimated UNDO tablespace size that might be required for these
kind of activities.
....
benchmark, it depends on the organization of the rows on disk, number of indexes, size of rows, etc.
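One way to benchmark, as suggested above: run the DML against a representative sample in a test system and, before committing, read the undo the transaction actually holds from v$transaction. This sketch assumes an 8KB block size (check db_block_size on your system):

```sql
-- In the test session, after running (but BEFORE committing) the sample DML:
SELECT used_ublk,                                  -- undo blocks held by this transaction
       used_ublk * 8192 / 1024 / 1024 AS undo_mb   -- assumes 8KB blocks
  FROM v$transaction;
```

Scale the measured figure up from the sample size to the full purge for a rough undo-space estimate.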
Abnormal undo extension
Avnish, March 11, 2010 - 11:36 pm UTC
Hi Tom,
Thanks again for all your valuable tips.
There is something which I fail to understand. We have a 9.2.0.8 DB.
The UNDO data file has extended to 32 GB as seen below:
SQL> select FILE_NAME,TABLESPACE_NAME,MAXBYTES/1024/1024,BYTES/1024/1024,autoextensible,INCREMENT_BY
2 from dba_data_files where tablespace_name='APPS_UNDOTS1';
FILE_NAME TABLESPACE_NAME MAXBYTES/1024/1024 BYTES/1024/1024 AUT INCREMENT_BY
---------------------------------------- -------------------- ------------------ --------------- --- ------------
/db03/oradata/ebt/undo01.dbf APPS_UNDOTS1 32767.9844 32767.9844 YES 4096
SQL> select tablespace_name,sum(bytes)/1024/1024 from dba_free_space where tablespace_name='APPS_UNDOTS1' group by tablespace_name;
TABLESPACE_NAME SUM(BYTES)/1024/1024
------------------------------ --------------------
APPS_UNDOTS1 229.375
1 row selected.
SQL> select status, sum(bytes)/1024/1024 "SIZE IN MB" from dba_undo_extents group by status;
STATUS SIZE IN MB
--------- ----------
ACTIVE 20.1171875
EXPIRED 2693.32813
UNEXPIRED 11.40625
3 rows selected.
The free space in the undo tablespace is only 229MB and the used space as shown by dba_undo_extents is approx 2.7 GB.
Now the free space and used space do not add up to the datafile size of 32GB.
I tried this in another instance and found that it does add up, but not in this instance.
Below are the undo parameters used:
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 1800
undo_suppress_errors boolean FALSE
undo_tablespace string APPS_UNDOTS1
March 12, 2010 - 3:25 pm UTC
what does dba_segments have to say about what is in there?
Abnormal undo extension
Avnish, March 15, 2010 - 1:11 am UTC
Hi Tom,
I see one of the undo segments, "_SYSSMU17$", of size 32 GB. I don't know how that could have happened.
SQL> select owner,segment_name,SEGMENT_TYPE,BYTES/1024/1024 from dba_segments
2 where tablespace_name='APPS_UNDOTS1';
OWNER SEGMENT_NAME SEGMENT_TYPE BYTES/1024/1024
------------------------------ -------------------- ------------------ ---------------
SYS _SYSSMU1$ TYPE2 UNDO 44.1171875
SYS _SYSSMU2$ TYPE2 UNDO 43.1171875
SYS _SYSSMU3$ TYPE2 UNDO 19.1171875
SYS _SYSSMU4$ TYPE2 UNDO 12.1171875
SYS _SYSSMU5$ TYPE2 UNDO 7.1171875
SYS _SYSSMU6$ TYPE2 UNDO 6.1171875
SYS _SYSSMU7$ TYPE2 UNDO 15.1171875
SYS _SYSSMU8$ TYPE2 UNDO 5.1171875
SYS _SYSSMU9$ TYPE2 UNDO 5.1171875
SYS _SYSSMU10$ TYPE2 UNDO 8.1171875
SYS _SYSSMU11$ TYPE2 UNDO 4.1171875
SYS _SYSSMU12$ TYPE2 UNDO 4.1171875
SYS _SYSSMU13$ TYPE2 UNDO 14.1171875
SYS _SYSSMU14$ TYPE2 UNDO 3.1171875
SYS _SYSSMU15$ TYPE2 UNDO 1.1171875
SYS _SYSSMU16$ TYPE2 UNDO 1.2421875
SYS _SYSSMU17$ TYPE2 UNDO 32215.4297
.
.
.
This is not full list, there are 299 segments.
March 15, 2010 - 11:07 am UTC
easy - very very easy. Very very natural and normal
You had a big transaction happen in there. Or, just a small one that hung around for a long long long time - preventing a wrap around from happening (causing that undo segment to grow and hence causing the datafile to grow)
Abnormal undo extension
Avnish, March 15, 2010 - 1:48 am UTC
Hi Tom,
As already mentioned, one of the segments is 32 GB in size. But it has only 384 extents, all of them expired, and the collective size is only 2.4 GB.
Right now I have absolutely no idea what is inside '_SYSSMU17$'.
SQL> select owner,segment_name,segment_type,bytes/1024/1024 "SIze in MB", tablespace_name from dba_segments where segment_name='_SYSSMU17$'
2 and tablespace_name='APPS_UNDOTS1';
OWNER SEGMENT_NAME SEGMENT_TYPE SIze in MB TABLESPACE_NAME
------------------------------ --------------- ------------------ ---------- ------------------------------
SYS _SYSSMU17$ TYPE2 UNDO 32215.4297 APPS_UNDOTS1
SQL>select status, sum(bytes)/1024/1024 "SIZE IN MB", count(*) from dba_undo_extents
where segment_name='_SYSSMU17$'group by status
SQL> /
STATUS SIZE IN MB COUNT(*)
--------- ---------- ----------
EXPIRED 2404.11719 384
Abnormal undo extension
Avnish, March 16, 2010 - 6:54 am UTC
Hi Tom,
Thanks for the prompt response. I agree with you that a big transaction could have caused the segment to extend so much (32GB), by adding extents until the datafile reached 32GB. The transaction is long gone, but dba_undo_extents shows just 384 extents, all of them EXPIRED and collectively of size 2.4GB. Why am I not seeing 32GB of EXPIRED undo extents in dba_undo_extents for the segment _SYSSMU17$?
March 16, 2010 - 10:15 am UTC
run that for us please.
ops$tkyte%ORA11GR2> select a.*, b.bytes, sum_u_bytes-b.bytes seg_diff
2 from (
3 select segment_name, sum(dbae) sum_dbae, sum(dbau) sum_dbau, sum(dbae_bytes) sum_e_bytes, sum(dbau_bytes) sum_u_bytes, sum(dbae)-sum(dbau) cnt_diff, sum(dbae_bytes)-sum(dbau_bytes) bytes_diff
4 from
5 (
6 select segment_name,
7 count(*) dbae, sum(bytes) dbae_bytes,
8 0 dbau , 0 dbau_bytes
9 from dba_extents
10 where segment_name like '_SYSSMU%'
11 and owner = 'SYS'
12 group by segment_name
13 union all
14 select segment_name,
15 0, 0,
16 count(*), sum(bytes)
17 from dba_undo_extents
18 group by segment_name
19 )
20 group by segment_name
21 ) a, dba_segments b
22 where a.segment_name = b.segment_name
23 and b.owner = 'SYS'
24 order by a.segment_name
25 /
SEGMENT_NAME SUM_DBAE SUM_DBAU SUM_E_BYTES SUM_U_BYTES CNT_DIFF BYTES_DIFF BYTES SEG_DIFF
------------------------------ ---------- ---------- ----------- ----------- ---------- ---------- ---------- ----------
_SYSSMU10_4131489474$ 3 3 1179648 1179648 0 0 1179648 0
_SYSSMU11_2618034337$ 11 11 6619136 6619136 0 0 6619136 0
Undo tablespace query
A reader, September 09, 2010 - 12:48 am UTC
I have an undo tablespace of 4GB, and there are no running transactions at all (verified through v$transaction).
If I query dba_undo_extents I get this output:
SQL> select status,sum(bytes),count(*) from DBA_UNDO_EXTENTS group by status;
STATUS SUM(BYTES) COUNT(*)
--------- ---------- ----------
UNEXPIRED 4259840000 1018
I want to know how I can reclaim the used space, as all transactions have been completed.
September 09, 2010 - 8:56 pm UTC
why? what would you do with this space? it is in your UNDO TABLESPACE - nothing else can use it other than undo.
so, tell me, what would "reclaiming this space" mean???
if 4gb is that important to you - just buy another USB pen drive and stick it into your system. It is a rather tiny bit of space.
but even if your undo was 400gb - what does "reclaiming this space" mean to you - what would reclaiming it do for you?
Undo tablespace query
Raghavendra Jha, September 10, 2010 - 1:22 am UTC
I think there is some problem in understanding my question. I will put it again...
As per my understanding of undo segments and tablespaces, undo stores the last checkpointed data to ensure consistency and for rollback.
If I have a 4GB undo tablespace and it shrinks to 1GB, after I commit or rollback the transaction it should again provide me a tablespace of size 4GB, or not?
If not, why? That is my concern, as I want to understand the internal working of the undo tablespace.
Please help in correcting me.
September 13, 2010 - 7:07 am UTC
... it stores the last
checkpointed data to ensure consistency and for rollback. ...
no it doesn't.
Conceptually what it stores is:
a delete by rowid for every row you have inserted in your transaction.
an insert of the data you deleted, for every row you have deleted in your transaction.
an update to "unupdate" the data you modified - for every row you have updated in your transaction.
It is not the last checkpointed copy of data as the last checkpointed version of data might not have any relation to the data you modified - the checkpoint might have taken place for a block an hour ago, and the block might have been modified and committed 500 times (or any number of times) since then.
Undo doesn't shrink, why should it?
If you are interested in how undo is managed, I can recommend a book - I spent an entire chapter talking about undo and redo and how they work, what they do, what is in there and so on (see asktom.oracle.com - Expert Oracle Database Architecture) or the Server Concepts guide (free on otn.oracle.com).
Your undo tablespace will have undo segments created in it; we create those segments. We retain undo in this tablespace according to your undo_retention period. We create segments (and allocate / deallocate extents for them) automagically - the tablespace grows - but tablespaces do not shrink by themselves (it is a time consuming process to shrink them, something an admin may do if they wish during a maintenance window). So, if your undo tablespace's datafiles grew to 4gb in size - 4gb they will be unless and until you do something (but I would recommend doing NOTHING; 4gb is tiny, and they undoubtedly will grow to 4gb again when your activity picks up again - meaning you would just be wasting your time)
@Raghavendra Jha
Michel Cadot, September 10, 2010 - 3:19 am UTC
Thanks
Roy, January 11, 2011 - 2:32 am UTC
Most entertaining and educational thread. Very much appreciated, thanks.
A reader, January 11, 2011 - 6:56 am UTC
Hi Tom,
I was trying to insert in a table from another table of size 10GB(with indexes) using
insert into ... select * from
This is abnormally using undo space, even greater than 10GB (there were no other sessions using undo). Can you let me know why?
Also, please suggest some docs that explain the usage of undo for different DML activities.
Thanks in advance.
January 12, 2011 - 10:01 am UTC
for an insert - you will generate "delete rowid" in undo. The insert into the TABLE doesn't generate that much undo. As demonstrated (stage is a copy of all_objects):
ops$tkyte%ORA11GR2> create table t as select * from stage where 1=0;
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t select stage.* from stage,(select rownum from dual connect by rownum <= 14);
1011136 rows created.
ops$tkyte%ORA11GR2> select used_ublk*8/1024 "MB" from v$transaction;
MB
----------
3.734375
ops$tkyte%ORA11GR2> select blocks*8/1024 from user_segments where segment_name = 'T';
BLOCKS*8/1024
-------------
120
So, if you saw a large amount of undo generated (I'm not sure you did, you don't show your work, I don't know how you "measured" the amount of undo YOUR transaction generated - you might have done that completely wrong) - you probably inserted into a heavily indexed table:
ops$tkyte%ORA11GR2> truncate table t;
Table truncated.
ops$tkyte%ORA11GR2> create index t_idx1 on t(owner);
Index created.
ops$tkyte%ORA11GR2> create index t_idx2 on t(object_name);
Index created.
ops$tkyte%ORA11GR2> create index t_idx3 on t(object_id);
Index created.
ops$tkyte%ORA11GR2> create index t_idx4 on t(object_type);
Index created.
ops$tkyte%ORA11GR2> create index t_idx5 on t(timestamp);
Index created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t select stage.* from stage,(select rownum from dual connect by rownum <= 14);
1011136 rows created.
ops$tkyte%ORA11GR2> select used_ublk*8/1024 "MB" from v$transaction;
MB
----------
296.101563
ops$tkyte%ORA11GR2> select blocks*8/1024 from user_segments where segment_name = 'T';
BLOCKS*8/1024
-------------
120
quite a difference...
UNDO on DPImport (append) into IOT
Jim Cox, April 11, 2011 - 1:07 pm UTC
Hi Tom
I experienced something like what was mentioned above when trying to do a Data Pump import (append) into an IOT with a constraint and one index.
It used up all of the UNDO tablespace I had allocated:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'
My question is: how can I do a Data Pump import with APPEND from a non-IOT into an IOT, given that I cannot drop the constraint on the IOT?
SQL> ALTER TABLE LLTEST.DTREEANCESTORS DISABLE CONSTRAINT IX_ANCESTORS_1;
ALTER TABLE LLTEST.DTREEANCESTORS DISABLE CONSTRAINT IX_ANCESTORS_1
*
ERROR at line 1:
ORA-25188: cannot drop/disable/defer the primary key constraint for index-organized tables or sorted hash cluster
Extracted DDL:
CREATE TABLE "LLTEST"."DTREEANCESTORS"
( "DATAID" NUMBER(10,0) NOT NULL ENABLE,
"ANCESTORID" NUMBER(10,0) NOT NULL ENABLE,
CONSTRAINT "IX_ANCESTORS_1" PRIMARY KEY ("ANCESTORID", "DATAID") ENABLE
) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LIVELINK_DATA"
PCTTHRESHOLD 50;
CREATE UNIQUE INDEX "LLTEST"."IX_ANCESTORS_2" ON "LLTEST"."DTREEANCESTORS" ("DATAID", "ANCESTORID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LIVELINK_INDEX" PARALLEL 1 ;
ALTER INDEX "LLTEST"."IX_ANCESTORS_2" NOPARALLEL;
according to your statement - and as I found out the hard way:
Followup June 15, 2003 - 9am Central time zone:
2/3) ..... sounds like you must be importing into an indexed table -- suggest you don't do that. import into an UNINDEXED table. If you can, put the db in noarchivelog mode for the duration of the import (avoids redo generation on the index creates) or create the indexes yourself using NOLOGGING
What are my options?
Thanks
April 13, 2011 - 9:04 am UTC
you'll need more undo. Index structures cannot be "appended" to - data in an index "goes in a specific location", it cannot just be added (appended) to the "end" of an index.
The IOT is an index, you cannot "disable" it - it doesn't even make sense. You'll need sufficient undo allocated for your operation.
UNDO on DPImport (append) into IOT (Thanks)
Jim Cox, April 19, 2011 - 2:05 pm UTC
Thanks Tom
appreciate the advice as usual
Jim
undo tablespace grows
Sita, July 25, 2011 - 11:09 pm UTC
Hi Tom:
Oracle 11.2.0.1.
Just for testing purposes, I was updating a CHAR(1) column in a table from 'A' to 'B'. This table has 270 million rows.
Surprisingly, while running this update, the UNDO tablespace was growing very big (>29GB). The table is only 57 GB.
Question:
1. When we update a CHAR(1) column value from 'A' to 'B' in an entire table, does Oracle store the entire row, only the updated column's old value, or the entire block?
Please clarify.
In my case, I am running a parallel update (PDML) on the entire table.
July 27, 2011 - 8:10 pm UTC
why is this surprising? You are updating every row in a table it sounds like - from one value to another - that is going to have a lot of overhead associated with it.
1) we store changed bytes and transaction information in the undo tablespace for each row - so, it will NOT be a single byte stored there - it will be many bytes per row.
Many times, if you are affecting most of the data, a create table as select - instead of an update/delete - would be much more efficient.
undo tablespace grows
Sita, July 27, 2011 - 10:32 pm UTC
Thanks Tom.
<< 1) we store changed bytes and transaction information in the undo tablespace for each row - so, it will NOT be a single byte stored there - it will be many bytes per row.
Does it mean Oracle stores the entire row or block in the undo segment even if we update a single column in the entire table?
July 28, 2011 - 7:15 pm UTC
we store
a) changed bytes (which is not a row)
b) transaction information (which is not a row)
no, we do not store the entire row, we store changed bytes and transaction information.
ora-30036
SVS, August 26, 2011 - 8:28 am UTC
Below are the settings on the prod server for UNDO
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string UNDOTBS1
the UNDOTBS1 is made up of 3 files
kn02/..../undotbs01.dbf 5GB without AUTOEXTEND
kn03/..../undotbs_02.dbf 5GB without AUTOEXTEND
kn07/..../undotbs_07.dbf 20GB without AUTOEXTEND
there is an overnight batch cleanup job running, and it hits ORA-30036 every time.
I made sure that there were no other transactions running other than the cleanup.
I have observed that the UNDOTBS tablespace is 100% full throughout the day. I don't think that is an issue - or is it?
The alarming thing is on filesystem (Solaris in this case) all the partitions where the files reside are 100% full.
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/ora-prsqm2-dg/kn01vol 10485760 3645005 6413737 37% /kn01
/dev/vx/dsk/ora-prsqm2-dg/kn04vol 164626432 164260536 343031 100% /kn04
/dev/vx/dsk/ora-prsqm2-dg/kn02vol 164626432 164626432 0 100% /kn02
/dev/vx/dsk/ora-prsqm2-dg/kn03vol 164626432 164104752 489079 100% /kn03
/dev/vx/dsk/ora-prsqm2-dg/kn05vol 164626432 164118984 475736 100% /kn05
/dev/vx/dsk/ora-prsqm2-dg/kn06vol 206569472 205658942 853689 100% /kn06
/dev/vx/dsk/ora-prsqm2-dg/kn08vol 206569472 206064257 473770 100% /kn08
/dev/vx/dsk/ora-prsqm2-dg/kn07vol 206569472 206433515 127527 100% /kn07
So the way I understand it, UNDOTBS would reuse the space (in my case 5G+5G+20G=30GB) provided no other active transactions are going on.
If there are, then it would try to allocate more space from the file system, which in my case is 100% full. Is this what's happening in my case?
It's URGENT... please let me know if you need more details.
Thanks a million !!
SVS
August 30, 2011 - 4:48 pm UTC
undo will normally be 100% full all day, every day, all of the time. We allocate it once and just reuse it - without having to go through the expense of reallocating it.
you do not have autoextend enabled for these files, so any discussion of autoextend is moot - we won't do it because you told us not to.
Either:
Your batch job needs more than 30gb of undo space, which is not unreasonable for a batch job of any size. You need to give more space to the undo tablespace.
You have uncommitted transactions you are not aware of (query v$transaction) and they are preventing us from reusing some large portion of the undo tablespace. You either need more undo tablespace allocated or you need to get rid of those transactions once you figure out what they are.
Flushing UNDOTBS
SVS, August 28, 2011 - 5:30 am UTC
I have reset UNDO_RETENTION=0 with the intention of freeing UNDOTBS, which is always 100% full.
There are no active transactions right now. I ran the below query to verify:
SELECT s.sid , s.username , t.used_ublk
FROM v$transaction t
, v$session s
WHERE 1 = 1
AND t.ses_addr = s.saddr
but the UNDOTBS refuses to budge below 100% ....:)
1. what could be the reason here ?
2. Is there a way to force an UNDO flush ?
3. What are my options/alternatives ?
August 30, 2011 - 4:59 pm UTC
undo_retention doesn't do that when set to 0, it doesn't do that in general when set to any value.
the undo tablespace will almost certainly be 100% utilized, we allocate it once and then just keep reusing it. There is no sense to "free it up" just to reallocate it again later.
If you really want to shrink your undo tablespace (which should be running at 100% pretty much always, just like temp), you would
a) create a new smaller one
b) alter system to use the smaller one
c) after a while drop the old one - I'd wait quite a while to avoid ora-1555's.
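Those three steps correspond to something like the following (the file path, size, and tablespace names are placeholders for whatever your system uses):

```sql
-- a) create a new, smaller undo tablespace
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/undotbs2_01.dbf' SIZE 2G AUTOEXTEND ON;

-- b) switch the instance to it
ALTER SYSTEM SET undo_tablespace = undotbs2;

-- c) later, once nothing can still need the old undo, drop the original
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```

As noted, step (c) should wait long enough that no long-running query or flashback operation still depends on undo in the old tablespace, to avoid ORA-1555.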
UNDO Tablespace-Auto Extend
Jim Cox, October 19, 2011 - 9:20 am UTC
Hi Tom
just wondering if there is any rule of thumb or algorithm for determining how much to auto-extend the UNDO tablespace? I currently have an undo tablespace of 4gb and have seen it grow to 5-6gb at times.
Thanks
Jim
October 19, 2011 - 6:25 pm UTC
I myself would let it grow to as large as I felt comfortable. Which in general is many 10's of gigabytes if necessary.
Because if you don't let it grow one of two things will happen:
a) you'll get ora-1555's because your undo retention won't be obeyed.
b) you'll get errors in applications either because you are using guaranteed retention (and we cannot retain your undo) or because of "unable to extend"
In order to protect yourself from a "runaway" transaction generating way more undo than anyone anticipated (eg: that application that "forgot" a where clause on a delete or update - it has a bug in it) you could/should use a resource plan that limits the amount of undo generated to some reasonable amount.
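That limit is set through the UNDO_POOL directive of the Database Resource Manager, which caps the total active undo a consumer group may hold. A minimal sketch, assuming made-up plan and group names (UNDO_POOL is specified in KB):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'LIMIT_UNDO_PLAN',
    comment => 'cap undo per consumer group');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'LIMIT_UNDO_PLAN',
    group_or_subplan => 'OLTP_GROUP',     -- hypothetical consumer group
    comment          => 'cap undo at roughly 1GB',
    undo_pool        => 1048576);         -- value is in KB
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'LIMIT_UNDO_PLAN',
    group_or_subplan => 'OTHER_GROUPS',   -- required catch-all directive
    comment          => 'no undo cap for everyone else');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```

When the group hits its pool, the offending DML fails with an error instead of silently consuming the whole undo tablespace; batch accounts can be placed in a group with a larger (or no) pool.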
You should find on most systems that the undo tablespace will grow to some size and then stay there - until you upgrade the applications, at which point it might change again.
Remember - setting a generous undo retention and letting us grow to hold that can really save your butt sometimes. Flashback query, Flashback Table - both rely on that. Rather than use your backups to fix some data error introduced by a bug in an application - you can use flashback to easily and quickly repair things.
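For instance, with enough retained undo you can query or restore the past directly; the table name and time window here are purely illustrative:

```sql
-- See the rows as they were an hour ago (needs undo covering that window)
SELECT *
  FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
 WHERE order_id = 12345;

-- Or repair the table in place (requires ROW MOVEMENT enabled on the table)
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
```

Both operations fail with ORA-1555-style errors if the undo for that window has already been overwritten, which is exactly why a generous retention pays off.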
UNDO Tablespace-Auto Extend-Thanks
Jim Cox, October 25, 2011 - 3:00 pm UTC
Thanks for the assist Tom
Jim
UNDO_RETENTION Sql
Jim Cox, October 27, 2011 - 7:02 pm UTC
Hi Tom
just wanted your take on this UNDO_RETENTION algorithm I found in my research
I found on a web site:
The following query will help you to optimize the UNDO_RETENTION parameter:
Calculate UNDO_RETENTION for given UNDO Tabespace
optimal undo retention = actual undo size / (db_block_size * undo_block_per_sec)
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size';
###########
Calculate Needed UNDO Size for given Database Activity
undo size = undo retention * db_block_size * undo_block_per_sec
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
(TO_NUMBER(e.value) * TO_NUMBER(f.value) *
g.undo_block_per_sec) / (1024*1024)
"NEEDED UNDO SIZE [MByte]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size';
Thanks
October 28, 2011 - 6:52 am UTC
why not just set self tuning undo retention?
http://docs.oracle.com/docs/cd/E11882_01/server.112/e25494/undo002.htm#ADMIN10180
The problem with the above approach is that undo retention is based on the amount of time it takes for your longest running query AND/OR your business requirement to support flashback queries.
It is not based on the amount of undo generated in a given period of time. That metric would help you estimate the disk space required but doesn't get you anywhere near knowing what to set undo_retention to.
(you should google for: Calculate UNDO_RETENTION for given UNDO Tabespace - with the SPELLING ERROR. How funny that exact spelling mistake is made by so many people. Lots of plagiarizers out there.)
UNDO_RETENTION Sql-Thanks
Jim Cox, November 01, 2011 - 1:37 pm UTC
Hi Tom
thanks again for your time on this
Appreciate it
Jim
Alexander, January 26, 2012 - 9:50 am UTC
Hi,
Hoping you can explain something for me; I'm using this query to identify who's using undo:
SELECT SID,
Y.USERNAME,
Y.OSUSER,
X.LOG_IO,
X.PHY_IO,
ROUND(USED_UBLK*(SELECT VALUE/1024 FROM V$PARAMETER WHERE NAME = 'db_block_size')/1024,2)
"UNDO MB",
X.START_TIME,
Y.STATUS
FROM V$TRANSACTION X,
V$SESSION Y
WHERE SADDR = SES_ADDR
/
I had a huge transaction take up almost all of my 40GB of undo. We killed it so I could tune it. This took a long time to accumulate, so it's still rolling back.
I waited until I had about 30GB free (that transaction still had another 10GB to roll back). But for reasons I can't explain, I got ORA-30036: unable to extend segment by 4 in undo tablespace 'UNDOTBS1' in another session while trying to tune another bad statement. This is the highest it ever got based on the query I supplied:
(lx-eaoracl-t03p:oracle)> oundowho
DATABASE (OCP35T) USERS USING UNDO 2012/01/26 10:13:23
USER
SID NAME OSUSER LOG_IO PHY_IO UNDO MB START_TIME STATUS
---------- ------------------------- ------------------------- ---------- ---------- ---------- -------------------- --------
4729 OPS$ORACLE oracle 12173926 371024 297.5 01/26/12 10:09:16 ACTIVE
1892 BSARA_DW -643823875 75893633 10220.47 01/23/12 18:22:37 ACTIVE
I don't get it. The OPS$ORACLE is me in the other session.
January 31, 2012 - 8:31 am UTC
I had a huge transaction take up almost all of my 40GB of undo. We killed it so I could tune it. This took a long time to accumulate, so it's still rolling back.
you could use the resource manager to set up an undo quota limit - so as to prevent this from ever happening again. If you *know* all of your transactions should generate less than "X bytes" of undo - set that as a hard limit - if they hit it, bamm, they are dead before it gets out of control. Very useful for an OLTP system - you could still have batch accounts that have higher quotas if you like as well
I'm not sure what you want me to comment on here though - until the transaction completed - the undo is all "needed", you'd have to wait for it to completely complete before the space is really 'free'.
Alexander, January 31, 2012 - 9:48 am UTC
"I'm not sure what you want me to comment on here though - until the transaction completed - the undo is all "needed", you'd have to wait for it to completely complete before the space is really 'free'."
That's exactly what I was looking for. I didn't know that. V$TRANSACTION shows the space as being released.
undo_retention causing execution slowness?
A reader, April 13, 2012 - 11:05 am UTC
Hello,
In our 11.1.0.7 2-node RAC database, undo_retention is set to 167,768. I am not sure whether the execution of my UPDATE statement is slow due to a lack of space in the undo tablespace.
Currently my update is running. The event and time_waited in gv$session_event don't show any wait that is close to the total elapsed time (so far). I also enabled a 10046 trace, and I see entries appear in the trace file at a very slow pace.
A sample of my raw trace file:
*** 2012-04-13 15:59:00.741
WAIT #3: nam='gc cr multi block request' ela= 46 file#=606 block#=365345 class#=1 obj#=7684800 tim=11753980749109
WAIT #3: nam='gc cr multi block request' ela= 52 file#=606 block#=365345 class#=1 obj#=7684800 tim=11753980751907
WAIT #3: nam='gc cr multi block request' ela= 501 file#=606 block#=365345 class#=1 obj#=7684800 tim=11753980753728
WAIT #3: nam='db file scattered read' ela= 9307 file#=606 block#=365330 blocks=16 obj#=7684800 tim=11753980765350
*** 2012-04-13 15:59:13.362
WAIT #3: nam='gc cr multi block request' ela= 104 file#=606 block#=365361 class#=1 obj#=7684800 tim=11753993076563
WAIT #3: nam='db file scattered read' ela= 15760 file#=606 block#=365346 blocks=16 obj#=7684800 tim=11753993095821
The elapsed time of each wait event is a few milliseconds, but I am not able to find out what Oracle is doing in between those wait events. Could a big undo_retention be the cause?
The CPU utilization is under 65% and the top process consuming CPU is the smon process.
Thanks,
April 13, 2012 - 1:16 pm UTC
that is not due to a large undo tablespace (and a large undo tablespace would be a solution for 'lack of space in undo tablespace', not a cause of it!)
You are apparently processing a large amount of data and you are seeing the consistent read requests being shipped around.
How big is this table you are updating?
How much of it are you updating?
follow up,
A reader, April 13, 2012 - 1:50 pm UTC
The table that is getting updated has 2 million rows, and around half of them are getting updated.
I was more concerned about the value of undo_retention rather than the size of the undo tablespace.
Thanks,
April 13, 2012 - 2:40 pm UTC
I would suggest - in all seriousness - to not update.
instead, I would perform a create table as select - incorporate the update logic in the select list.
DDL would be infinitely more efficient for this mass update.
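Sketching that idea with hypothetical names: fold the update logic into the select list of a CTAS instead of updating half the rows in place:

```sql
-- Instead of:  UPDATE t SET flag = 'B' WHERE flag = 'A';
CREATE TABLE t_new
  NOLOGGING   -- optional; coordinate with your backup strategy
AS
SELECT col1, col2,   -- list the real columns of t here
       CASE WHEN flag = 'A' THEN 'B' ELSE flag END AS flag
  FROM t;
-- then recreate indexes/constraints/grants, drop t, and rename t_new to t
```

Because this is DDL rather than row-by-row DML, it generates almost no undo and typically runs far faster for a mass change like this one.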
shrinking undotablespace
A reader, October 23, 2012 - 1:47 am UTC
hi Tom,
my undotbs is growing to 14g (although my undotbs actual size is 9g). I tried to resize the datafiles, but this did not work.
So, i am assuming to perform this task
- Create a new undo tablespace as :
SQL> create undo tablespace UNDOTBS2 datafile '<complete file path>' size <smaller size>;
- Change parameter UNDO_TABLESPACE
SQL> alter system set UNDO_TABLESPACE=UNDOTBS2;
- Drop UNDOTBS1
SQL> drop tablespace UNDOTBS1 including contents and datafiles;
but my question is: is it worth deleting UNDOTBS1, which has a lot of data in it? And if I delete that undo tablespace's data, will I not be able to recover it?
October 23, 2012 - 12:23 pm UTC
your undo tablespace is 14gb but actually is 9gb????? i don't understand.
you'll be able to recover fine, but 5gb in the grand scheme of things - in the year 2012 - is not that big is it?
and it'll probably just grow right back again.