Question and Answer

Tom Kyte

Thanks for the question, kathy.

Asked: June 29, 2000 - 7:43 pm UTC

Last updated: August 28, 2013 - 6:02 pm UTC

Version: 815

You Asked


I have two questions about the temporary tablespace.
1.) If I'm running out of space in the temporary tablespace, how can I know which user or transaction is using it?

2.) Sometimes it happens that, with obviously no transactions running, the temporary tablespace never gets released unless I shut down the database. It seems to me that some dead transactions never release the resource, and their status is "pseudo". Any idea what happened, and how should I handle this other than shutting down the database?

and we said...

Temporary tablespaces should appear "full" after a while in a normally running database. Extents are allocated once and then managed by the system. Rather than doing the rather expensive operation of "space management" (data dictionary updates), the system will allocate an extent in TEMP and then keep it and manage it itself. This is normal and to be expected, and is not an indication that you are out of temporary space.

See the dynamic performance views V$SORT_USAGE and V$SORT_SEGMENT for more information regarding the usage of space within these temporary segments. V$SORT_USAGE will tell you who's using what.


followup to the comment below

V$SORT_USAGE contains the session address. Perhaps an easier and more supported method than the one suggested by Jonathan would be:

create view my_v$sort_usage
as
select ( select username from v$session where saddr = session_addr ) uname,
       v.*
  from v$sort_usage v
/


The fact remains that v$sort_usage has always told you who is using the temporary segment.

In 9i, the USER column is replaced by the USERNAME column, so this will not be necessary.



Rating

  (210 ratings)

Is this answer out of date? If it is, please let us know via a Comment

Comments

V$SORT_USAGE

Marco Gralike, March 27, 2001 - 4:58 am UTC

Sort_usage doesn't give you a correct answer for the username. See http://www.jlcomp.demon.co.uk/sort_usage.html for a workaround.

ORA-1652

I Singh, September 23, 2002 - 3:47 pm UTC

Hi Tom, 
I am unable to understand why I am getting 
ORA-1652: unable to extend temp segment by 256 in tablespace TEMP.

SQL> select tablespace_name,next_extent,max_extents from dba_tablespaces where tablespace_name='TEMP';

TABLESPACE_NAME                NEXT_EXTENT MAX_EXTENTS
------------------------------ ----------- -----------
TEMP                               1048576
SQL> select tablespace_name,extent_size,total_extents,total_blocks,max_size,max_blocks from v$sort_segment;

TABLESPACE_NAME                 EXTENT_SIZE TOTAL_EXTENTS TOTAL_BLOCKS
------------------------------- ----------- ------------- ------------
  MAX_SIZE MAX_BLOCKS
---------- ----------
TEMP                                    256           295        76700
       295      76700

SQL> select blocks from dba_data_files where file_id=3;

    BLOCKS
----------
     76800

My questions are:
If max_size reached is 76700 blocks then going by extent size of 256 blocks there should have been 76700/256=299 total_extents. Why is it 295 ? 

Tom Kyte
September 24, 2002 - 7:11 am UTC

You should be using true temporary tablespaces (you have a permanent tablespace marked temporary -- you have "data files", not "temp files")

Anyway, dump dba_free_space for that tablespace and see what's "free".

2 temp tablespaces

Jan, October 14, 2002 - 4:20 am UTC

Our database (Oracle 8.1.7) has 2 temporary tablespaces - TEMP (DMT) and TEMP_TRUE (a true temporary LMT). All application users still have the default temporary tablespace TEMP. I found in v$sort_usage that if a user performs some expensive SELECT operation with sorting, the TEMP tablespace is used. But if he runs the application where temporary tables are used, then TEMP_TRUE is used for that data. Why? Why isn't the default tablespace used?

Thanks,

Tom Kyte
October 14, 2002 - 8:08 am UTC

The temporary tablespace would be taken from the CURRENT_SCHEMA -- so I'll have to assume you are accessing the temp tables (filling them) via a stored procedure.

It'll use the temp tablespace of the OWNER of the procedure in that case.
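A minimal sketch of how this plays out (the schema, table, and procedure names are hypothetical): with a definer-rights stored procedure, CURRENT_SCHEMA is the procedure owner while it runs, so temporary-table data lands in the owner's temporary tablespace rather than the caller's.

```sql
-- Hypothetical setup: APP_OWNER has temporary tablespace TEMP_TRUE,
-- while the calling user has temporary tablespace TEMP.
create or replace procedure app_owner.fill_gtt
as
begin
    -- This insert runs with definer rights, so the temp space for the
    -- global temporary table comes from APP_OWNER's temporary
    -- tablespace (TEMP_TRUE), not the caller's (TEMP).
    insert into app_owner.my_gtt
    select * from all_objects;
end;
/
```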

READER

A reader, December 06, 2002 - 11:25 pm UTC

Tom,

"
select ( select username from v$session where saddr = session_addr) uname,
v.* from v$sort_usage v
/
"

These constructs are very useful. I have a query that computes the I/O distribution from v$bh at a point in time:

select file#, count(block#), count (distinct file# || block#)
from v$bh group by file# ;

Is it possible to list the segments that are belonging to
the file along with the file# in the same query by joining
dba_segments

Thanks


Tom Kyte
December 07, 2002 - 10:22 am UTC

not dba_segments -- segments span files, hence there is no file in dba_segments. You would have to join to something like

( select distinct owner, segment_name, file_id
from dba_extents
)




Reader

A reader, December 07, 2002 - 11:12 pm UTC

I was actually going to equate file# = header_file.

I like to use your approach. is it possible to combine
the queries

select file#, count(block#), count (distinct file# || block#)
from v$bh
group by file# ;
and

( select distinct owner, segment_name, file_id
from dba_extents
)

into one SQL statement

Tom Kyte
December 08, 2002 - 9:45 am UTC

yes, just make them both inline views and join


select ..
from ( v$bh query ), ( dba_extents query )
joining by file
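Spelled out, that sketch might look like the following (untested; it simply wires the two queries above together as inline views joined on the file number):

```sql
-- I/O distribution per file, labeled with the segments stored in that file.
select e.owner, e.segment_name,
       b.file#, b.block_cnt, b.distinct_blocks
  from ( select file#,
                count(block#)          block_cnt,
                count(distinct block#) distinct_blocks
           from v$bh
          group by file# ) b,
       ( select distinct owner, segment_name, file_id
           from dba_extents ) e
 where b.file# = e.file_id
 order by e.owner, e.segment_name;
```

Because a segment can span files, a segment may appear once for each file it has extents in.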

Thanks

A reader, December 08, 2002 - 10:36 am UTC


which query blew it

george, December 09, 2002 - 11:50 am UTC

Tom,
We are using 8.1.7.4 on HP UX.
I have been getting this message about the temp segment... very often...
my_v$sort_usage will tell you who is using it currently...
Our system throws an error once the temp segment cannot be extended any more... but by that time the query that blew it is gone.
I think it is because of a bad query... maybe a cartesian join... or something...
How do we catch the query that blew it? Because once we fix it, the problem will go away... I assume...
We have a small database, about 3 gigs... Java front end...
about 2000 users...
I have given 1 gig for my temp tablespace (LMT)...
auto extended to 2 gigs...

Thanks.



Tom Kyte
December 09, 2002 - 12:53 pm UTC

Question:  why the heck doesn't your java front end have any logging or instrumentation?  I would hope that a program developed in house would have COPIOUS amounts of capabilities in this area to help track down things like this.  A generic error handler at least that logs unexpected errors.  



Well, in 817, we can use a SERVERERROR trigger to capture information when this happens. This one captures all of the SQL cursors that the current session hitting this problem has open:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( msg varchar2(4000) );

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  declare
  4      
  5      
  6  begin
  7      if ( is_servererror(1652) )
  8      then
  9          insert into t values ( 'ora_sysevent = ' || ora_sysevent );
 10          insert into t values ( 'ora_login_user = ' || ora_login_user );
 11          insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
 12
 13          insert into t select 'open cursor ' || rownum || ' ' || sql_text
 14                          from v$open_cursor where sid = (select sid from v$mystat where rownum=1);
 15      end if;
 16  end;
 17  /

Trigger created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> create temporary tablespace test_temp
  2  tempfile '/tmp/test_temp.dbf' size 512k reuse
  3  extent management local
  4  uniform size 64k
  5  /

Tablespace created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter user ops$tkyte temporary tablespace test_temp
  2  /

User altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from all_objects
  2  order by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12;
select * from all_objects
              *
ERROR at line 1:
ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP


ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

MSG
-----------------------------------------------------------------------------------------------------------------------------------
ora_sysevent = SERVERERROR
ora_login_user = OPS$TKYTE
ora_server_error = 1652
open cursor 1 INSERT INTO T SELECT 'open cursor ' || ROWNUM  || ' '  || SQ
open cursor 2 insert into sys.aud$( sessionid,entryid,statement,timestamp#
open cursor 3 declare     l_sql_text ora_name_list_t;     l_n        numbe
open cursor 4 INSERT INTO T VALUES ( 'ora_login_user = ' || ORA_LOGIN_USER
open cursor 5 INSERT INTO T VALUES ( 'ora_server_error = ' || ORA_SERVER_E
open cursor 6 select * from all_objects order by 1, 2, 3, 4, 5, 6, 7, 8, 9
open cursor 7 INSERT INTO T VALUES ( 'ora_sysevent = ' || ORA_SYSEVENT   )

10 rows selected.

ops$tkyte@ORA817DEV.US.ORACLE.COM>



<b>you'll be able to narrow it down from there (hopefully)..</b>



In 9i -- there is a new function that will pinpoint this exactly:

ops$tkyte@ORA920> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  declare
  4      l_sql_text ora_name_list_t;
  5      l_n        number;
  6  begin
  7      if ( is_servererror(1652) )
  8      then
  9          insert into t values ( 'ora_sysevent = ' || ora_sysevent );
 10          insert into t values ( 'ora_login_user = ' || ora_login_user );
 11          insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
 12<b>
 13                  l_n := ora_sql_txt( l_sql_text );
 14                  for i in 1 .. l_n
 15                  loop
 16                          insert into t values ( 'l_sql_text(' || i || ') = ' || l_sql_text(i) );
 17                  end loop;</b>
 18      end if;
 19  end;
 20  /

Trigger created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create temporary tablespace test_temp
  2  tempfile '/tmp/test_temp.dbf' size 512k reuse
  3  extent management local
  4  uniform size 64k
  5  /

Tablespace created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> alter user ops$tkyte temporary tablespace test_temp
  2  /

User altered.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from all_objects
  2  order by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12;
select * from all_objects
              *
ERROR at line 1:
ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP


ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from t;

MSG
-----------------------------------------------------------------------------------------------------------------------------------
ora_sysevent = SERVERERROR
ora_login_user = OPS$TKYTE
ora_server_error = 1652
l_sql_text(1) = select * from all_objects
order by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

l_sql_text(2) = , 11, 12

ops$tkyte@ORA920>
 

A reader, March 03, 2003 - 10:48 am UTC

Veru useful info, thanks a lot

SYSTEM managed LMT or UNIFORM sized LMT

A reader, April 16, 2003 - 12:29 pm UTC

Hi Tom,

1) We can see that in a permanent system managed LMT the extents are allocated like 64k, then 1M , then 8M and so on...

But do the system managed Temporary LMT behave the same way as a system managed permanent LMT would ? ie would the extents be allocated like 64k,1M,8M in a system managed Temporary LMT as it is in a system managed permanent LMT ? Can you show an example as i'm not able to find out how to see the temporary extents allocated.

2) Is system managed temporary LMT better or a Uniform sized temporary LMT ?

Thanks and Regards

Tom Kyte
April 16, 2003 - 6:23 pm UTC

1) it is purposefully not documented.

anyway, you want to use fixed (uniform) sized extents in temp -- period.

2) see #1

Why use uniform sized LMT for temporary, why not autoallocate?

A reader, April 17, 2003 - 12:47 am UTC

Hi Tom,

In your chapter 3 (pdf) from Expert One-on-One, you have mentioned that an autoallocate LMT should be used whenever the size of the objects is not known. Then why not use an autoallocate LMT for a temporary tablespace, as we never know what size of temporary extent will be allocated?

Ok, we can know the size if we are using sort_area_size (we make extents equal to or a multiple of sort_area_size), but when we use workarea_size_policy=auto and pga_aggregate_target, the memory (extents, if disk is used for sorting) allocated will be handled by Oracle and we won't know the size of the extents in that case. So is an autoallocate temporary LMT good or is it a bad idea?

Thanks and Regards

Tom Kyte
April 17, 2003 - 10:16 am UTC

but you do know -- sort_area_size. When using pga_aggregate_target in 9i -- 1m is the recommendation.


1m uniform extents -- as said above.

It is somewhat of a MOOT discussion -- create temporary tablespace ... uses one extent management policy:

extent management local UNIFORM

there is no dictionary managed
there is no autoallocate.

But isn't the value of sort_area_size ignored ??

adewri, April 24, 2003 - 5:58 am UTC

Hi TOM,

This gets more confusing?? When we are using WORKAREA_SIZE_POLICY=AUTO with PGA_AGGREGATE_TARGET, will the value of sort_area_size not get ignored?

I did a small test.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
First i change the workarea_size_policy to manual;

SYS@ACME.WORLD> alter system set workarea_size_policy=manual scope=both;

System altered.

SYS@ACME.WORLD> show parameter workarea

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy string MANUAL
SYS@ACME.WORLD>

In a new session.

SQL*Plus: Release 9.2.0.3.0 - Production on Thu Apr 24 14:51:21 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SCOTT@ACME.WORLD> create table obj as select * from all_objects;

Table created.

SCOTT@ACME.WORLD> select count(*) from obj;

COUNT(*)
----------
3386

SCOTT@ACME.WORLD> alter session set sort_area_size=1024000;

Session altered.

SCOTT@ACME.WORLD> set autot trace exp stat
SCOTT@ACME.WORLD> select * from obj order by object_id;

3386 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (ORDER BY)
2 1 TABLE ACCESS (FULL) OF 'OBJ'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
45 consistent gets
0 physical reads
0 redo size
153997 bytes sent via SQL*Net to client
2974 bytes received via SQL*Net from client
227 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3386 rows processed

SCOTT@ACME.WORLD> alter session set sort_area_size=10;

Session altered.

SCOTT@ACME.WORLD> select * from obj order by object_id;

3386 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (ORDER BY)
2 1 TABLE ACCESS (FULL) OF 'OBJ'




Statistics
----------------------------------------------------------
0 recursive calls
32 db block gets
45 consistent gets
201 physical reads
0 redo size
153997 bytes sent via SQL*Net to client
2974 bytes received via SQL*Net from client
227 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
3386 rows processed

SCOTT@ACME.WORLD>

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Now i change the workarea_size_policy to auto.

SYS@ACME.WORLD> alter system set workarea_size_policy=auto scope=both;

System altered.

SYS@ACME.WORLD> show parameter workarea

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy string AUTO
SYS@ACME.WORLD>

In a new session.

SQL*Plus: Release 9.2.0.3.0 - Production on Thu Apr 24 14:54:10 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SCOTT@ACME.WORLD> drop table obj;

Table dropped.

SCOTT@ACME.WORLD> create table obj as select * from all_objects;

Table created.

SCOTT@ACME.WORLD> select count(*) from obj;

COUNT(*)
----------
3386

SCOTT@ACME.WORLD> alter session set sort_area_size=1024000;

Session altered.

SCOTT@ACME.WORLD> set autot trace exp stat
SCOTT@ACME.WORLD> select * from obj order by object_id;

3386 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (ORDER BY)
2 1 TABLE ACCESS (FULL) OF 'OBJ'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
45 consistent gets
0 physical reads
0 redo size
153997 bytes sent via SQL*Net to client
2974 bytes received via SQL*Net from client
227 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3386 rows processed

SCOTT@ACME.WORLD> alter session set sort_area_size=10;

Session altered.

SCOTT@ACME.WORLD> select * from obj order by object_id;

3386 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (ORDER BY)
2 1 TABLE ACCESS (FULL) OF 'OBJ'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
45 consistent gets
0 physical reads
0 redo size
153997 bytes sent via SQL*Net to client
2974 bytes received via SQL*Net from client
227 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3386 rows processed

SCOTT@ACME.WORLD>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Here, when I changed workarea_size_policy to auto, even after changing the value of sort_area_size the sorting took place in memory. If I'm not wrong, sort_area_size was ignored completely.

Now back to my question about having temp tablespaces as autoallocate. As we saw, sort_area_size was ignored completely. That means that if my output from v$sql_workarea_histogram is like this

PGA_SIZE OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
--------- ------------------ ------------------ ----------------------
8kb <= PGA < 16kb 4634932 0 0
16kb <= PGA < 32kb 9469 0 0
32kb <= PGA < 64kb 920 0 0
64kb <= PGA < 128kb 771 0 0
128kb <= PGA < 256kb 315 0 0
256kb <= PGA < 512kb 132 0 0
512kb <= PGA < 1024kb 2717 0 0
1mb <= PGA < 2mb 2437 331 0
2mb <= PGA < 4mb 0 25 0
4mb <= PGA < 8mb 0 58 36

Which shows that only sorts between 4M and 8M were sent to disk.

So my point is: when only larger chunks of sorts are sent to disk, why not create the temporary tablespace with larger uniform sized extents, or better, keep it autoallocate?

Tom, either I'm right or I'm totally confused??? Please enlighten me.

Regards
Amar


Tom Kyte
April 24, 2003 - 8:18 am UTC

that is the entire goal of AUTO - to render *_size parameters obsolete, to make automatic what was once manual.


temp MUST be uniform (it is a rule -- it cannot be system allocated, it doesn't work that way)

And as I said -- use 1meg -- it works well.

All extents are of 1M for autoallocate (so there is no auto allocate)

adewri, April 24, 2003 - 8:48 am UTC

Hi Tom,

While testing the allocation of extents in tempfiles, I see that the extents allocated are 1M in size for an autoallocated temp tablespace. So that means that even if I do not give a uniform size in the create clause, it will by default become an LMT with a uniform size of 1M.

SYS@ACME.WORLD> create temporary tablespace temp tempfile 'E:\ORACLE\ORADATA\ACME\TEMP01.DBF' size 300M
2 extent management local;

Tablespace created.
SYS@ACME.WORLD> SELECT tablespace_name, file_id, block_id, BYTES / 1024 / 1024, owner
2 FROM v$temp_extent_map
3 /

TABLESPACE_NAME FILE_ID BLOCK_ID BYTES/1024/1024 OWNER
------------------------------ ---------- ---------- --------------- ----------
TEMP 1 9 1 1
TEMP 1 137 1 1
TEMP 1 265 1 1
TEMP 1 393 1 1
TEMP 1 521 1 1
TEMP 1 649 1 1
TEMP 1 777 1 1
TEMP 1 905 1 1
TEMP 1 1033 1 1
.
.
TEMP 1 36617 1 0
TEMP 1 36745 1 0
TEMP 1 36873 1 0
TEMP 1 37001 1 0
TEMP 1 37129 1 0
TEMP 1 37257 1 0
TEMP 1 37385 1 0
TEMP 1 37513 1 0
TEMP 1 37641 1 0
TEMP 1 37769 1 0
TEMP 1 37897 1 0
TEMP 1 38025 1 0
TEMP 1 38153 1 0

299 rows selected.

SYS@ACME.WORLD>

The tempfile size is 300M and each and every extent is 1M in size, and these extents are pre-allocated, i.e. as soon as I resize the tempfile they get allocated; they are not allocated on demand like permanent LMTs.

I tested this with a temporary uniform LMT of 5M with 300M tempfile;

SYS@ACME.WORLD> create temporary tablespace temp tempfile 'E:\ORACLE\ORADATA\ACME\TEMP01.DBF' size 300M
2 extent management local uniform size 5M;

SYS@ACME.WORLD> SELECT tablespace_name, file_id, block_id, BYTES / 1024 / 1024, owner
2 FROM v$temp_extent_map
3 /

TABLESPACE_NAME FILE_ID BLOCK_ID BYTES/1024/1024 OWNER
------------------------------ ---------- ---------- --------------- ----------
TEMP 1 9 5 0
TEMP 1 649 5 0
TEMP 1 1289 5 0
TEMP 1 1929 5 0
TEMP 1 2569 5 0
TEMP 1 3209 5 0
TEMP 1 3849 5 0
TEMP 1 4489 5 0
TEMP 1 5129 5 0
TEMP 1 5769 5 0
TEMP 1 6409 5 0
TEMP 1 7049 5 0
TEMP 1 7689 5 0
.
.
TEMP 1 32649 5 0
TEMP 1 33289 5 0
TEMP 1 33929 5 0
TEMP 1 34569 5 0
TEMP 1 35209 5 0
TEMP 1 35849 5 0
TEMP 1 36489 5 0
TEMP 1 37129 5 0

59 rows selected.

SYS@ACME.WORLD>


So this means that temporary LMTs are always uniform sized, as you mentioned before. So I will stick to your advice of 1M :)

Thanks and Regards

Tom Kyte
April 24, 2003 - 9:28 am UTC

that is NOT a SYSTEM allocated tablespace -- if you query dba_tablespaces -- you'll find the extent allocation type for that tablespace is in fact UNIFORM in both cases (your original premise that:

...
While testing for allocation of extents in tempfiles i see that extents
allocated are 1M in size for an autoallocated temp tablespace.
......

is not really correct -- the tablespace is NOT system allocated, it is in fact uniform)

Ok, I get the message

adewri, April 24, 2003 - 10:03 am UTC

So I have been doing the wrong tests :)
You are great as usual
Thanks and Regards


how to understand v$sort_usage

Jerry, May 20, 2003 - 10:33 pm UTC

Tom, could you please help me understanding the result of the following query. I was trying to find out which user is using how much temp tablespace.

SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
FROM v$session s, v$sort_usage u
WHERE s.saddr = u.session_addr;

Results:

USERNAME SID TABLESPACE CONTENTS EXTENTS BLOCKS
MGR 35 TEMP TEMPORARY 51 1020
MGR 35 TEMP TEMPORARY 7 140
MGR 35 TEMP TEMPORARY 14 280
MGR 35 TEMP TEMPORARY 8 160
MGR 35 TEMP TEMPORARY 51 1020
MGR 35 TEMP TEMPORARY 7 140
MGR 35 TEMP TEMPORARY 14 280
MGR 35 TEMP TEMPORARY 12 240
MGR 35 TEMP TEMPORARY 13 260
MGR 35 TEMP TEMPORARY 51 1020
MGR 35 TEMP TEMPORARY 7 140
MGR 35 TEMP TEMPORARY 14 280
MGR 35 TEMP TEMPORARY 5 100
MGR 35 TEMP TEMPORARY 51 1020
MGR 35 TEMP TEMPORARY 7 140
MGR 35 TEMP TEMPORARY 14 280
MGR 35 TEMP TEMPORARY 8 160
MGR 35 TEMP TEMPORARY 8 160
MGR 35 TEMP TEMPORARY 7 140

Does it mean that if I sum up the blocks, I'll get the total sort usage? Why so many entries for one session?

Thanks! Your web site is a great help!

Tom Kyte
May 21, 2003 - 7:47 am UTC

sum the blocks and you'll find the total temp space on disk used. Is that total sort? No, not really (it could be global temporary tables, could be hash) and it doesn't count sorts done in memory.

why so many entries? only YOU can answer that... MGR in sid=35 must have many cursors open, or global temp tables in play. Consider:

scott@ORA817DEV> alter session set sort_area_size = 32765;
Session altered.

scott@ORA817DEV> create global temporary table gtt
2 as
3 select * from all_objects where 1=0;
Table created.

scott@ORA817DEV> create or replace package demo_pkg
2 as
3 cursor c1
4 is
5 select *
6 from all_objects
7 order by 1,2,3,4,5,6,7,8,9;
8
9 cursor c2
10 is
11 select *
12 from all_objects
13 order by 1,2,3,4,5,6,7,8,9;
14 end;
15 /

Package created.

those cursors will definitely sort to disk for me...

scott@ORA817DEV> SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
2 FROM v$session s, v$sort_usage u
3 WHERE s.saddr = u.session_addr;

no rows selected

scott@ORA817DEV> declare
2 l_rec all_objects%rowtype;
3 begin
4 open demo_pkg.c1;
5 fetch demo_pkg.c1 into l_rec;
6 end;
7 /

PL/SQL procedure successfully completed.

scott@ORA817DEV>
scott@ORA817DEV> SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
2 FROM v$session s, v$sort_usage u
3 WHERE s.saddr = u.session_addr;

USERNAME SID TABLESPACE CONTENTS EXTENTS BLOCKS
-------- ---------- ---------- --------- ---------- ----------
SCOTT 15 TEMPORARY TEMPORARY 7 448

and apparently, that result set is in temp now and takes 448 blocks. It'll stay that way till I close the cursor -- and furthermore:

scott@ORA817DEV>
scott@ORA817DEV> declare
2 l_rec all_objects%rowtype;
3 begin
4 open demo_pkg.c2;
5 fetch demo_pkg.c2 into l_rec;
6 end;
7 /

PL/SQL procedure successfully completed.

scott@ORA817DEV>
scott@ORA817DEV> SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
2 FROM v$session s, v$sort_usage u
3 WHERE s.saddr = u.session_addr;

USERNAME SID TABLESPACE CONTENTS EXTENTS BLOCKS
-------- ---------- ---------- --------- ---------- ----------
SCOTT 15 TEMPORARY TEMPORARY 7 448
SCOTT 15 TEMPORARY TEMPORARY 7 448

each cursor I open that does a sort, will do that -- and further:

scott@ORA817DEV>
scott@ORA817DEV> insert into gtt
2 select * from all_objects;

26804 rows created.

scott@ORA817DEV>
scott@ORA817DEV> SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
2 FROM v$session s, v$sort_usage u
3 WHERE s.saddr = u.session_addr;

USERNAME SID TABLESPACE CONTENTS EXTENTS BLOCKS
-------- ---------- ---------- --------- ---------- ----------
SCOTT 15 TEMPORARY TEMPORARY 7 448
SCOTT 15 TEMPORARY TEMPORARY 7 448
SCOTT 15 TEMPORARY TEMPORARY 6 384

my global temp tables will do that...

scott@ORA817DEV>
scott@ORA817DEV> pause

scott@ORA817DEV>
scott@ORA817DEV>
scott@ORA817DEV> begin
2 close demo_pkg.c1;
3 close demo_pkg.c2;
4 end;
5 /

PL/SQL procedure successfully completed.

scott@ORA817DEV> SELECT s.username, s.sid, u.TABLESPACE, u.CONTENTS, u.extents, u.blocks
2 FROM v$session s, v$sort_usage u
3 WHERE s.saddr = u.session_addr;

USERNAME SID TABLESPACE CONTENTS EXTENTS BLOCKS
-------- ---------- ---------- --------- ---------- ----------
SCOTT 15 TEMPORARY TEMPORARY 6 384

but when I close them -- poof, they go away


What's the best way to measure temp tablespace usage

Jerry, May 21, 2003 - 11:30 am UTC

Thanks Tom! That's very clear and helpful.

One more question about how best to measure temporary tablespace usage. Since v$sort_usage comes and goes with the application queries, what's the best way to measure how much temp an application uses? We have a web-based application that runs a fixed set of queries. For rollback, I can have a single rbs online and measure its growth. But temp usage seems quite difficult to quantify.

Tom Kyte
May 21, 2003 - 2:18 pm UTC

use a temporary tablespace per application schema and see how big each one is?
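Another hedged option, if a rough high-water mark is enough: V$SORT_SEGMENT keeps peak-usage columns since instance startup, so you can read off maximum temp usage without catching the queries in the act (column names as documented for 8i/9i; multiply blocks by the database block size to get bytes).

```sql
-- Peak temporary space usage per sort segment since instance startup.
select tablespace_name,
       max_blocks,        -- high-water mark of the sort segment, in blocks
       max_sort_blocks    -- largest single sort seen, in blocks
  from v$sort_segment;
```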

Great!

Jerry, May 21, 2003 - 4:15 pm UTC

Thanks Tom! Your answer is always great and enlightening. You are the best!

Temp tablespace history

houman, May 21, 2003 - 4:37 pm UTC

I have a situation where the temporary tablespace sometimes grows very fast, and I need to know which query or temporary table exhausted it. v$sort_usage only provides information for currently running queries. The failed_to_extend_temp trigger described above only shows the last query that blew it up, but that query may not be the "guilty" one; it may just happen to be executing when no more space is available in the temp tablespace.
Is there a way to know how much space (in terms of extents) each query consumed from the temp tablespace, and when?

Tom Kyte
May 21, 2003 - 5:09 pm UTC

you would have to "poll" for that information -- or when it "blows up" capture all of that information joining v$sort_usage to v$session to v$sql
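A sketch of that capture query, joining the three views on their documented key columns (session address, then SQL address and hash value):

```sql
-- Who is using temp space right now, and which SQL statement they run.
select s.username, s.sid, s.serial#,
       u.tablespace, u.contents, u.extents, u.blocks,
       q.sql_text
  from v$session s, v$sort_usage u, v$sql q
 where s.saddr          = u.session_addr
   and s.sql_address    = q.address
   and s.sql_hash_value = q.hash_value;
```

Polled on an interval (or run from the SERVERERROR trigger shown above), this records who consumed temp and with what SQL.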

How large should temporary tablespace be?

Paul, July 01, 2003 - 12:14 pm UTC

Creating tables/MVs from a join on 4 tables (with 5 to 15 million rows, producing a table of 15 million) , I keep getting (eventually):

ORA-01652: unable to extend temp segment by 1250 in tablespace TEMP

The DBA reckons this tablespace is about 4Gig and doesn't want to make it any bigger. Is this reasonable?

Thanks



Tom Kyte
July 01, 2003 - 12:34 pm UTC

the DBA doesn't want you to be able to perform your job? I'm confused - are they offering you an alternative?

4gig is pretty small -- even on my little desktop machine, I run with between 1/2 and 1.5 gig depending. 4gig on what sounds like a DW, is very small.

Ask the DBA how much 36 gig of disk costs and explain to them that the time you've wasted costs a lot more than that.

Go to www.tpc.org and download the full disclosure for our 100 gig TPC-H (the small one).

we used 36x8 gig files for temp.


"reckons" -- don't they "know"?



MG, August 06, 2003 - 5:27 am UTC

Hi Tom,
I am confused about the following:

While creating an index on table 'T', I gathered the temporary segment information as follows:

create index idx1 on t(object_name);

select username, tablespace, contents, extents, blocks from v$sort_usage;

USERNAME TABLESPACE CONTENTS EXTENTS BLOCKS
---------- ------------ -------- ------- --------
MG TEMP1 TEMPORARY 3 384

select OWNER,SEGMENT_NAME,SEGMENT_TYPE,TABLESPACE_NAME from dba_segments where tablespace_name;

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
----- ------------ ------------ ---------------
MG 6.585 TEMPORARY USERS

the user 'MG's temporary tablespace =TEMP1.

Why Tom, these 2 view says different tablespaces for their temp segment.

Thank you.

Tom Kyte
August 06, 2003 - 8:18 am UTC



the index is placed into 'temporary' extents in the target tablespace during the index creation.

after the index is done 'creating', the last step is to convert the temporary extents into permanent.

the reason they are temporary in users is that they are destined to become the permanent ones at the end. they are temporary so that if the index create gets aborted (eg: someone does a shutdown abort) SMON will naturally clean them up upon restart without any additional steps on part of the DBA.

so, the index is placed into "temp" extents which are destined to become "permanent" at the end.

MG, August 07, 2003 - 8:41 am UTC

Thank you so much Tom. As usual, your answers are most valuable.

MG, August 07, 2003 - 9:00 am UTC

Hi Tom,

You mean

select OWNER,SEGMENT_NAME,SEGMENT_TYPE,TABLESPACE_NAME from dba_segments where tablespace_name;

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
----- ------------ ------------ ---------------
MG 6.585 TEMPORARY USERS

Will this segment '6.585' become (after the index finishes creating) the index segment in the USERS tablespace, as follows?

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
----- ------------ ------------ ---------------
MG IDX1 INDEX USERS

Thank you



Tom Kyte
August 09, 2003 - 4:46 pm UTC

yes.

temp tablespace

vj, September 08, 2003 - 7:09 am UTC

I am using 9iR2.

My temp tablespace looks like below. The temp tablespace has been used only by SYS and SYSTEM. My temp tablespace has now grown to over 2 GB. How can I reduce it? I can't use offline drop or resize. Any help highly appreciated.

BLOCK_SIZE 8192
INITIAL_EXTENT 1048576
NEXT_EXTENT 1048576
MIN_EXTENTS 1
MAX_EXTENTS 0
PCT_INCREASE 0
MIN_EXTLEN 1048576
STATUS ONLINE
CONTENTS TEMPORARY
LOGGING NOLOGGING
FORCE_LOGGING NO
EXTENT_MANAGEMENT LOCAL
ALLOCATION_TYPE UNIFORM
PLUGGED_IN NO


Rgds



Tom Kyte
September 08, 2003 - 11:24 am UTC

create new
make this new one the default temporary tablespace for the database
drop old

it has been used by EVERYONE, not just sys/system.
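Tom's three steps might look like the following in SQL (a minimal sketch; the tablespace name, file path/size, and the 9i-only "default temporary tablespace" syntax are assumptions to adapt -- in 8i you would alter each user's temporary tablespace instead):

```sql
-- 1) create the new, right-sized temporary tablespace (names/sizes illustrative)
create temporary tablespace temp2
  tempfile '/u01/oradata/db1/temp2_01.dbf' size 500m
  extent management local uniform size 1m;

-- 2) make it the database default (9i and up)
alter database default temporary tablespace temp2;

-- 3) drop the old one; this waits/fails while any session still uses it
drop tablespace temp including contents and datafiles;
```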



trigger for many server errors

mary, September 18, 2003 - 5:35 pm UTC

Tom, is there a way to code a trigger like your FAILED_TO_EXTEND_TEMP trigger above ("FAILED_TO_EXTEND_TEMP") so that it catches all server errors (or is this not a good idea)?

What I wanted to do was to use your trigger and mail package to send me an email every time an alert log error occurs (using 9i and 8i). Would I need to code in every error number, or is there a wildcard or something that would work? What do you recommend?




Tom Kyte
September 18, 2003 - 6:02 pm UTC

the trigger looks like this in my example:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  declare
  4  begin
  5      if ( is_servererror(1652) )
  6      then
  7          insert into t values ( 'ora_sysevent = ' || ora_sysevent );
  8          insert into t values ( 'ora_login_user = ' || ora_login_user );
  9          insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
 10
 11          insert into t select 'open cursor ' || rownum || ' ' || sql_text
 12                          from v$open_cursor
 13                         where sid = (select sid from v$mystat where rownum = 1);
 14      end if;
 15  end;
 16  /

it fired for all errors -- i had to ask it "is this failed to extend temp?"

So, it catches all server errors.

you'll probably get lots more emails than you want; you should pick the errors you want to know about.

alert log error list

A reader, October 16, 2003 - 9:27 am UTC

Tom, is there a list somewhere within the Oracle documentation of just the ORA- errors that are recorded in the alert log? I have requested this through Metalink, but the support technician told me there is no such list and to go back through my old alert logs to see what errors have occurred. I am afraid I will miss the ones that haven't occurred yet when implementing the server error trigger. I have the 3 books of error messages from the Oracle documentation, but the errors are sorted alphanumerically, so I have to read through hundreds of pages of ORA- errors to see if they're the ones I want to put in my trigger.

Do you know of a list of only the errors that are normally recorded in the alert log?

Tom Kyte
October 16, 2003 - 10:47 am UTC

there is no list that I'm aware of.

basically -- people keep "anti lists" -- messages they know they don't care about and report all others (and just update the anti list if an innocent message gets emailed to them)
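One way to sketch that "anti list" approach is a servererror trigger driven by a lookup table of errors to ignore (all object names here are hypothetical; ora_server_error and ora_server_error_depth are the documented event attribute functions):

```sql
-- hypothetical tables: errors we never want to hear about, and a capture table
create table ignored_errors ( errno number primary key );
create table alert_errors   ( logged date, uname varchar2(30), errno number );

create or replace trigger report_unlisted_errors
after servererror on database
begin
    -- walk the whole error stack, recording anything not on the anti list
    for i in 1 .. ora_server_error_depth
    loop
        insert into alert_errors
        select sysdate, ora_login_user, ora_server_error(i)
          from dual
         where not exists ( select null
                              from ignored_errors
                             where errno = ora_server_error(i) );
    end loop;
end;
/
```

Emailing instead of inserting would just swap the INSERT for a call to your mail package.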

shrink temporary tablespace

igor, November 11, 2003 - 3:00 pm UTC

Hi Tom,

you described how to shrink the temporary tablespace in general:

create new
make this new one the default temporary tablespace for the database
drop old

Can you provide more details please? In my understanding, before dropping the old tablespace, we have to make sure neither any session is currently using the old temporary tablespace, nor any session will try to use it. How to check it? Even if at the moment V$SORT_USAGE shows no usage, and there are no users having the old tablespace as default temporary tablespace, any existing session which was started when the old tablespace was in effect, may "remember" it and try to use it later. This will cause an application error.

In fact I had this situation once and I don't want to have it again. Here is what I did:

1. Changed temporary tablespace from TEMP1 to TEMP for all database users having TEMP1 as temporary tablespace.
alter user <username> temporary tablespace TEMP;

2. Waited until there is no usage of tablespace TEMP1 - the following query returned 0.
select count(*) from v$sort_usage where tablespace='TEMP1';

3. Deleted the tablespace TEMP1.
drop tablespace TEMP1 including contents;

But after that, one of the long-running tasks failed, so I realized that my method was wrong. (The failed task was a concurrent Oracle Applications job, but I think it doesn't matter).

The question is: How to drop safely a temporary tablespace without bouncing a database? Is that possible?

Thank you.

Tom Kyte
November 12, 2003 - 6:34 am UTC

(9ir2) so, in one session I:

ops$tkyte@ORA920LAP> create temporary tablespace temp2
  2  tempfile size 10m
  3  autoextend on next 1m
  4  maxsize 2048m
  5  /

Tablespace created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> prompt go into another session and insert into a GTT
go into another session and insert into a GTT
ops$tkyte@ORA920LAP> pause


in that other session I:

ops$tkyte@ORA920LAP> insert into gtt select * from all_objects;

30724 rows created.


coming back -- i:

ops$tkyte@ORA920LAP> drop tablespace temp
  2  including contents and datafiles
  3  /


and that <b>blocked -- it was waiting for the other session to finish using</b>


so, it'll not let you drop an active one.  I suppose your concurrent manager was a query that was parsed and ready to go -- but didn't have a need for temp yet -- but its "temp" was fixed.  For that -- you would have to simply "wait" for what you deem a sufficiently long enough time in order to avoid having long running queries that were parsed but not yet in need of temp to get to that point.  eg: create new temp in the morning, drop old temp when you get ready to leave -- or schedule this sort of "shrink" during off peak times 

shrink temporary tablespace

Igor, November 12, 2003 - 8:51 am UTC

Yes, it explains what happened to me.
Very helpful. Thank you Tom.


Temporary tablespace

Yateendra Chaturvedi, November 24, 2003 - 5:18 am UTC

Dear Tom,

I have gone through the entire page; it is very useful. But I am facing an odd problem. I have Oracle8 Enterprise Edition Release 8.0.4.1.0 - Production. If I create the temporary tablespace as a permanent one, it works fine. The moment I recreate it as a true temporary tablespace, it starts giving problems: most of the LOVs in Forms stop responding. Please help.



Tom Kyte
November 24, 2003 - 8:21 am UTC

"start giving problem"

hmm. not very clear.


as 804 is somewhat ancient history -- I guess I would recommend you leave it "as is". don't change anything. you cannot really develop or change anything on that system -- you have to "freeze it" as there is no support for it.

A reader, January 30, 2004 - 10:43 am UTC

Tom,

This is what i noticed on my test server.

sys@TEST> ed
Wrote file afiedt.buf

1 SELECT d.NAME, t.NAME AS tablespace_name
2 FROM V$DATAFILE d, V$TABLESPACE t
3* WHERE t.TS# = d.TS#(+)
sys@TEST> /

NAME TABLESPACE_NAME
-------------------------------------------------- ------------------------------
/u01/oracle/oradata/test/system01.dbf SYSTEM
/u01/oracle/oradata/test/undotbs01.dbf UNDOTBS1
TEMP
/u01/oracle/oradata/test/indx01.dbf INDX
/u01/oracle/oradata/test/users01.dbf USERS
/u01/oracle/oradata/test/oem_repository.dbf OEM_REPOSITORY

6 rows selected.


My temp tablespace does not have a datafile..

1) What do you think might be the reason for that?
2) Will it use the SYSTEM tablespace for sorting, since the temp datafile is not there?



Tom Kyte
January 30, 2004 - 7:25 pm UTC

you are right -- temporary tablespaces don't have datafiles, they have tempfiles.

look at v$tempfile.
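For completeness, the tempfiles behind a temporary tablespace can be listed by joining V$TEMPFILE to V$TABLESPACE on TS# (a small sketch using only those two views):

```sql
select t.name tablespace_name, f.name file_name, f.bytes
  from v$tempfile f, v$tablespace t
 where f.ts# = t.ts#;
```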



Temp LMT and DMT

Alvin, February 10, 2004 - 10:51 pm UTC

You said and i quote :
"Followup:
You should be using true temporary tablespaces (you have a permanent tablespace
marked temporary -- you have "data files", not "temp files")"

1.) When you say permanent tablespace marked "temporary" does it mean Temp DMT ??

Our temporary tablespace is a DMT one. Initially I enabled the autoextend clause and it has grown to 8 GB. I've now issued the autoextend off clause, and I get the following error in my db logs.

ORA-1652: unable to extend temp segment by 8 in tablespace TEMP
Tue Feb 10 14:20:32 2004
ORA-1652: unable to extend temp segment by 8 in tablespace TEMP


2.) After a sorting operation of a session has ended are a.the resources consumed released immediately ?
b. or is it released when the user logs off ?
c. released when the db is shutdown ?

3.) How can i alleviate my system of the ORA-1652 error in TEMP tablespace ?
a. Do i enable autoextend clause ?
b. Make another Temp LMT ? and make it the default temp tablespace for all of my users ?

4.) If i created a TEMP LMT. When's a good time to drop the TEMP DMT ?
a. After a shutdown to be sure that nobody has a "hold" on any temp segments ?
b. After creating the Temp LMT ?


Tom Kyte
February 11, 2004 - 9:08 am UTC

1) no, I mean a "create temporary tablespace...." using LMTs.

You are out of space, you need more than 8gig of temp.


2) after a session has retrieved the rows, closed the cursor - the temp space is released.

when the cursor closes, the space is released. this'll happen at db shutdown, at logoff OR when the application simply closes the cursor

3) add more space?

making it an LMT won't let it store more stuff - just store it more efficiently and make your backups faster (you'll be using TEMPFILES, not DATAFILES, tempfiles are not backed up)

4) anytime it lets you. just create a new temp, alter the users to point to it (or in 9i, alter the database to set the default temp tablespace and anyone with the default temp tablespace will be altered automagically)

then try to drop it, it won't let you if someone is using it.

1 user hogging all of the sort segment ?

Alvin, February 26, 2004 - 4:38 am UTC

I've been hitting the ora-01652 error and i queried the v$sort_usage to check who's using the sort segments and here are my findings.

1* select * from v$sort_usage
sql> /

USER SESSION_ SESSION_NUM SQLADDR SQLHASH TABLESPACE
------------------------------ -------- --------------- -------- --------------- -------------------
CONTENTS SEGTYPE SEGFILE# SEGBLK# EXTENTS BLOCKS SEGRFNO#
--------- --------- --------------- --------------- --------------- --------------- ---------------
SYS F3607A48 25714 E9F49A84 1705880752 TEMP
TEMPORARY SORT 4 4079362 499117 3992936 4

From 8 GB my temp tablespace (DMT) has grown to 34 GB. Does the user 'sys' really need that much sort space?

Some of my queries are hitting.

ORA-03232: unable to allocate an extent of 35 blocks from tablespace 3

And tablespace 3 points to the temporary tablespace.

Tom Kyte
February 26, 2004 - 10:08 am UTC

are you using sys for your own queries?

what is sys doing - what query(s) is it running.

Apparently the user 'sys' does need this space, now you need to figure out why. (no, by default this does not happen).

So, find out what query(s) sys is running and then we can look at the "why" part.
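To find out what those sessions are running, V$SORT_USAGE can be joined back to V$SESSION and V$SQLTEXT (a sketch; SESSION_ADDR maps to V$SESSION.SADDR as noted at the top of this page):

```sql
select s.username, s.sid, s.serial#, u.tablespace, u.extents, u.blocks, q.sql_text
  from v$sort_usage u, v$session s, v$sqltext q
 where u.session_addr = s.saddr
   and s.sql_address  = q.address
 order by s.sid, q.piece;
```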

additional info on the above posts.

Alvin, February 26, 2004 - 4:57 am UTC

sql> select * from v$sort_segment;

TABLESPACE_NAME SEGMENT_FILE SEGMENT_BLOCK EXTENT_SIZE CURRENT_USERS TOTAL_EXTENTS
------------------------------- --------------- --------------- --------------- --------------- ----
TOTAL_BLOCKS USED_EXTENTS USED_BLOCKS FREE_EXTENTS FREE_BLOCKS ADDED_EXTENTS
--------------- --------------- --------------- --------------- --------------- --------------- ----
FREED_EXTENTS FREE_REQUESTS MAX_SIZE MAX_BLOCKS MAX_USED_SIZE MAX_USED_BLOCKS MA
--------------- --------------- --------------- --------------- --------------- --------------- ----
MAX_SORT_BLOCKS RELATIVE_FNO
--------------- ---------------
TEMP 4 153946 8 1 524287
4194296 499117 3992936 25170 201360 524287 3750490
0 0 524287 4194296 524287 4194296 499117
3992936 4



ORA-3232 or ORA-1652?

Dave Shui, February 26, 2004 - 8:34 pm UTC

Dear Tom,

Correct me if I am wrong.

I noticed that Alvin encountered different error from ORA-01652 when his temp tablespace grew to 34g:

ORA-03232: unable to allocate an extent of 35 blocks from tablespace 3.
(3 -> temporary tablespace)

This error indicates that the size of segment requested was larger than the "NEXT" extent of the temporary tablespace.

In this case it won't help even though there is more space.

He can either increase the value of NEXT for the temporary tablespace or decrease the value of HASH_MULTIBLOCK_IO_COUNT.







Tom Kyte
February 27, 2004 - 7:23 am UTC

ahh, very good eye. did not even think about that (not an issue in the current software, no more ora-3232 errors)

> oerr ora 03232
03232, 00000, "unable to allocate an extent of %s blocks from tablespace %s"
// *Cause: An attempt was made to specify a HASH_MULTIBLOCK_IO_COUNT value
// that is greater than the tablespace's NEXT value
// *Action: Increase the value of NEXT for the tablespace using
// ALTER TABLESPACE DEFAULT STORAGE or decrease the value of
// HASH_MULTIBLOCK_IO_COUNT.


How to size temp tablespace

Arun Gupta, February 27, 2004 - 3:10 pm UTC

Tom,
How should I determine the size of the temp tablespace? Client feels that 7GB temp tablespace in a DW is too big. The cubes still keep running out of temp segments. The largest table in the query is only 500MB, others are only few hundred rows each. How to justify a 7GB temp tablespace?
Thanks


Tom Kyte
February 27, 2004 - 3:13 pm UTC

client is wrong then if you keep running out.

disk is cheap, even expensive disk is cheap. cheaper than "waiting an hour for the query to fail" over and over.

what is your db version? dbms_xplan in 9ir2 can be used to show you the estimated temp space needed by a query.
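For example, in 9iR2 the TempSpc column of the DBMS_XPLAN output shows the estimated temp usage of each sort/hash step (the table name here is illustrative):

```sql
explain plan for
  select owner, object_type, count(*)
    from big_table
   group by owner, object_type;

-- steps that spill to disk show a value in the TempSpc column
select * from table(dbms_xplan.display);
```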

getting .. ORA-1652

Reader, April 14, 2004 - 2:20 am UTC

hello,
Sir, I have two users with same number of tables
and data within the same database.

My query is working fine for one user and giving error
for the other user.

Query is using the same path in both users .. I have checked it using tkprof.

I am using .. PGA_AGGREGATE_TARGET.

What could be the reason .. ?




Tom Kyte
April 14, 2004 - 7:47 am UTC

my car won't start this morning.

my wife's car, which is the same brand/model/year of car, did start.

why?


I've given you the same amount of information you gave me.... and we are off....


seriously -- wanna give a hint as to what the error might be (i was out of gas by the way, should have told you "oh yeah, didn't fill the tank for weeks")

Temp Tablespace resizing problem

Rahul, May 18, 2004 - 1:51 am UTC

Hi Tom

I am working on 9iR2 and having a temp tablespace sizing problem. Every 2 days my temp tablespace grows to around 5 GB. This is a true temporary, locally managed tablespace.

when I check my data dictionary..

system@VELOS.AITHENT.COM> select bytes/1024/1024 from v$tempfile;

BYTES/1024/1024
---------------
4752

system@VELOS.AITHENT.COM> select extent_size,current_users,total_extents,used_extents,free_extents
2 from v$sort_segment where tablespace_name='TEMP';

EXTENT_SIZE CURRENT_USERS TOTAL_EXTENTS USED_EXTENTS FREE_EXTENTS
----------- ------------- ------------- ------------ ------------
128 0 4751 0 4751

According to these statistics we should easily be able to shrink the tablespace, because all extents are free and no users are currently performing sorts.
But when I tried to shrink:

system@VELOS.AITHENT.COM> alter database tempfile 'C:\ORACLE\ORADATA\VELOS\TEMP01.DBF' resize 500m
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Why did I get this error? What is wrong?
Please suggest something, or explain if there is an internal reason this cannot be done.

Thanks.

Tom Kyte
May 18, 2004 - 11:31 am UTC

the sizing problem is that apparently you need 5 gig but keep trying to get away with 500m.


so, why do you want to keep on dynamically allocating and reallocating that which is needed by your system??? You are not getting anything back here, we just grab it again right away.

in order to resize temp, just create a new small one and alter database to set the new default temporary tablespace to that and drop the old. but again, the problem is you NEED 5gig, not 500m!

A reader, May 18, 2004 - 1:12 pm UTC

Tom your reply on Apr 14 for Reader from India was hilarious. LOL

Resizing of tablespace

Rahul, May 20, 2004 - 12:22 am UTC

Hi Tom

OK, I agree with you:
"in order to resize temp, just create a new small one and alter database to set
the new default temporary tablespace to that and drop the old."
But what if I want to resize a permanent tablespace? In Oracle 8i, I have used your script maxshrink.sql and it shows a large amount of space that can be shrunk from the datafile. But when I try to resize, it gives the same error.
What is the solution for that?

Thanks.


Tom Kyte
May 20, 2004 - 10:58 am UTC

what "permanent tablespace"

not sure what you mean here.

exp and temp

A reader, May 20, 2004 - 4:22 pm UTC

Tom, do you see any situation where exp can cause an out-of-space situation (ora 1650, 1631, 1536) ? Is it possible that it uses TEMP, rollback or some kind of segment during its work ?

Tom Kyte
May 20, 2004 - 8:33 pm UTC

is this hypothetical or are you getting one?

v$sort_segment.freed_extents.

Sai, June 02, 2004 - 5:23 pm UTC

Tom,

Could you please explain the meaning of columns, FREED_EXTENTS and FREE_REQUESTS, in v$sort_segment. I always see them zero.

Documentation says extents in a sort segment never get freed, except right after instance startup. Then why would Oracle free up any extents in any sort segment? Does it mean that those 2 columns always show zero?

Thanks.

Tom Kyte
June 02, 2004 - 7:21 pm UTC

1* select free_extents, free_blocks from v$sort_segment
sys@ASKUS> /

FREE_EXTENTS FREE_BLOCKS
------------ -----------
298 38144



no, it won't always be zero on a real system.

it is the number of extents ALLOCATED but currently NOT USED by any sort operation.

Re: v$sort_segment.freed_extents.

Sai, June 02, 2004 - 11:45 pm UTC

Hi Tom,

Sorry for the confusion, the columns I am looking for are FREED_EXTENTS and FREE_REQUESTS, not FREE_EXTENTS/FREE_BLOCKS.

I would really appreciate your help.

Thanks.

Tom Kyte
June 03, 2004 - 7:55 am UTC

sorry -- that was my oversight.


In a clustered environment these come into play. Let us say that users SCOTT and JOE have their default temporary tablespace both set to TEMP. Now each connects to a different instance of the same OPS database. SCOTT sorts a very large table which creates a very large sort segment for instance A. He completes his sort. Now JOE wants to do a sort on instance B. Instance B will try to create its own sort segment in the same tablespace but quickly runs out of room. It can ask instance A to release some currently unused space.

If you see high freed/added counts, it would be an indication that you have undersized TEMP in that environment.
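Those counters can be checked with a simple query (in a single-instance database FREED_EXTENTS and FREE_REQUESTS will typically remain zero, per the above):

```sql
select tablespace_name, added_extents, freed_extents, free_requests
  from v$sort_segment;
```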

A reader, June 03, 2004 - 5:44 pm UTC


what am i doing wrong?

Antonio Sarcina, June 09, 2004 - 4:48 am UTC

Hi Tom,
I just tried to find out which sort is giving us
ORA-1652: unable to extend temp segment by 32 in tablespace TEMP
every night on our dwh, and I used the trigger you suggested:

sqlplus "/as sysdba"

create table user1.t ( msg varchar2(4000) );

create or replace trigger failed_to_extend_temp
after servererror on database
declare
    l_sql_text ora_name_list_t;
    l_n        number;
begin
    if ( is_servererror(1652) )
    then
        insert into user1.t values ( 'ora_sysevent = ' || ora_sysevent );
        insert into user1.t values ( 'ora_login_user = ' || ora_login_user );
        insert into user1.t values ( 'ora_server_error = ' || ora_server_error(1) );
        l_n := ora_sql_txt( l_sql_text );
        for i in 1 .. l_n
        loop
            insert into user1.t values ( 'l_sql_text(' || i || ') = ' || l_sql_text(i) );
        end loop;
    end if;
end;
/

But nothing... we had the same problems and I didn't find anything in user1.t.
What could have happened?
And about triggering on every ORA- error: is there a way to do that?
Thanks in advance


Tom Kyte
June 09, 2004 - 8:47 am UTC

never create stuff as "sys" or "sysdba", that's a special magical account not to be used by you or me.

review your alert log, see what it says and get that trigger out of sys.

just great!

Sam, June 12, 2004 - 6:37 pm UTC


Temp Segment problem

Yogesh B, June 25, 2004 - 11:20 am UTC


This is what I am getting when I query dba_free_space, here free space is 80 MB appx.

TABLESPACE_NAME, FILE_ID, BLOCK_ID, BYTES, BLOCKS, RELATIVE_FNO
TEMP,157,101122,10477568,1279,157
TEMP,10,101122,10477568,1279,10
TEMP,187,101122,10477568,1279,187
TEMP,202,2,8192,1,202
TEMP,9,113922,10477568,1279,9
TEMP,277,62722,10477568,1279,277
TEMP,8,101122,10477568,1279,8
TEMP,203,113922,10477568,1279,203
TEMP,276,62722,10477568,1279,276

But when I query v$sort_usage it is just showing me the details of my current session. Total Temp TBS size is 6000 MB ( which I feel is sufficient) and still when some users are trying to run some routine jobs they are getting

ORA-12801: error signaled in parallel query server P003
ORA-01652: unable to extend temp segment by

There are 4-5 users which are using some basic queries i.e. no joins or sorts.

What could be the reason ?

Oracle version 8.0.4

Tom Kyte
June 25, 2004 - 3:54 pm UTC



what statements are these user executing? create table as selects, create indexes, or what?

Parallel hint

Yogesh B, June 28, 2004 - 4:59 am UTC

Sorry, I forgot to mention: this query was written 2-3 years back and since then there was no problem, so I did not bother to check it first.

But when I looked at the query, it uses the /*+ PARALLEL */ hint, which is causing the problem. I removed the hint and re-executed the query. The error about parallel servers is gone, but the second error still persists. Now it even gives the name of the TBS, i.e. TEMP.

ORA-01652: unable to extend temp segment by 1020 bytes in TBS TEMP

My init.ora parameters related to parallel servers are

parallel_min_servers 4
parallel_max_servers 16

The query uses "SELECT /*+ parallel(TAH,4,4) */". TAH has 19,470,321 rows. The query has 4-5 outer joins.

What could have happened in last couple of days, which is creating this problem?


Tom Kyte
June 28, 2004 - 8:27 am UTC

you have insufficient temporary space allocated for the number of concurrent sorts going on.


allocate more temp space or kick all of the other users off the system -- what has most likely changed over the years is not this query, but the usage of your overall "system"

System Usage

Yogesh B, June 28, 2004 - 9:21 am UTC

Well, I don't agree with you fully. This job is executed at night when there are hardly any users. Yes, in terms of data volume it has grown.

I was trying to re-write the query, is there any way to force the optimizer to use a particular partition of table?

Scenario is, this table(s) has data from 99-till date. Data between 99-2001 is used for some trend analysis. Data after that is really being used in day-to-day life.

So if I can use the specific partition in join, the sort area usage will reduce ... I'm not too sure if this is possible, by using some optimizer hint?

my plan always give me

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=26 Card=1 Bytes=49)
1    0   PARTITION (CONCATENATED)
2    1     TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'TAHIST' (Cost=26 Card=1 Bytes=49)
3    2       INDEX (RANGE SCAN) OF 'TAH_IDX' (NON-UNIQUE) (Cost=25 Card=1)

I guess if I can get rid of "PARTITION (CONCATENATED)" I'll get over this problem. Correct me if I'm wrong.

I know that in 8i I used to get the specific partition name in the explain plan output, but I can't see that in 8.0.4. Is it a version problem or something to do with the query?



Tom Kyte
June 28, 2004 - 10:28 am UTC

"in terms of volume it has grown"

need we say more? the fact that for 2/3 years the amount of temp you have allocated was sufficient doesn't mean anything -- the volume of data is *larger*.

If you did not need data from multiple partitions -- it would not do that step. Since the amount of data will not change, going after a specific partition will not affect the amount of temp space required for your query.

Autotrace never displayed partition information -- utlxpls.sql should if you use explain plan.

I don't see any steps in your plan that would require temp actually, are you sure that is the right one.

Index range scan (no temp) => table access by rowid (no temp) => partition concatenated (no temp)... I don't see any aggregates/sorts/hashes/etc that would use temp.



Actual Plan

Yogesh B, June 28, 2004 - 11:09 am UTC

--------------------------------------------------------------------------------
| Operation                 | Name | Rows | Bytes| Cost    | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |      |    1 |  238 |2176789  |       |       |
|  NESTED LOOPS OUTER       |      |    1 |  238 |2176789  |       |       |
|   NESTED LOOPS OUTER      |      |    1 |  206 |2176787  |       |       |
|    NESTED LOOPS           |      |    1 |  175 |2176785  |       |       |
|     MERGE JOIN            |      |    1 |  166 |2176784  |       |       |
|      MERGE JOIN           |      | 706K |  84M | 157065  |       |       |
|       SORT JOIN           |      | 683K |  31M |  22862  |       |       |
|        PARTITION CONCATENA|      |      |      |         |     1 |    12 |
|         TABLE ACCESS FULL |TAM   | 683K |  31M |  17245  |     1 |    12 |
|       SORT JOIN           |      |   9M | 677M | 134203  |       |       |
|        TABLE ACCESS FULL  |TEL   |   9M | 677M |  21438  |       |       |
|      SORT JOIN            |      | 175M |   6G |2019719  |       |       |
|       PARTITION CONCATENAT|      |      |      |         |     1 |    12 |
|        TABLE ACCESS FULL  |ACT   | 175M |   6G | 101576  |     1 |    12 |
|     TABLE ACCESS BY INDEX |OUT   |   1K |  17K |      1  |       |       |
|      INDEX UNIQUE SCAN    |OPR   |   1K |      |         |       |       |
|    TABLE ACCESS BY INDEX R|ITN   |   1M |  45M |      2  |       |       |
|     INDEX UNIQUE SCAN     |ITP   |   1M |      |      1  |       |       |
|   TABLE ACCESS BY INDEX RO|LCA   |  13M | 414M |      2  |       |       |
|    INDEX UNIQUE SCAN      |LIM   |  13M |      |      1  |       |       |
--------------------------------------------------------------------------------


Tom Kyte
June 28, 2004 - 11:15 am UTC

geez, that's a tad different, no?

just a bit.

the partition concatenation has nothing to do with it, the sorts and merge joins do.

Your volume of data increased.
Therefore your temp is undersize.

It is pretty clear.

Data / Temp Segment

Yogesh B, June 29, 2004 - 6:39 am UTC

I'm planning to add some more gigs of space to the TEMP tablespace. But I have a question: I don't have enough space on the logical volume where the temp TBS is located right now. Will it be a good idea to put part of the temp space on a volume used for data? Oracle doesn't recommend this.

Secondly, I just need to check whether it is some index problem. Is there any way to check if an index is corrupt?

Another solution I'm thinking of is data archiving: the data which is not referred to frequently in queries, I'll move to history tables.

Your comments please.



Tom Kyte
June 29, 2004 - 8:07 am UTC

i don't understand the first paragraph.

index corruption? how would that come into play here? you are running out of temp space during a large operation.


the last paragraph makes perfect sense -- partitioning would be beneficial to aid in the aging of data.

logical volumes

yogesh B, June 29, 2004 - 8:56 am UTC

In AIX box following are the logical volumes for TEMP

/dev/ora4lv 4096000 69168 99% 24 1% /oracle/db1/temp

/dev/ora5lv 2457600 125364 95% 20 1% /oracle/db1/temp2

But as you can see, the space available on these volumes is very small, so I need to use some other logical volume which is currently being used for other permanent objects (TBS).

So will it be a good idea to create the temporary segment on those logical volumes?

Well, I just wanted to confirm the indexes are working fine. Maybe that is not a cause of this problem.



Tom Kyte
June 29, 2004 - 3:51 pm UTC


if you have the IO capacity (that is, adding some files from this other volume to your temp space won't impact other stuff already using those volumes in a disastrous fashion), go for it.

Only you know if those other LVs can handle the additional read/write load without causing something else to go awry.

How to determine the initial and next extent size of temp tablespace

A reader, July 19, 2004 - 2:21 pm UTC

Tom,
How are You doing? I need some clarifications on the temp tablespace.

1. How the following parameters are related to the temp tablespace.
- sort_area_size
- db_file_multiblock_read_count
- hash_multiblock_io_count
- hash_area_size (?)

2. How do I calculate the db_file_multiblock_read_count on Sun Platforms. (SPARC/solaris)

Thanks in advance
San





Tom Kyte
July 19, 2004 - 2:49 pm UTC

1) the extents of temp should be nice multiples of your sort/hash area size.

if you had a sort area size of 1m and hash area of 2m, you might want 1 or 2 meg extent sizes on temp as that is what Oracle will "swap" with.


the multiblock stuff -- doesn't come into play really here.


2) let it default, unless and until you've identified a need to even look at it.
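To sanity-check that relationship on your own system, the current settings can be compared side by side (a sketch using standard dictionary views; note that with PGA_AGGREGATE_TARGET the *_area_size parameters only matter for shared server or a manual workarea policy):

```sql
select name, value
  from v$parameter
 where name in ('sort_area_size', 'hash_area_size');

select tablespace_name, initial_extent, next_extent, allocation_type
  from dba_tablespaces
 where contents = 'TEMPORARY';
```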

A reader, July 19, 2004 - 2:57 pm UTC

Thanks Tom. I have started getting the ORA-3232 error. My understanding was to have the sort area as close to the extent size as possible. My sort area size was 4096K and my extent size was 512K.

When I searched Google and Metalink, I saw the other parameters mentioned by many people, but no one had an exact formula for how they relate to the ORA-3232 error.

Thanks a bunch. My problem disappeared after I raised the extent size.



Tom Kyte
July 19, 2004 - 4:36 pm UTC

> oerr ora 3232
03232, 00000, "unable to allocate an extent of %s blocks from tablespace %s"
// *Cause: An attempt was made to specify a HASH_MULTIBLOCK_IO_COUNT value
// that is greater than the tablespace's NEXT value
// *Action: Increase the value of NEXT for the tablespace using
// ALTER TABLESPACE DEFAULT STORAGE or decrease the value of
// HASH_MULTIBLOCK_IO_COUNT.



next time, if you post your issue, i can answer it "faster".

Same Problem of Extent Allocation

Mary W, July 21, 2004 - 2:25 pm UTC

My query returns the error ORA-03232: unable to allocate an extent of 128 blocks in tablespace 2.

I assume this is my TEMP tablespace.

Its initial extent size is 64 KB; its next extent is 512 KB.

What should I change in order for my query to run?

Tom Kyte
July 21, 2004 - 6:44 pm UTC

the size of your temporary tablespace would be an obvious choice? you ran out of temp?

the extent sizes don't tell us how big temp is or can be.


Re-read your comments and solved my problem.

mary W, July 21, 2004 - 2:52 pm UTC

Thanks!

I re-read all the comments and, based on those, increased my initial and next extents to 1 MB. Now everything is working again!

Thank you.

ORA-01652 Unable to extend temp segment by 128 in tablespace TEMP

Prafulla Dhonge, July 26, 2004 - 4:50 am UTC

Hi Tom. When I run my MS Crystal Reports utility to generate reports which involve sorting, I get the above-mentioned error: ORA-01652 unable to extend temp segment by 128 in tablespace TEMP.
It is not allowing me to run any report, nor any SQL query which involves sorting.
I checked sort_area_size as well; that is sufficient.

How do I overcome this problem?


Tom Kyte
July 26, 2004 - 7:26 am UTC

add more space to temp? seems like an obvious plan of action perhaps.

ORA-01652 Unable to extend temp segment by 128 in tablespace TEMP

Prafulla Dhonge, July 26, 2004 - 8:53 am UTC

Hi Tom,
I tried what you suggested: I added one more file of 500 MB to the temp tablespace, so the total is now 2 GB.

But my problem is still not solved.
Please help me out; do you need any more details from me?



Tom Kyte
July 26, 2004 - 11:51 am UTC

well, if you have 9i -- dbms_xplan can be used to see about how much temp will be used by your statement.

otherwise -- look at your query, consider how much temp it might actually need - if you are hash joining 5 gig to 10gig and then aggregating -- well, it might take a bit right.

2gig is pretty small -- but then again, we cannot see your query, we do not know your volumes of data (and before you post all of that - please, i urge you to just "look at what you got there" and see if you cannot reasonably estimate it yourself)

ORA-01652 Unable to extend temp segment by 128 in tablespace TEMP

Prafulla Dhonge, July 27, 2004 - 7:43 am UTC

Hi tom
I altered one of the files of the temp tablespace, set it to autoextend on, and then generated the report. This time there was no error, but the file kept growing and growing. It increased up to 14 GB, yet the report output never came.

Then I tried exporting the schema and importing it on my test server. I connected my Crystal Reports utility to that database, and this time it works absolutely fine,
without any errors, and returns the result within seconds.
So there is no problem with my SQL query.
The same query works well on the test DB, while on the other DB the temp tablespace is giving problems.

So is there a problem with my live database???

Tom Kyte
July 27, 2004 - 7:49 am UTC

see above, we cannot see your query, we cannot see the plans, we cannot see the volumes of data.

how big is the data there, and did you export/import all of the DATA (not just the schema definition) from prod to test?

Temporary tablespace error

Prafulla Dhonge, July 28, 2004 - 4:19 am UTC

hi tom this is in conn with the error
the query is as follows
SELECT DISTINCT a.addrprovincenumber province, a.addrbigcity,c.datesent "CreateDate ", c.customernr, p1.psmarketsegment,c.decscnrsent,LTRIM ( rfirstname|| ' '|| a.addrsurname) custname,LTRIM (REPLACE (TO_CHAR (a.addrhousenrnumeric), '0', ' ')|| ' '|| LTRIM (a.addrhousealphanr || ' ' || a.addrstreet)) custaddress,
w.psnrcount, w.scno, w.wpsnr, a1.addrcustnr old_customernr, LTRIM ( a1.addrfirstname || ' ' || a1.addrsurname) old_cust_name,
LTRIM ( lco.addrfirstname|| ' ' || lco.addrsurname) lconame,
w.oldactivationdate oldactivationdate, p1.psaccnr accnr,
m.msuserkey lcokey, l.psstatusdescr status
FROM IBSADMIN.csisent c,
IBSADMIN.address a,
IBSADMIN.address a1,
IBSADMIN.suhistor s,
IBSADMIN.prodsubs p1,
IBSADMIN.prodsubs p,
IBSADMIN.address lco,
IBSADMIN.msegment m,
IBSADMIN.prstatus l,
(SELECT COUNT (DISTINCT y.shpsnr) psnrcount,
MIN (shdate) oldactivationdate,
MIN (DISTINCT y.shpsnr) wpsnr,
x.decodersmartcardnr scno
FROM IBSADMIN.dechistn x, IBSADMIN.suhistor y
WHERE x.shnr = y.shshnr
AND x.decscmodelnr = 1
AND y.shdate <= TODATE
AND y.shpsnr > 0
GROUP BY x.decodersmartcardnr) w
WHERE SUBSTR (sentunpacked, 49, 1) = 'C'
AND c.shnrsent = s.shshnr
AND s.shpsnr = p1.psnr
AND S.SHPRODUCTNR=1
AND LENGTH (sentunpacked) = 100
AND SUBSTR (receivedunpacked, 47, 1) = '1'
AND SUBSTR (receivedunpacked, 6, 4) = '00A5'
AND datesent >= FROMDATE
AND datesent <= TODATE
AND c.customernr = a.addrcustnr
AND a.addreventnr = 100
-- AND A.ADDRPROVINCENUMBER =CITY
AND a1.addreventnr = 100
AND a1.addrcustnr = p.pssubscribernr
AND p.psnr = w.wpsnr
AND w.scno(+) = RTRIM (c.decscnrsent)
AND p1.psmarketsegment = lco.addrcustnr
--AND A.ADDRPROVINCENUMBER=CITY
--AND A1.ADDRPROVINCENUMBER=CITY
AND a1.addrprovincenumber = a.addrprovincenumber
AND lco.addreventnr = 100
AND m.msmsegnrcustnr = lco.addrcustnr
AND s.shnewpsstatus = l.psstatus
UNION
SELECT aa.addrprovincenumber, aa.addrbigcity, sh.shdate, 0, 0, dd.decscnr, '',
'', 0, '', 0, 0, '', '', 0, 0, '', de.decscstatusdescr
FROM IBSADMIN.decoders dd,
IBSADMIN.suhistor sh,
IBSADMIN.dechistn dh,
IBSADMIN.address aa,
IBSADMIN.destatus de
WHERE dh.decscstatus IN ('B', 'C')
AND dd.decscnr = dh.decodersmartcardnr
AND dh.shnr = sh.shshnr
AND sh.shdate >= FROMDATE
AND sh.shdate <= TODATE
AND dd.decscmodelnr = 1
AND sh.sheventnr = 156
-- AND sh.shreason IN (116, 75)
AND dd.decscmodelnr = 1
AND dd.decsccurrentdepotn = aa.addrcustnr
AND dh.DECSCSTATUS = de.decscstatus
-- AND aa.addrprovincenumber = CITY
AND aa.addreventnr = 100;

Can you suggest a proper query, as this query is giving me the problem of extending the temp segment?

Tom Kyte
July 28, 2004 - 8:21 am UTC

do you need union, do you know the difference (big difference) between UNION and UNION ALL?

I can say the outer join to W is not relevant -- only makes it hard for the optimizer to optimize:

AND p.psnr = w.wpsnr
AND w.scno(+) = RTRIM (c.decscnrsent)


if w.scno is outer joined to c, then w.wpsnr will be null and p.psnr cannot be equal to NULL so therefore, the (+) is not relevant, get rid of it.

It is unfortunate that you need the RTRIM -- is c.decscnrsent a CHAR or a VARCHAR2? If VARCHAR2, I don't see why the RTRIM is needed.
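To illustrate the UNION vs UNION ALL point (a sketch; T1 and T2 are placeholders): UNION must de-duplicate the combined result, which is exactly the kind of sort that spills to temp, while UNION ALL simply concatenates the row sources.

```sql
-- UNION: both row sources are combined and de-duplicated (a sort or
-- hash unique over the ENTIRE result) -- this can use lots of temp.
select col1, col2 from t1
union
select col1, col2 from t2;

-- UNION ALL: rows are returned as they come -- no de-duplication,
-- no sort. If duplicates are impossible (or acceptable), use this.
select col1, col2 from t1
union all
select col1, col2 from t2;
```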

statspack in servererror trigger?

A reader, August 01, 2004 - 10:00 pm UTC

ops$tkyte@ORA920> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  declare
  4      l_sql_text ora_name_list_t;
  5      l_n        number;
  6  begin
  7      if ( is_servererror(1652) )
  8      then
             ...
         end if;

         perfstat.statspack.snap;
 19  end;
 20  /

What do you think about putting a call to statspack in the trigger above?

This way I can get a complete context of my entire database at the time the error happened and I can analyze at leisure later..

Thanks

Tom Kyte
August 02, 2004 - 7:39 am UTC

i'd be a little more focused -- grab just the data you think you'd need to analyze this.

statspack gives you system stuff, seems you'd want information about the *session* here. or specific v$sort* table information.
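One possible way to capture session-level temp usage at the moment of the failure (a sketch, not tested here; the log table temp_usage_log and its columns are made up for illustration, and note that by the time the trigger fires, the failing statement's own temp space may already have been released):

```sql
-- Hypothetical log table for the captures.
create table temp_usage_log
( captured   date,
  username   varchar2(30),
  tablespace varchar2(30),
  blocks     number );

create or replace trigger capture_temp_on_1652
after servererror on database
begin
  if is_servererror(1652)
  then
     -- snapshot who was using how much temp at the time of the error
     insert into temp_usage_log
     select sysdate,
            ( select username from v$session s
               where s.saddr = u.session_addr ),
            u.tablespace, u.blocks
       from v$sort_usage u;
  end if;
end;
/
```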

Question

PRS, September 03, 2004 - 11:22 pm UTC

Tom,
I have a tablespace named PSTEMP defined as a dictionary-managed PERMANENT tablespace used for temporary segments, on Oracle 9.2.0.4 on Solaris 9. I have a physical standby database opened in read-only mode for reporting purposes. But when people try to execute a query on the physical standby database, they get the error "unable to allocate temp segment in tablespace PSTEMP for read-only database". So to avoid this I am thinking of converting PSTEMP on the primary database from a
permanent tablespace to a true temporary tablespace using a tempfile, as shown below.

1. Create tablespace PSTEMP1 as shown below.
create temporary tablespace PSTEMP1
tempfile '/tmp/test_temp.dbf' size 4000M reuse
extent management local
uniform size 1M
2. Move all users to point to the new temporary tablespace
PSTEMP1:
alter user <user name> temporary tablespace PSTEMP1;

3. Drop old tablespace PSTEMP
DROP TABLESPACE PSTEMP including contents

4. Bounce the database

5. Rename tablespace PSTEMP1 to PSTEMP?? Is this right,
as I want to keep the temp tablespace name because of third-party software?
6. point all users to tablespace PSTEMP

7. Bounce the database again

Is this right? I want to keep the temporary tablespace name PSTEMP.

Tom Kyte
September 04, 2004 - 11:02 am UTC

"I have tablespace named PSTEMP defined as dictionary PERMANENT temporary 
tablespace on oracle 9.2.0.4 on solaris 9."

yucky, you ought to correct that bad setup.

especially if you want to use Dataguard in that fashion!

it could be as easy as:

ops$tkyte@ORA9IR2> select username, temporary_tablespace from dba_users;
 
USERNAME                       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYS                            TEMP
SYSTEM                         TEMP
LBACSYS                        TEMP
....
ODM_MTR                        TEMP
 
34 rows selected.
 
ops$tkyte@ORA9IR2> create temporary tablespace good;
 
Tablespace created.
 
ops$tkyte@ORA9IR2> alter database default temporary tablespace good;
 
Database altered.
 
ops$tkyte@ORA9IR2> drop tablespace temp;
 
Tablespace dropped.
 
ops$tkyte@ORA9IR2> select username, temporary_tablespace from dba_users;
 
USERNAME                       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYS                            GOOD
SYSTEM                         GOOD
.........



if you are using the default temporary tablespace.  if you are not, just

a) create good
b) alter database
c) select 'alter user ' || username || ' temporary tablespace good;' from dba_users where temporary_tablespace <> 'GOOD';  and run the resulting SQL commands.
d) drop old tablespace


(why would 3rd party software affect this at all?) -- but if you must have it be "PSTEMP" then:


a) shutdown
b) startup restricted
c) drop 
d) create
e) turn off restricted session 
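Spelled out (a sketch; the file name and sizes are placeholders):

```sql
-- Run during a maintenance window with restricted session enabled,
-- so no session can be using the temporary tablespace mid-operation.
shutdown immediate
startup restrict

drop tablespace pstemp including contents;

create temporary tablespace pstemp
tempfile '/some/path/pstemp01.dbf' size 4000m reuse
extent management local uniform size 1m;

alter system disable restricted session;
```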

If I do this way

PRS, September 07, 2004 - 10:35 am UTC

I cannot bring down the database. If I use the first approach with the following sequence, is that right?

a) create good
b) alter database
c) select 'alter user ' || username || ' temporary tablespace good;' from
dba_users where temporary_tablespace <> 'GOOD'; and run the resulting SQL
commands.
d) drop old tablespace PSTEMP
e) create PSTEMP as shown below.
create temporary tablespace PSTEMP
tempfile '/data3/oradata/crmsysp/pstemp01.dbf' size 4000M
reuse extent management local uniform size 1M;
f) alter database
g) select 'alter user ' || username || ' temporary tablespace PSTEMP;' from
dba_users where temporary_tablespace <> 'PSTEMP'; and run the resulting SQL
commands.
h) drop tablespace GOOD


Tom Kyte
September 07, 2004 - 12:07 pm UTC

sure, you'll need pauses around (d) and (h) as you'll have to wait for any outstanding operations to complete.

Why?

PRS, September 08, 2004 - 9:49 am UTC

Tom,
I have two Oracle databases, one on 8.1.5 and the other
on 9.2.0.4. I issue the following statement in both
databases:
SELECT COUNT(*) FROM audit_trail;

The number of records is 50,000,000 in both databases.

On the 8.1.5 database the result comes back in 1 second, but
on the 9.2.0.4 database it takes 65 to 70 seconds.

Both tables are analyzed in both databases.

Any reason why 9.2.0.4 is taking such a long time?

Thanks,


Tom Kyte
September 08, 2004 - 10:31 am UTC

alter session set events '10046 trace name context forever, level 12';


trace it, tkprof it, see what it says.
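A sketch of the full sequence (trace file names and paths vary by platform and release; tkprof is run at the OS prompt against the raw trace file found in user_dump_dest):

```sql
-- In the session that runs the slow query:
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

select count(*) from audit_trail;

alter session set events '10046 trace name context off';

-- Then at the OS prompt, format the raw trace file, e.g.:
--   tkprof ora_<spid>.trc report.txt sys=no
-- and compare the row source operations and wait events between
-- the two databases.
```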

v$sort_usage and v$sort_segment

parag jayant patankar, October 26, 2004 - 8:14 am UTC

Hi Tom,

I am trying to learn the concepts behind v$sort_usage and v$sort_segment. For this, in one SQL window I am running a SQL command with an ORDER BY clause, which gives me the following picture:

select ( select username from v$session where saddr = session_addr) uname,
v.* from v$sort_usage v
/

UNAME USER SESSION_
------------------------------ ------------------------------ --------
SESSION_NUM SQLADDR SQLHASH TABLESPACE CONTENTS
----------- -------- ---------- ------------------------------- ---------
SEGTYPE SEGFILE# SEGBLK# EXTENTS BLOCKS SEGRFNO#
--------- ---------- ---------- ---------- ---------- ----------
OPS$ATLAS SYSTEM 3809D5BC
395 373A07A0 2203152678 TEMP PERMANENT
SORT 3 154301 229 14884 3

Now I have stopped the SQL query running in the first window, but it is still showing me the same result. Why? Is it because the "temp" tablespace is "permanent", not "temporary"?

2. When can I see data in v$sort_segment? While the SQL query was running I did a select * from v$sort_segment, but it is not showing me any data.

regards & thanks
pjp



Tom Kyte
October 26, 2004 - 8:23 am UTC

1) you might have "stopped" fetching rows from the cursor -- but is the cursor in fact CLOSED on the session? temp space is released back for reuse only after you are truly done using it.

2) v$sort_segment would always have information in it -- if you are using TEMPORARY tablespaces. You are not (see the contents -- permanent)
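You can confirm what kind of tablespace you really have with a query like:

```sql
-- A true temporary tablespace shows CONTENTS = 'TEMPORARY'
-- (and, when created with CREATE TEMPORARY TABLESPACE ... TEMPFILE,
-- its files appear in DBA_TEMP_FILES rather than DBA_DATA_FILES).
select tablespace_name, contents, extent_management
  from dba_tablespaces;
```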

v$sort_segment

Parag Jayant Patankar, October 26, 2004 - 10:09 am UTC

Hi Tom,

Thanks for your very precise and logical answers about v$sort_usage and v$sort_segment. In which situations or for which problems should we use the information in v$sort_segment? Please explain, if possible with an example.

regards & thanks
pjp

Tom Kyte
October 26, 2004 - 10:19 am UTC

umm, v$sort_segment tells you about your temporary tablespaces and how the space is allocated.

use it as you see fit? when you use temporary tablespaces?

How to find SQL which is causing sort ?

Parag Jayant Patankar, November 24, 2004 - 4:54 am UTC

Hi Tom,

I want to find the SQL that is causing a sort in the database, using the sqladdr or sqlhash value from the v$sort_usage view. How can I do this?

For e.g.
v$sort_usage showing me following

15:12:01 system:atp1p1@spren012> select * from v$sort_usage;

USER SESSION_ SESSION_NUM SQLADDR SQLHASH
------------------------------ -------- ----------- -------- ----------
TABLESPACE CONTENTS SEGTYPE SEGFILE# SEGBLK# EXTENTS BLOCKS
------------------------------- --------- --------- ---------- ---------- ---------- ----------
SEGRFNO#
----------
SYSTEM 3809D5BC 1423 37828640 2249281901
TEMP PERMANENT SORT 3 110365 27 1754
3

SYSTEM 3809FFD8 432 37828640 2249281901
TEMP PERMANENT SORT 3 94961 27 1754
3

I was running following sqls which requires sort operation

15:23:08 system:atp1p1@spren012> select * from v$sqltext where sql_text like 'select * from tdc31%';

ADDRESS HASH_VALUE COMMAND_TYPE PIECE
-------- ---------- ------------ ----------
SQL_TEXT
----------------------------------------------------------------
36477378 546283287 3 0
select * from tdc31 order by a1020

3648F44C 198298676 3 0
select * from tdc31 order by a6010

Which field should I choose, from which data dictionary view, to find which SQL is performing the sort operation? (Here the address and hash_value do not match what v$sort_usage is showing.)

regards & thanks
pjp


Tom Kyte
November 24, 2004 - 7:44 am UTC


select a.username, b.sql_text
from v$sort_usage a, v$sql b
where a.sqladdr = b.address
and a.sqlhash = b.hash_value
/


however -- it'll show the CURRENT SQL of the session, which may or may not be the sql that "used the temp space".

to see which SQL statements sort a lot, look at v$sql.
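For example, to find the statements that sort the most (a sketch; SORTS in V$SQL is a count of sort operations, not bytes, so it is a rough guide only):

```sql
select sql_text, sorts, executions
  from v$sql
 where sorts > 0
 order by sorts desc;
```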

Multiple Temporary Tablespaces

K.Rajeev, January 13, 2005 - 1:21 am UTC

Thanks for a lot of information, Tom. However, how do we assign multiple temporary tablespaces to users? What type of ALTER DATABASE command can we issue? We have one temporary tablespace for operations and one for production. Neither of them is being used by the database (Ora817). Would decreasing sort_area_size in init.ora result in the usage of the temp tablespaces? Your views and advice appreciated in advance.

Tom Kyte
January 13, 2005 - 8:53 am UTC

Oracle 817 didn't have the concept of a user having multiple temporary tablespaces.


If you aren't actually "using" them -- not really sure why you would want to start? temp is used when we need to save intermediate results on (slow) disk instead of (faster) ram.

RE : TEMP Tablespaces

A reader, January 21, 2005 - 3:47 pm UTC

Hi Tom,

In our environment we have Oracle9i Release 2. I checked V$PARAMETER for WORKAREA_SIZE_POLICY and PGA_AGGREGATE_TARGET. We have WORKAREA_SIZE_POLICY set to AUTO and PGA_AGGREGATE_TARGET set to 1GB. We are running a huge query, but we still got the error "..unable to allocate space in TEMP tablespace".
1. My understanding is that PGA_AGGREGATE_TARGET is used to allocate PGA memory for all connected sessions (i.e. allocated to the instance as a whole and not to individual sessions). Also, if some sessions do not concurrently use this memory, it can be used by other sessions. This was introduced in 9i so that the *_AREA_SIZE parameters need not be allocated per session, reducing memory waste when some sessions do not require that much PGA. Is my understanding correct?
2. If my understanding on question 1 is correct, then why do we still get the "unable to allocate TEMP tablespace..." error? How do we identify and resolve it?
3. In the discussion above, you mentioned that a TEMPORARY tablespace had been created as a PERMANENT tablespace and not as a "true" TEMPORARY tablespace. How can we identify whether a temporary tablespace is a true temporary tablespace or not?

Tom Kyte
January 21, 2005 - 8:01 pm UTC

1) pga_aggregate_target is there to try and limit the amount of dynamic memory oracle uses in processes for sorting and hashing. a single process will get some percentage (typically 5 or less percent) of the target to use. even if there is a single user running, they won't be using 100% of the target (because 20 other users could start a query at any moment).

2) because you are doing a BIG QUERY. I cannot imagine your temp is only a gig (that would be teeny tiny) and if you are doing something over a gig, even if you used the entire pga_aggregate_target you would hit this! (you have 1gig of ram; the instant you went over 1gig you would have required temp -- you would have run out about then)



all this means is you have insufficient temp space allocated for your database.
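To see how much of the temporary tablespace is actually allocated and in use at a point in time (a sketch; the counts are in database blocks, so multiply by the block size to get bytes):

```sql
-- TOTAL_BLOCKS: extents allocated to the sort segment so far;
-- USED_BLOCKS: blocks currently in use by active operations;
-- FREE_BLOCKS: allocated but currently unused.
select tablespace_name, total_blocks, used_blocks, free_blocks
  from v$sort_segment;
```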

cant get rid of ora-1652 error

Dan T., February 25, 2005 - 11:30 am UTC

I have the same problem as Singh had, near top of this thread.
Was trying to create a materialized view on a large table that uses a db link, and succeeded with some tables that weren't as large.
Running 9.2.0.6 on windows.
My temp tablespace is definitely temp-allocated, checked contents field of dba_tablespaces. (it's the default one created by dbca).
I tried all of the following:

ALTER DATABASE TEMPFILE 'C:\ORACLE\ORADATA\DEV\TEMP02.DBF'
RESIZE 2000M; -- initially 40M

ALTER TABLESPACE "TEMP" ADD TEMPFILE 'C:\ORACLE\ORADATA\DEV\temp02.dbf' SIZE 2000M;

alter tablespace temp default storage (initial 1M, next 1M);

ALTER DATABASE
TEMPFILE 'C:\ORACLE\ORADATA\DEV\TEMP01.DBF' AUTOEXTEND
ON;

Thanks for your help.


Tom Kyte
February 25, 2005 - 6:23 pm UTC

what tablespace was the error reported in (we use temp segments when creating objects that get turned into PERMANENT objects later...)

did you add space to the right thing?
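To check whether the tablespace actually named in the ORA-01652 message (for a materialized view build this is often the PERMANENT target tablespace, not TEMP) has room, something like:

```sql
-- Free space per tablespace; compare the largest free extent against
-- the extent size the failing operation was trying to allocate.
select tablespace_name,
       sum(bytes)/1024/1024 free_mb,
       max(bytes)/1024/1024 largest_free_extent_mb
  from dba_free_space
 group by tablespace_name;
```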

Got it working

Dan T., February 28, 2005 - 3:33 pm UTC

This was the error:
ORA-01652: unable to extend temp segment by in tablespace

so I thought the problem was with the TEMP tablespace.
Because of your followup, I tried increasing the permanent tablespace where the m-view would be stored, and it worked.
I didn't think I would have to do this because the tablespace is set for auto-allocation. Thanks!


v$sesstat, v$sort_usage difference

Jens, March 15, 2005 - 4:46 am UTC

Hi Tom,

I started an enormous GROUP BY select to test the v$sort_usage view. When I started the select I saw the blocks column growing in v$sort_usage. OK, but why can't I see the growing disk sort usage in v$sesstat?

select s.sid,
b.name,
a.value as bytes,
round( a.value / 1024/1024) as megabytes,
s.module,
s.action,
su.tablespace,
su.segtype,
su.blocks
from v$sesstat a,
v$sysstat b,
v$session s,
v$sort_usage su
where b.statistic# = a.statistic#
and a.sid = 141
and s.sid = a.sid
and b.name like 'sort%disk%'
and su.session_addr = s.saddr



SID NAME BYTES MEGABYTES MODULE ACTION TABLESPACE SEGTYPE BLOCKS
---------- -------------- ---------- ---------- ---------- -------- ----------- --------- ----------
141 sorts (disk) 1 0 SQL*Plus TSTEMP SORT 14592

1 rows retrieved

Bye,

Jens




Tom Kyte
March 15, 2005 - 8:18 am UTC

sorts to disk is a count of the number of times you sorted to disk

not the number of bytes you sorted.
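The related session statistics can be seen side by side (a sketch; the sid is the one from the example above):

```sql
-- 'sorts (disk)' and 'sorts (memory)' are COUNTS of sort operations;
-- 'sorts (rows)' is the number of rows sorted. None of them is bytes.
select sn.name, ss.value
  from v$sesstat ss, v$statname sn
 where ss.statistic# = sn.statistic#
   and ss.sid = 141          -- the session of interest
   and sn.name like 'sorts%';
```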

A reader, April 01, 2005 - 10:15 am UTC


A problem...

Mark, May 06, 2005 - 3:08 pm UTC

Hi Tom,
I've run into a problem with this temporary tablespace stuff. Maybe you can help...

Created user: RPT_USER
Create Role: RPT_ROLE

Assigned RPT_ROLE to RPT_USER. Logs in fine and dandy.

In LOGON trigger, check if USER has ROLE RPT_ROLE. If they do,
/* Because Dynamic SQL */
alter session set cursor_sharing=force;
/* Make objects visible to RPT_USER */
alter session set current_schema=HEALT4;

Now, RPT_USER has Temporary Tablespace set to RPT_TEMP which is a TEMPORARY TABLESPACE with 2 tempfiles of 128M and 1GB.

However, according to this thread, the CURRENT_SCHEMA will decide which temporary tablespace the user's session will use. So, since I set CURRENT_SCHEMA = HEALT4, it tries to use that tablespace, which is named: TEMP.

Unfortunately, that TEMP tablespace contains no tempfiles. It is normal tablespace marked temporary.

When I run a big query with huge sort, I get:

ORA-01682: read-only DB cannot allocate temporary space in tablespace TEMP

Is my solution to add a tempfile to TEMP? Will that work? Or can I not do that and have to reassign the HEALT4 Temporary Tablespace to RPT_USER?

Again, paranoid about messing with settings as this is a 24x7 OLTP App.

Thanks,
Mark

Tom Kyte
May 06, 2005 - 5:21 pm UTC

why not correct TEMP? it should be a true temporary tablespace with tempfiles.

arghh...

Mark, May 06, 2005 - 3:47 pm UTC

I meant to say

"...Or can I not do that and have to reassign the HEALT4 Temporary Tablespace to RPT_TEMP?"




Tom Kyte
May 06, 2005 - 5:26 pm UTC

see above, why isn't temp a true temporary tablespace?

Yes, why isn't it?

Mark, May 09, 2005 - 12:35 pm UTC

Well, i didn't create this database, but it is up to me to get it working tip top...

So I've decided to create tempfiles in RPT_TEMP that mimic the datafiles in TEMP, reassign HEALT4 user's temporary tablespace to RPT_TEMP, and watch for problems. So far so good. I can see the RPT_TEMP tablespace Used Bytes increasing by some amount occasionally, so it is looking ok.

Thanks again.

Great discussion!

Jon, June 23, 2005 - 9:53 am UTC

Your trigger that fires on servererror is going to help a lot for us. Thank you.

Now, how can I see how much space was being used by the offending query as well as the other sessions right when the error happens? I tried adding a cursor to the trigger that looks at v$sort_usage but it appears to be after the temp space is free again and it doesn't show how much space was being used when the error happens.

The reason why we want this is that we have adhoc users with their own tablespace and we are trying to determine if they need more space because we have more users or if one or two users are just writing "bad" SQL that is causing temp space to run out.

thanks again!



Tom Kyte
June 23, 2005 - 6:47 pm UTC

well, question to you, suppose the query that "ran out of space" was using 5k of temp?

because someone else was using 5 gigabytes of temp....

the guy that runs out isn't the guy using the most.

maybe you can look at "free" space in temp if they just gave it up?

creating true temp tablespace

steve, July 12, 2005 - 1:43 pm UTC

Hi Tom ,

1)On Aix 5.2 , what minimum size a tempfile of a true temp tablespace can have ?

2) i use raw files for other datafiles, can this tempfile be raw file , pls explain :


thanks ,

Tom Kyte
July 13, 2005 - 10:35 am UTC

1) no idea -- you would never create one that tiny -- but a couple of meg should "work"

sys@ORA10GR1> create temporary tablespace t tempfile size 1m;
create temporary tablespace t tempfile size 1m
*
ERROR at line 1:
ORA-03214: File Size specified is smaller than minimum required


sys@ORA10GR1> create temporary tablespace t tempfile size 2m;

Tablespace created.


2meg seemed to "work" but would be "useless", so interesting factoid but not anything you can really "use"

2) yes, they can be raw. raw, cooked, asm, they all look the same to us, they are just file names.

steve, July 13, 2005 - 12:04 pm UTC

Thanks ,
I tried a 5MB tempfile but it failed, or maybe I don't have permission to write in /dev, etc.
For other datafiles I create raw files (with mklv) and then add them to the database in CREATE TABLESPACE or ALTER TABLESPACE. Is the same true for the tempfiles of the temp tablespace (raw or otherwise)? (We have Oracle 8.1.7.4, AIX 5.2.)
Thank you,


Tom Kyte
July 13, 2005 - 1:02 pm UTC

error messages are always very useful.

but -- again, 5mb would be "less than useful", this falls into the category of "since you'd never create anything this small, why bother"?


you use raw with temp the same way you use raw with permanent.

I've never admin'ed AIX myself.

Temporary Tablespace vs. pga_aggregate_target

mAg, August 04, 2005 - 12:27 pm UTC

Hi Tom

These are my current PGA settings. It is a DW database; most of the tables have more than 10 million rows. But still there is a rapid increase in temp tablespace usage, and I am not sure whether it is using memory or not.

Can you give your valuable suggestions on this issue?
Please correct me if I am wrong.

Thanks
mAg

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE    9.2.0.6.0       Production
TNS for HPUX: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production


SQL> show parameter sort_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sort_area_size                       integer     67108864
SQL> show parameter pga_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 2147483648
SQL> show parameter work

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      AUTO
SQL> select name,value from v$sysstat where name like 'workarea executions%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
workarea executions - optimal                                        110863
workarea executions - onepass                                           134
workarea executions - multipass                                           0

SQL> select * from v$pgastat;

NAME                                                                  VALUE UNIT
---------------------------------------------------------------- ---------- ------------
aggregate PGA target parameter                                   2147483648 bytes
aggregate PGA auto target                                        1916706816 bytes
global memory bound                                               104857600 bytes
total PGA inuse                                                    18112512 bytes
total PGA allocated                                                31262720 bytes
maximum PGA allocated                                             708257792 bytes
total freeable PGA memory                                           2097152 bytes
PGA memory freed back to OS                                      6926303232 bytes
total PGA used for auto workareas                                    311296 bytes
maximum PGA used for auto workareas                               385204224 bytes
total PGA used for manual workareas                                       0 bytes
maximum PGA used for manual workareas                               6365184 bytes
over allocation count                                                     0
bytes processed                                                  6.5048E+10 bytes
extra bytes read/written                                         3.0244E+10 bytes
cache hit percentage                                                  68.26 percent 

Tom Kyte
August 04, 2005 - 1:46 pm UTC

looks good to me? no multipass operations and very few onepass operations and most are done optimally?

But, what can be the cause of rapid use of TEMP tablespace...

mAg, August 04, 2005 - 2:40 pm UTC

Hi Tom,
Thanks a lot for your quick response.

I agree with you that the following output shows an optimal result. But when users fire queries with COUNT(1), DISTINCT and similar clauses, a 10GB temp tablespace fills rapidly. In this case, can we reduce the "workarea executions - onepass" value somewhat, or is that the cause of the temp tablespace usage?


Thanks,
mAg

SQL> select name,value from v$sysstat where name like 'workarea executions%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
workarea executions - optimal                                        110863
workarea executions - onepass                                           134
workarea executions - multipass                                           0

 

Tom Kyte
August 04, 2005 - 8:38 pm UTC

are you asking "how can a single user consume over 10gig of ram to perform a single query on an ongoing basis"?

what is the query for real, that they generate 10gig of temp that they will actually look at?

Query killing temp space..

mAg, August 05, 2005 - 4:25 pm UTC

No. It is like this: we allocated a temp tablespace of 10GB, but a table with 59384176 rows eats up 4.5GB of temp tablespace.

The query is as :
"select distinct prod_code, geog_code, time_week from POS_FACT_NON_ADD_STG;"

But my question is: though the "onepass" value looks optimal, why does it go to TEMP instead of sorting in memory?

Thanking you once again for your such help.

mAg


Tom Kyte
August 05, 2005 - 5:46 pm UTC

because an individual is only going to get about 5% of the pga_aggregate_target at any point in time (by design), the automatic stuff is designed to "share"

4.5 gig isn't typically going to be done in ram either, not with auto workareas.



If a user prosess require more than 5% of memory from PGA_AGG..

mAg, August 06, 2005 - 9:06 am UTC

Thank you, learn a lot from you.

Following your answer, my understanding is that a user process can use a maximum of 5% of pga_aggregate_target at a time, and beyond that it will consume TEMP space.

If a long-running query or batch process needs more than 5%, and I want to allocate it more PGA memory to avoid using TEMP, how can that be achieved when workarea_size_policy is set to AUTO?

Thanks,
mAg


Tom Kyte
August 06, 2005 - 10:23 am UTC

No, a single OPERATION will use that. Perhaps see this:

</code> http://oracle.apress.com//betabooks/betabook.html?betabook_id=30&APRESSESSID=c5e74fa001f64357db462f5e677af1b1 <code>

(note to future readers, if that link doesn't work that is because it points to a beta version of a chapter in my book, after book is published in sep 2005, it won't be there anymore)

it explains how automatic pga memory management works.

tablespace usage and sort in memory

A reader, August 12, 2005 - 7:22 am UTC

Hi

I keep getting ORA-01652 in the temporary tablespace; however, the user session statistics always show sorts to disk as 0. How so?

Tom Kyte
August 12, 2005 - 8:47 am UTC

hash joins, global temporary tables, .......


other things use temp.

I am the only user logged in

A reader, August 12, 2005 - 9:06 am UTC

Hi

I am the only user logged into the server. The operation adds a primary key on a 22GB table, so it is normal that it uses the temporary tablespace. What I don't understand is why it doesn't show in v$sesstat -- it shows zero for sorts (disk).

Tom Kyte
August 12, 2005 - 9:24 am UTC

[tkyte@localhost tkyte]$ oerr ora 1652
01652, 00000, "unable to extend temp segment by %s in tablespace %s"
// *Cause: Failed to allocate an extent for temp segment in tablespace.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
// files to the tablespace indicated.


It is likely NOT your temporary tablespace then.

When we create a segment, it is done using temporary extents, at the end of the operation the temporary extents are converted into permanent ones. Why? So that if the instance fails 50% of the way through the create -- SMON will clean up the temp extents naturally.

So, look closer at your error message, note the NAME of the tablespace. You are likely running out of room in the tablespace you are creating the index in.



it is the temporary tablespace

A reader, August 13, 2005 - 9:57 am UTC

Hi

It is the temporary tablespace that is failing, because the error shows the tablespace name TEMPO (our temp tablespace); also v$sort_segment shows the usage going up; and finally, I managed to validate the primary key constraint after increasing the temp tablespace from 8GB to 20GB. I don't understand why this usage is not shown in the system statistics.

And by the way, the index for this PK was already created (I should have mentioned that before, sorry); I am simply adding the constraint, and the validation of the constraint fails because the validation process runs this query:

SELECT /*+ all_rows ordered */
"A".ROWID, 'EIS_ADMIN', 'TRANS_NOCUBO_REGALOS',
'PK_TRANS_NOCUBO_REGALOS'
FROM "EIS_ADMIN"."TRANS_NOCUBO_REGALOS" "A",
(SELECT /*+ all_rows */
"ANIO", "TRIMESTRE", "MES", "ENT_EXPLOT", "PAIS",
"PROVINCIA_ES", "ZONA", "REGALO", "ACCION", "MODO_REDENCION",
"TIPOLOGIA_1", "TIPOLOGIA_2", "N_CONCN"
FROM "EIS_ADMIN"."TRANS_NOCUBO_REGALOS" "A"
WHERE ( "ANIO" IS NOT NULL
AND "TRIMESTRE" IS NOT NULL
AND "MES" IS NOT NULL
AND "ENT_EXPLOT" IS NOT NULL
AND "PAIS" IS NOT NULL
AND "PROVINCIA_ES" IS NOT NULL
AND "ZONA" IS NOT NULL
AND "REGALO" IS NOT NULL
AND "ACCION" IS NOT NULL
AND "MODO_REDENCION" IS NOT NULL
AND "TIPOLOGIA_1" IS NOT NULL
AND "TIPOLOGIA_2" IS NOT NULL
AND "N_CONCN" IS NOT NULL
)
GROUP BY "ANIO",
"TRIMESTRE",
"MES",
"ENT_EXPLOT",
"PAIS",
"PROVINCIA_ES",
"ZONA",
"REGALO",
"ACCION",
"MODO_REDENCION",
"TIPOLOGIA_1",
"TIPOLOGIA_2",
"N_CONCN"
HAVING COUNT (1) > 1) "B"
WHERE ( "A"."ANIO" = "B"."ANIO"
AND "A"."TRIMESTRE" = "B"."TRIMESTRE"
AND "A"."MES" = "B"."MES"
AND "A"."ENT_EXPLOT" = "B"."ENT_EXPLOT"
AND "A"."PAIS" = "B"."PAIS"
AND "A"."PROVINCIA_ES" = "B"."PROVINCIA_ES"
AND "A"."ZONA" = "B"."ZONA"
AND "A"."REGALO" = "B"."REGALO"
AND "A"."ACCION" = "B"."ACCION"
AND "A"."MODO_REDENCION" = "B"."MODO_REDENCION"
AND "A"."TIPOLOGIA_1" = "B"."TIPOLOGIA_1"
AND "A"."TIPOLOGIA_2" = "B"."TIPOLOGIA_2"
AND "A"."N_CONCN" = "B"."N_CONCN"
)

This query is filling up the temporary tablespace.

Tom Kyte
August 13, 2005 - 10:06 am UTC

Because some stats (like CPU time) are not reported until the end of the call -- and this statement failed; it didn't work, it bombed. Therefore, why should it report a sort to disk? It is as if it never actually *happened*.


allocate more temp if you would like to complete this operation.

I dont agree

A reader, August 13, 2005 - 2:18 pm UTC

Hi

You are saying that if the operation fails then there is no point in reporting the statistics. I do not agree: in a real system the statistics should be reported even if the operation fails; if not, how do we debug/trace/tune the query? Also, if that is the case, then a Statspack report cannot be very accurate. Imagine we run Statspack to tune a batch process and the two snapshots show the same statistics for the data we are interested in -- how can we trust the system statistics then?


The thing is, I thought as you did that the statistics are probably reported at the end, so once the operation completed I checked v$sesstat again -- but sorts (disk) still showed just 1!!!

Sorry, I am not trying to be difficult; it's just that this is something very hard to explain -- no statement about it whatsoever. I searched the docs and nowhere explains this behaviour!

Tom Kyte
August 13, 2005 - 3:34 pm UTC

You and I don't have to agree as to "what you think should happen"; we can only discuss what actually takes place, for real.  Speaking hypothetically is one thing (I never said whether I thought this was the best way to do it or not; I'm just telling you "how it is").
 

You got a very clear error message: "we ran out of room in temp".
You observed v$sort views growing in allocations.

I'm not sure what more to say?  What is left to explain?  You ran out of temp, statement failed.  What is left to debug???  


If your batch FAILS, it didn't happen.  You got what I think is one of the more "clear" and to the point error messages.

You executed zero successful statements.  I don't know what else to say.



ops$tkyte@ORA10GR1> create temporary tablespace ttt
  2  tempfile size 2m
  3  /

Tablespace created.

ops$tkyte@ORA10GR1> alter user ops$tkyte temporary tablespace ttt;

User altered.

ops$tkyte@ORA10GR1> alter session set sort_area_size = 65536;

Session altered.

ops$tkyte@ORA10GR1> alter session set workarea_size_policy = manual;

Session altered.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> @mystat "sorts (disk)"
ops$tkyte@ORA10GR1> set echo off

NAME                                VALUE
------------------------------ ----------
sorts (disk)                            0

ops$tkyte@ORA10GR1> create index t_idx on big_table(owner,object_name,object_type,id,object_id);
create index t_idx on big_table(owner,object_name,object_type,id,object_id)
                      *
ERROR at line 1:
ORA-01652: unable to extend temp segment by 128 in tablespace TTT


ops$tkyte@ORA10GR1> @mystat2 "sorts (disk)"
ops$tkyte@ORA10GR1> set echo off

NAME                                    V DIFF
------------------------------ ---------- ----------------
sorts (disk)                            0                0


That is the way it works.  You had everything and more to diagnose this -- the error message, the diagnoses of temp via the v$ views -- EVERYTHING. 

what statistics are not updated

A reader, August 14, 2005 - 6:57 am UTC

Hi

Do you know which statistics are not updated if the operation fails, and which are updated in real time (for example, I/O is updated in real time)? Other than sorts and CPU time?


Cheers

Tom Kyte
August 14, 2005 - 9:34 am UTC

The CPU time is reported at the end of the call. Here, the SORT TO DISK never really "happened"; it was a "non-event" -- the statement "didn't happen".

We don't build systems to fail thousands of statements constantly, and you need to fix the failed statement anyway.

Everything needed, and more, to diagnose this particular issue was clearly presented.

That the sorts to disk statistic wasn't incremented is, to me, a non-event as well.
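Because session statistics like sorts (disk) only appear at the end of the call, the way to watch a long-running sort in flight is from a second session via the v$ views mentioned at the top of this thread. A sketch (in 9i and later, V$SORT_USAGE also exposes a USERNAME column directly):

```sql
-- Run this from another session while the long operation is executing.
-- BLOCKS * database block size = temp space currently held by each session.
select s.username, s.sid, u.tablespace, u.segtype, u.extents, u.blocks
  from v$session s, v$sort_usage u
 where s.saddr = u.session_addr
 order by u.blocks desc;
```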

Proof

A reader, August 17, 2005 - 3:29 am UTC

Hi, here is the proof of why I am saying sorts (disk) is not increased even though the temporary tablespace size is:


set timi on
col name for a30
set line 140
set pagesize 100
col segment_name for a30
set trimspool on

create table test$sesstat
as select 1 as ID, a.*
from v$mystat a;

Table created.

Elapsed: 00:00:00.04

select * from v$sysstat where name like '%logon%'
or name like '%sort%';

STATISTIC# NAME                                CLASS      VALUE
---------- ------------------------------ ---------- ----------
         0 logons cumulative                       1         18
         1 logons current                          1          7
       245 sorts (memory)                         64        800
       246 sorts (disk)                           64          0
       247 sorts (rows)                           64       3625

Elapsed: 00:00:00.01

select file_name, bytes/(1024*1024) from dba_temp_files;

FILE_NAME                                BYTES/(1024*1024)
---------------------------------------- -----------------
/dwhome05/soft/oradata/DW/temp00.dbf                  8172

Elapsed: 00:00:00.00

alter session set sort_area_size = 32000000;

Session altered.

Elapsed: 00:00:00.00

ALTER TABLE TRANS_NOCUBO_REGALOS
ADD CONSTRAINT PK_TRANS_NOCUBO_REGALOS
PRIMARY KEY
(ANIO, TRIMESTRE, MES, ENT_EXPLOT, PAIS, PROVINCIA_ES, ZONA, REGALO, ACCION, MODO_REDENCION, TIPOLOGIA_1, TIPOLOGIA_2, N_CONCN)
USING INDEX LOCAL;

Table altered.

Elapsed: 01:29:22.87

select segment_name, bytes/(1024*1024) SIZE_T
from user_segments where segment_name = 'PK_TRANS_NOCUBO_REGALOS';

SEGMENT_NAME SIZE_T
------------------------------ ----------
PK_TRANS_NOCUBO_REGALOS 208
PK_TRANS_NOCUBO_REGALOS 256
PK_TRANS_NOCUBO_REGALOS 264
PK_TRANS_NOCUBO_REGALOS 360
PK_TRANS_NOCUBO_REGALOS 1408
PK_TRANS_NOCUBO_REGALOS 701
PK_TRANS_NOCUBO_REGALOS 664
PK_TRANS_NOCUBO_REGALOS 624
PK_TRANS_NOCUBO_REGALOS 696
PK_TRANS_NOCUBO_REGALOS 577
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 561
PK_TRANS_NOCUBO_REGALOS 377
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 505
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 505
PK_TRANS_NOCUBO_REGALOS 1345
PK_TRANS_NOCUBO_REGALOS 232
PK_TRANS_NOCUBO_REGALOS 280
PK_TRANS_NOCUBO_REGALOS 496
PK_TRANS_NOCUBO_REGALOS 280
PK_TRANS_NOCUBO_REGALOS 304
PK_TRANS_NOCUBO_REGALOS 616
PK_TRANS_NOCUBO_REGALOS 256
PK_TRANS_NOCUBO_REGALOS 312
PK_TRANS_NOCUBO_REGALOS 600
PK_TRANS_NOCUBO_REGALOS 344
PK_TRANS_NOCUBO_REGALOS 352
PK_TRANS_NOCUBO_REGALOS 728
PK_TRANS_NOCUBO_REGALOS 320
PK_TRANS_NOCUBO_REGALOS 352
PK_TRANS_NOCUBO_REGALOS 352
PK_TRANS_NOCUBO_REGALOS 317
PK_TRANS_NOCUBO_REGALOS 328
PK_TRANS_NOCUBO_REGALOS 304
PK_TRANS_NOCUBO_REGALOS 288
PK_TRANS_NOCUBO_REGALOS 248
PK_TRANS_NOCUBO_REGALOS 328
PK_TRANS_NOCUBO_REGALOS 328
PK_TRANS_NOCUBO_REGALOS 344
PK_TRANS_NOCUBO_REGALOS 336
PK_TRANS_NOCUBO_REGALOS 1472
PK_TRANS_NOCUBO_REGALOS 305
PK_TRANS_NOCUBO_REGALOS 304
PK_TRANS_NOCUBO_REGALOS 305

51 rows selected.

Elapsed: 00:00:00.16

insert into test$sesstat
select 2, a.*
from v$mystat a;

268 rows created.

Elapsed: 00:00:00.00

commit;

Commit complete.

Elapsed: 00:00:00.00

select * from v$sysstat where name like '%logon%'
or name like '%sort%';

STATISTIC# NAME                                CLASS      VALUE
---------- ------------------------------ ---------- ----------
         0 logons cumulative                       1         18
         1 logons current                          1          7
       245 sorts (memory)                         64        816
       246 sorts (disk)                           64          1
       247 sorts (rows)                           64   92257085

select file_name, bytes/(1024*1024) from dba_temp_files;

FILE_NAME                                BYTES/(1024*1024)
---------------------------------------- -----------------
/dwhome05/soft/oradata/DW/temp00.dbf                 19572

Elapsed: 00:00:00.00

---------------------------------

select name, decode(b.id, 1, 'START', 'END') TIMING, VALUE
from v$statname a, eis_admin.test$sesstat b
where a.STATISTIC# = b.STATISTIC#
and a.name like '%sort%';

NAME                                TIMIN      VALUE
----------------------------------- ----- ----------
sorts (memory)                      START         30
sorts (memory)                      END           46
sorts (disk)                        START          0
sorts (disk)                        END            1
sorts (rows)                        START        162
sorts (rows)                        END     92253622


As you can see, during the process the only user is me (per the logons cumulative statistic in v$sysstat), the session sorted over 90 million rows, and the temporary tablespace grew from 8GB to 19GB -- yet sorts (disk) is one?! The operation was successful, so it cannot be a non-event!

Cheers




Tom Kyte
August 17, 2005 - 1:15 pm UTC

*I KNOW TEMP WAS USED*

that is so far beyond the point here.


YOUR STATEMENT FAILED.
IT NEVER HAPPENED.

ALL BETS OFF

Not relevant. I don't get the point. sort to disk would be reported at the end of a successful statement.

You didn't have one.

DONE.

uh, my operation didn't fail

A reader, August 17, 2005 - 1:26 pm UTC

Hi

My operation did NOT fail!

Please look:

ALTER TABLE TRANS_NOCUBO_REGALOS
ADD CONSTRAINT PK_TRANS_NOCUBO_REGALOS
PRIMARY KEY
(ANIO, TRIMESTRE, MES, ENT_EXPLOT, PAIS, PROVINCIA_ES, ZONA, REGALO, ACCION,
MODO_REDENCION, TIPOLOGIA_1, TIPOLOGIA_2, N_CONCN)
USING INDEX LOCAL;

Table altered.

Elapsed: 01:29:22.87

NO ERROR!


Tom Kyte
August 17, 2005 - 2:06 pm UTC

sorry -- thought it was the same person with the statement that was failing:


ops$tkyte@ORA9IR2> set echo off

NAME                                VALUE
------------------------------ ----------
sorts (memory)                         63
sorts (disk)                            0
sorts (rows)                          423

ops$tkyte@ORA9IR2> 2
  2* object_id, object_name, owner, objec_type )
ops$tkyte@ORA9IR2> c/c_/ct_
  2* object_id, object_name, owner, object_type )
ops$tkyte@ORA9IR2> /

Table altered.

ops$tkyte@ORA9IR2> @mystat sort
ops$tkyte@ORA9IR2> set echo off

NAME                                VALUE
------------------------------ ----------
sorts (memory)                         77
sorts (disk)                            1
sorts (rows)                        28732





Your example certainly seems to show the same??? sorts to disk went up by one???



You did one sort, it was big, but it was ONE SORT to disk.


What is the issue?  It seems right and correct to me?

 

How come only one sort

A reader, August 17, 2005 - 5:30 pm UTC

Hi

So a disk sort can be of any size?! I mean, filling up a 20GB temporary tablespace (it went from 8GB to 20GB) was only one sort...?!

I am going to read the manuals for the definition of a disk sort :P

Tom Kyte
August 17, 2005 - 5:35 pm UTC

a sort to disk is a sort operation that wrote to temp.

You sorted one time, once.
Hence one sort to disk operation.


It does not matter if you sorted 100,000 GB to disk or 1 byte to disk; it was one sort, one sort to disk.



waste of temp

Marc, August 18, 2005 - 5:34 am UTC

hi tom,
I have an insert /*+ append */ ... select statement which is using 40 GB of temp. The query is going to transfer about 90 million records, and it comes back immediately if I run it on its own, without inserting. dbms_xplan( sqlid, childno ) doesn't show any temp usage!

Why is it placing the data in temp when it could write directly into the target table???

here is the plan:

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT|
----------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 31M(100)| | | |
| 1 | LOAD AS SELECT | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :Q1030 | 55G| 58T| 31M (1)|159:06:29 | Q1,30 | P->S |
|* 4 | HASH JOIN RIGHT OUTER BUFFERED | | 55G| 58T| 31M (1)|159:06:29 | Q1,30 | PCWP |
| 5 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 6 | PX RECEIVE | | 2452 | 53944 | 5 (0)| 00:00:01 | Q1,30 | PCWP |
| 7 | PX SEND BROADCAST | :Q0 | 2452 | 53944 | 5 (0)| 00:00:01 | | S->P |
|* 8 | TABLE ACCESS FULL | DIM_KNOTEN | 2452 | 53944 | 5 (0)| 00:00:01 | | |
|* 9 | HASH JOIN RIGHT OUTER | | 2265M| 2409G| 31M (1)|159:04:18 | Q1,30 | PCWP |
| 10 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 11 | PX RECEIVE | | 2452 | 53944 | 5 (0)| 00:00:01 | Q1,30 | PCWP |
| 12 | PX SEND BROADCAST | :Q0 | 2452 | 53944 | 5 (0)| 00:00:01 | | S->P |
|* 13 | TABLE ACCESS FULL | DIM_KNOTEN | 2452 | 53944 | 5 (0)| 00:00:01 | | |
|* 14 | HASH JOIN RIGHT OUTER | | 92M| 96G| 31M (1)|159:04:12 | Q1,30 | PCWP |
| 15 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 16 | PX RECEIVE | | 186 | 2046 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 17 | PX SEND BROADCAST | :Q0 | 186 | 2046 | 3 (0)| 00:00:01 | | S->P |
|* 18 | TABLE ACCESS FULL | DIM_FREMDWAEHRUNG | 186 | 2046 | 3 (0)| 00:00:01 | | |
|* 19 | HASH JOIN RIGHT OUTER | | 92M| 95G| 31M (1)|159:04:07 | Q1,30 | PCWP |
| 20 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 21 | PX RECEIVE | | 30 | 300 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 22 | PX SEND BROADCAST | :Q0 | 30 | 300 | 3 (0)| 00:00:01 | | S->P |
|* 23 | TABLE ACCESS FULL | DIM_BWGART_ZAHLEINGANG | 30 | 300 | 3 (0)| 00:00:01 | | |
|* 24 | HASH JOIN RIGHT OUTER | | 92M| 94G| 31M (1)|159:04:01 | Q1,30 | PCWP |
| 25 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 26 | PX RECEIVE | | 25 | 350 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 27 | PX SEND BROADCAST | :Q0 | 25 | 350 | 3 (0)| 00:00:01 | | S->P |
|* 28 | TABLE ACCESS FULL | DIM_ZAHLUNGSART | 25 | 350 | 3 (0)| 00:00:01 | | |
|* 29 | HASH JOIN RIGHT OUTER | | 92M| 93G| 31M (1)|159:03:56 | Q1,30 | PCWP |
| 30 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 31 | PX RECEIVE | | 8 | 72 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 32 | PX SEND BROADCAST | :Q0 | 8 | 72 | 3 (0)| 00:00:01 | | S->P |
|* 33 | TABLE ACCESS FULL | DIM_BWGART_AUSKEHR | 8 | 72 | 3 (0)| 00:00:01 | | |
|* 34 | HASH JOIN RIGHT OUTER | | 92M| 92G| 31M (1)|159:03:51 | Q1,30 | PCWP |
| 35 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 36 | PX RECEIVE | | 7 | 56 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 37 | PX SEND BROADCAST | :Q0 | 7 | 56 | 3 (0)| 00:00:01 | | S->P |
|* 38 | TABLE ACCESS FULL | DIM_SCHADSTOFFKLASSE | 7 | 56 | 3 (0)| 00:00:01 | | |
|* 39 | HASH JOIN RIGHT OUTER | | 92M| 91G| 31M (1)|159:03:45 | Q1,30 | PCWP |
| 40 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 41 | PX RECEIVE | | 4 | 32 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 42 | PX SEND BROADCAST | :Q0 | 4 | 32 | 3 (0)| 00:00:01 | | S->P |
|* 43 | TABLE ACCESS FULL | DIM_SONDFAKTURAKZ | 4 | 32 | 3 (0)| 00:00:01 | | |
|* 44 | HASH JOIN RIGHT OUTER | | 92M| 91G| 31M (1)|159:03:40 | Q1,30 | PCWP |
| 45 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 46 | PX RECEIVE | | 3 | 24 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 47 | PX SEND BROADCAST | :Q0 | 3 | 24 | 3 (0)| 00:00:01 | | S->P |
|* 48 | TABLE ACCESS FULL | DIM_NUTZERART | 3 | 24 | 3 (0)| 00:00:01 | | |
|* 49 | HASH JOIN RIGHT OUTER | | 92M| 90G| 31M (1)|159:03:34 | Q1,30 | PCWP |
| 50 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 51 | PX RECEIVE | | 3 | 27 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 52 | PX SEND BROADCAST | :Q0 | 3 | 27 | 3 (0)| 00:00:01 | | S->P |
|* 53 | TABLE ACCESS FULL | DIM_FINANZGSART | 3 | 27 | 3 (0)| 00:00:01 | | |
|* 54 | HASH JOIN RIGHT OUTER | | 92M| 89G| 31M (1)|159:03:29 | Q1,30 | PCWP |
| 55 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 56 | PX RECEIVE | | 3 | 27 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 57 | PX SEND BROADCAST | :Q0 | 3 | 27 | 3 (0)| 00:00:01 | | S->P |
|* 58 | TABLE ACCESS FULL | DIM_DIENST | 3 | 27 | 3 (0)| 00:00:01 | | |
|* 59 | HASH JOIN RIGHT OUTER | | 92M| 88G| 31M (1)|159:03:23 | Q1,30 | PCWP |
| 60 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 61 | PX RECEIVE | | 3 | 24 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 62 | PX SEND BROADCAST | :Q0 | 3 | 24 | 3 (0)| 00:00:01 | | S->P |
|* 63 | TABLE ACCESS FULL | DIM_GEWICHTSKLASSE | 3 | 24 | 3 (0)| 00:00:01 | | |
|* 64 | HASH JOIN RIGHT OUTER | | 92M| 88G| 31M (1)|159:03:18 | Q1,30 | PCWP |
| 65 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 66 | PX RECEIVE | | 3 | 24 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 67 | PX SEND BROADCAST | :Q0 | 3 | 24 | 3 (0)| 00:00:01 | | S->P |
|* 68 | TABLE ACCESS FULL | DIM_ACHSKLASSE | 3 | 24 | 3 (0)| 00:00:01 | | |
|* 69 | HASH JOIN RIGHT OUTER | | 92M| 87G| 31M (1)|159:03:13 | Q1,30 | PCWP |
| 70 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 71 | PX RECEIVE | | 2 | 18 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 72 | PX SEND BROADCAST | :Q0 | 2 | 18 | 3 (0)| 00:00:01 | | S->P |
|* 73 | TABLE ACCESS FULL | DIM_STRASSENBETREIBER | 2 | 18 | 3 (0)| 00:00:01 | | |
|* 74 | HASH JOIN RIGHT OUTER | | 92M| 86G| 31M (1)|159:03:07 | Q1,30 | PCWP |
| 75 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 76 | PX RECEIVE | | 2 | 16 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 77 | PX SEND BROADCAST | :Q0 | 2 | 16 | 3 (0)| 00:00:01 | | S->P |
|* 78 | TABLE ACCESS FULL | DIM_ZEITKLASSE | 2 | 16 | 3 (0)| 00:00:01 | | |
|* 79 | HASH JOIN RIGHT OUTER | | 92M| 86G| 31M (1)|159:03:02 | Q1,30 | PCWP |
| 80 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 81 | PX RECEIVE | | 2 | 16 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 82 | PX SEND BROADCAST | :Q0 | 2 | 16 | 3 (0)| 00:00:01 | | S->P |
|* 83 | TABLE ACCESS FULL | DIM_ORTSKLASSE | 2 | 16 | 3 (0)| 00:00:01 | | |
|* 84 | HASH JOIN RIGHT OUTER | | 92M| 85G| 31M (1)|159:02:56 | Q1,30 | PCWP |
| 85 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 86 | PX RECEIVE | | 43613 | 766K| 120 (3)| 00:00:03 | Q1,30 | PCWP |
| 87 | PX SEND BROADCAST | :Q0 | 43613 | 766K| 120 (3)| 00:00:03 | | S->P |
|* 88 | TABLE ACCESS FULL | DIM_DEBITOR | 43613 | 766K| 120 (3)| 00:00:03 | | |
|* 89 | HASH JOIN RIGHT OUTER | | 92M| 83G| 31M (1)|159:02:49 | Q1,30 | PCWP |
| 90 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 91 | PX RECEIVE | | 265 | 5300 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 92 | PX SEND BROADCAST | :Q0 | 265 | 5300 | 3 (0)| 00:00:01 | | S->P |
|* 93 | TABLE ACCESS FULL | DIM_TARIF | 265 | 5300 | 3 (0)| 00:00:01 | | |
|* 94 | HASH JOIN RIGHT OUTER | | 92M| 82G| 31M (1)|159:02:43 | Q1,30 | PCWP |
| 95 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 96 | PX RECEIVE | | 250 | 2500 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 97 | PX SEND BROADCAST | :Q0 | 250 | 2500 | 3 (0)| 00:00:01 | | S->P |
|* 98 | TABLE ACCESS FULL | DIM_LAND | 250 | 2500 | 3 (0)| 00:00:01 | | |
|* 99 | HASH JOIN RIGHT OUTER | | 92M| 81G| 31M (1)|159:02:38 | Q1,30 | PCWP |
| 100 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 101 | PX RECEIVE | | 192 | 1152 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 102 | PX SEND BROADCAST | :Q0 | 192 | 1152 | 3 (0)| 00:00:01 | | S->P |
| 103 | TABLE ACCESS FULL | LKUP_LAND | 192 | 1152 | 3 (0)| 00:00:01 | | |
|*104 | HASH JOIN RIGHT OUTER | | 92M| 80G| 31M (1)|159:02:32 | Q1,30 | PCWP |
| 105 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 106 | PX RECEIVE | | 44 | 352 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 107 | PX SEND BROADCAST | :Q0 | 44 | 352 | 3 (0)| 00:00:01 | | S->P |
|*108 | TABLE ACCESS FULL | DIM_FAKTURAGRP | 44 | 352 | 3 (0)| 00:00:01 | | |
|*109 | HASH JOIN RIGHT OUTER | | 92M| 80G| 31M (1)|159:02:27 | Q1,30 | PCWP |
| 110 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 111 | PX RECEIVE | | 24 | 576 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 112 | PX SEND BROADCAST | :Q0 | 24 | 576 | 3 (0)| 00:00:01 | | S->P |
|*113 | TABLE ACCESS FULL | DIM_DEBITORGRP | 24 | 576 | 3 (0)| 00:00:01 | | |
|*114 | HASH JOIN RIGHT OUTER | | 92M| 78G| 31M (1)|159:02:22 | Q1,30 | PCWP |
| 115 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 116 | PX RECEIVE | | 6 | 48 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 117 | PX SEND BROADCAST | :Q0 | 6 | 48 | 3 (0)| 00:00:01 | | S->P |
|*118 | TABLE ACCESS FULL | DIM_QUELLSYSTEM | 6 | 48 | 3 (0)| 00:00:01 | | |
|*119 | HASH JOIN RIGHT OUTER | | 92M| 77G| 31M (1)|159:02:16 | Q1,30 | PCWP |
| 120 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 121 | PX RECEIVE | | 6 | 30 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 122 | PX SEND BROADCAST | :Q0 | 6 | 30 | 3 (0)| 00:00:01 | | S->P |
|*123 | TABLE ACCESS FULL | DIM_QUELLSYSTEM | 6 | 30 | 3 (0)| 00:00:01 | | |
|*124 | HASH JOIN RIGHT OUTER | | 92M| 76G| 31M (1)|159:02:11 | Q1,30 | PCWP |
| 125 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 126 | PX RECEIVE | | 5 | 70 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 127 | PX SEND BROADCAST | :Q0 | 5 | 70 | 3 (0)| 00:00:01 | | S->P |
|*128 | TABLE ACCESS FULL | DIM_HERKUNFTTABELLE | 5 | 70 | 3 (0)| 00:00:01 | | |
|*129 | HASH JOIN RIGHT OUTER | | 92M| 75G| 31M (1)|159:02:05 | Q1,30 | PCWP |
| 130 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 131 | PX RECEIVE | | 5 | 70 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 132 | PX SEND BROADCAST | :Q0 | 5 | 70 | 3 (0)| 00:00:01 | | S->P |
|*133 | TABLE ACCESS FULL | DIM_BUCHUNGSART | 5 | 70 | 3 (0)| 00:00:01 | | |
|*134 | HASH JOIN RIGHT OUTER | | 92M| 74G| 31M (1)|159:02:00 | Q1,30 | PCWP |
| 135 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 136 | PX RECEIVE | | 3 | 24 | 3 (0)| 00:00:01 | Q1,30 | PCWP |
| 137 | PX SEND BROADCAST | :Q0 | 3 | 24 | 3 (0)| 00:00:01 | | S->P |
|*138 | TABLE ACCESS FULL | DIM_BETRIEBSART | 3 | 24 | 3 (0)| 00:00:01 | | |
| 139 | VIEW | | 92M| 73G| 31M (1)|159:01:55 | Q1,30 | PCWP |
| 140 | NESTED LOOPS OUTER | | 92M| 39G| 31M (1)|159:01:55 | Q1,30 | PCWP |
|*141 | HASH JOIN RIGHT OUTER | | 92M| 36G| 242K (1)| 01:12:49 | Q1,30 | PCWP |
| 142 | BUFFER SORT | | | | | | Q1,30 | PCWC |
| 143 | PX RECEIVE | | 451K| 26M| 1416 (1)| 00:00:26 | Q1,30 | PCWP |
| 144 | PX SEND HASH | :Q0 | 451K| 26M| 1416 (1)| 00:00:26 | | S->P |
| 145 | TABLE ACCESS FULL | DIM_VERTRAGSKONTO | 451K| 26M| 1416 (1)| 00:00:26 | | |
| 146 | PX RECEIVE | | 92M| 31G| 240K (1)| 01:12:18 | Q1,30 | PCWP |
| 147 | PX SEND HASH | :Q1029 | 92M| 31G| 240K (1)| 01:12:18 | Q1,29 | P->P |
|*148 | HASH JOIN RIGHT OUTER | | 92M| 31G| 240K (1)| 01:12:18 | Q1,29 | PCWP |
| 149 | BUFFER SORT | | | | | | Q1,29 | PCWC |
| 150 | PX RECEIVE | | 566 | 6226 | 4 (0)| 00:00:01 | Q1,29 | PCWP |
| 151 | PX SEND BROADCAST | :Q0 | 566 | 6226 | 4 (0)| 00:00:01 | | S->P |
| 152 | TABLE ACCESS FULL | LKL_ZSD_SONDERFAKTUR | 566 | 6226 | 4 (0)| 00:00:01 | | |
| 153 | PX BLOCK ITERATOR | | 92M| 30G| 240K (1)| 01:12:12 | Q1,29 | PCWC |
|*154 | TABLE ACCESS FULL | LKL_ZSDMAUT_FAKT | 92M| 30G| 240K (1)| 01:12:12 | Q1,29 | PCWP |
| 155 | TABLE ACCESS BY INDEX ROWID| LKL_ZCA_ZLEF | 1 | 29 | 2 (0)| 00:00:01 | Q1,30 | PCWP |
|*156 | INDEX UNIQUE SCAN | IDX_LKL_ZLEF | 1 | | 1 (0)| 00:00:01 | Q1,30 | PCWP |
----------------------------------------------------------------------------------------------------------------------------------------------------


Tom Kyte
August 18, 2005 - 4:00 pm UTC

thanks for many pages of wrapped input.

why do you consider it a "waste", temp is there to be used is it not?

A query that returns 90 million rows "instantly"? Or is it that the query returns the first row really fast and you never waited to see how long to get to the last row?




temp usage & NL / Hash - Join

Marc, August 19, 2005 - 3:39 am UTC

well that has been a waste of plan then :)

I was tricked by toad a little bit - the query must have gotten the first rows out of the buffer.

But then I came to the point I still don't understand. Why is Oracle writing an intermediate result (the query broke at 50 GB; the full result is about 65 GB) to temp space when it is joining like this:

further hash joins
view <-- probably materialized, although dbms_xplan
isn't showing any estimated temp usage
NL outer
hash join
90Mio_table
400k_table
30Mio_table

whereas this plan ( forced by /*+ first_rows */ )
is not using any temp :

further hash joins
view
NL outer
NL outer
90Mio_table
400k_table
30Mio_table

What is the "data-flow" in / out of a hash join doing to cause materializing intermediate results ?

These kinds of questions come up over and over in the development of heavy SQL (large data + complex SQL). The essence is: why should I go for hash joins and write gigabytes of temp (having fights with DBAs, and probably ending up with a slow ETL process) when I have almost-unique index access justifying a first_rows approach that produces the result without using temp space?!
Do you think a query producing 60 GB of data, which starts by writing 50 GB to temp, could finish faster that way compared with a first_rows plan??
Oracle optimizer is very often :)

thanks a lot for your time in advance, tom.
cheers
marc

Tom Kyte
August 20, 2005 - 3:58 pm UTC

<quote>
I was tricked by toad a little bit - the query must have gotten the first rows
out of the buffer.</quote>

Toad STOPS RUNNING THE QUERY. It does first rows and stops.


read:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154

to do 90,000,000 INDEX LOOKUPS is a rather expensive task.

Brat, October 11, 2005 - 10:51 pm UTC

How do I determine the size of the temp tablespace (in MB or GB)? I am trying to find out how much temp space we have, as my process blows up with an ORA-1654 error.

Tom Kyte
October 12, 2005 - 7:11 am UTC

you can always let it autoextend to some predefined max size and see what you need.


Brat, October 13, 2005 - 8:53 am UTC

Thanks for the info. But I need to find out how much temp (max size) is available in my development instance and compare it to my production instance.

Tom Kyte
October 13, 2005 - 10:49 am UTC

not sure what you mean by that, if you mean the "max that temp can grow to", you can use

http://asktom.oracle.com/Misc/free.html
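For tempfiles specifically, the ceiling is also visible in DBA_TEMP_FILES: an autoextensible file can grow to MAXBYTES, a fixed one is capped at its current BYTES. A sketch for comparing two instances:

```sql
-- Current size vs. maximum possible size of temp, per temporary tablespace
select tablespace_name,
       sum(bytes)/1024/1024 current_mb,
       sum(decode(autoextensible, 'YES', maxbytes, bytes))/1024/1024 max_mb
  from dba_temp_files
 group by tablespace_name;
```

Run it in development and in production and compare the MAX_MB values.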

sort in disks

A reader, October 20, 2005 - 6:14 am UTC

Hi

I have an index creation process which uses 40GB of temporary tablespace, but v$sesstat shows zero sorts (disk). How can that be?!

Tom Kyte
October 20, 2005 - 8:30 am UTC

because when you create an index, we use "temporary extents" in your permanent tablespace to hold the newly created index.

At the VERY END of the create index, we turn the temporary extents into real, permanent ones.

This is so that if the create index fails for whatever reason, the system will just normally clean up the orphaned temporary extents in the background.
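You can observe this from another session during a large index build: until the create completes, the new index appears in DBA_SEGMENTS as a TEMPORARY segment in the *permanent* tablespace (typically with a numeric name like '12.345'). A sketch to run while the CREATE INDEX is in flight:

```sql
-- Temporary segments living in permanent tablespaces = in-flight creates
select tablespace_name, owner, segment_name, bytes/1024/1024 mb
  from dba_segments
 where segment_type = 'TEMPORARY';
```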

sort in disks

A reader, October 20, 2005 - 10:15 am UTC

"we use "temporary extents" in your permanent tablespace to hold the newly created index."

That can't be true in my case. I understand that temporary extents are created in my permanent tablespace during index creation, but the 40GB of space I am talking about is in my tempfile temporary tablespace!

Tom Kyte
October 20, 2005 - 4:32 pm UTC

sorts to disk are recorded at the end of the call. until the index is done (successfully) there won't be anything recorded.

sort in disks

A reader, October 21, 2005 - 4:12 am UTC

Hi

It showed one disk sort at the end of the index creation... and it used 40GB of temporary tablespace?! How does Oracle count sorts :-?

Tom Kyte
October 21, 2005 - 8:18 am UTC

it did one sort (for the index) that spilled to disk.

The count is not a function of SIZE, it is a function of "how many times we sorted and it went to disk"


If you sort 100m and we sort to disk

versus

If you sort 100g and we sort to disk....


You have still just sorted once and it sorted to disk.


We count sorts using numbers, 1, 2, 3, 4, .... ;)
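If what you want is the *size* of that one sort rather than the count, the workarea views report it. For example (using 10g column names; v$sql_workarea keeps the figure after the statement completes):

```sql
-- How big was the spill to temp, per cached SQL statement operation
select sql_id, operation_type, last_execution,
       last_tempseg_size/1024/1024 temp_mb
  from v$sql_workarea
 where last_tempseg_size is not null
 order by last_tempseg_size desc;
```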

query on temp tablespace.

vivek, November 01, 2005 - 3:34 am UTC

Tom,

I got an ORA-1652 error in one of our production environments. Please confirm whether my understanding of the temp tablespace is correct:
if one process is using temp space for a user request, then on completion that space is freed and the same space can be used by another process. If enough space is not available, Oracle will allocate the next extent for the request, and if Oracle can't do that, ORA-1652 may come up.
Further, I want to ask:
1. Do I need to increase the size of temp straight away, or can I wait, since it may be that only that particular request ran out of temp? What I mean is: on repeated 1652s you know there is not enough space left and you have to increase the size, but after a single occurrence can we just wait and watch?
2. Will it make any difference if we resize the datafile rather than adding another datafile to the temp tablespace (the files are tempfiles)?

Thanks
Vivek


Tom Kyte
November 01, 2005 - 10:37 am UTC

1) It would sound like, if you get repeated 1652's (unable to allocate space), then yes, you would need to either decrease your need for temp (look at your queries) or increase temp.

2) that is up to you - it'll just be "more space" either way
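In SQL, the two approaches look like this (tablespace and file names are hypothetical; note that tempfiles are resized via ALTER DATABASE, not ALTER TABLESPACE):

```sql
-- (a) add another tempfile to the temporary tablespace
alter tablespace TEMP
  add tempfile '/u01/oradata/PROD/temp02.dbf' size 4g;

-- (b) resize an existing tempfile
alter database
  tempfile '/u01/oradata/PROD/temp01.dbf' resize 8g;
```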

Can't create trigger

syed, November 03, 2005 - 7:32 am UTC

Hi Tom

I am trying to create an on-server-error trigger to capture ORA-01652. Using your example, I am trying to capture the SQL text of all sessions rather than just the one that triggered the error.
However, I am unable to get the trigger to compile. This is 8.1.7, by the way.


SQL> create table temp_failure ( uname varchar2(30), sql_text varchar2(4000), extents number(6), sqlhash number(12),timestamp date);

Table created.

SQL> create or replace trigger failed_to_extend_temp after servererror on database
 declare
 begin
  if ( is_servererror(1652) )
   then

    insert into temp_failure select ( select username from v$session where saddr=v.session_addr) uname,
        vs.sql_text,
        v.extents,
        v.sqlhash,
        sysdate
    from
        v$sql vs,
        v$sort_usage v
    where
        v.sqlhash=vs.hash_value
    and
        v.extents > 1;
  end if;
 end;
/

Warning: Trigger created with compilation errors.

SQL> show err
Errors for TRIGGER FAILED_TO_EXTEND_TEMP:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/39     PLS-00103: Encountered the symbol "SELECT" when expecting one of
         the following:
         ( - + mod not null others <an identifier>
         <a double-quoted delimited-identifier> <a bind variable> avg
         count current exists max min prior sql stddev sum variance
         execute forall time timestamp interval date
         <a string literal with character set specification>
         <a number> <a single-quoted SQL string>

6/98     PLS-00103: Encountered the symbol "UNAME" when expecting one of
         the following:
         ; return returning and or
         The symbol "return" was substituted for "UNAME" to continue.

11/5     PLS-00103: Encountered the symbol "FROM" when expecting one of
         the following:
         . ( , * @ % & - + / at mod rem <an identifier>
         <a double-quoted delimited-identifier> <an exponent (**)> as
         into || bulk
         The symbol ". was inserted before "FROM" to continue.

14/5     PLS-00103: Encountered the symbol "WHERE" when expecting one of
         the following:
         , into bulk

SQL>

If I run the insert into statement from SQL it works fine.


SQL>insert into temp_failure select ( select username from v$session where saddr=v.session_addr) uname,
        vs.sql_text,
        v.extents,
        v.sqlhash,
        sysdate
    from
        v$sql vs,
        v$sort_usage v
    where
        v.sqlhash=vs.hash_value
    and
        v.extents > 1

SQL> /

0 rows created.

SQL>
 

Tom Kyte
November 04, 2005 - 2:17 am UTC

8.1.7 did not like scalar subqueries in PLSQL (or analytics, or some other sql syntax).


either

a) remove the scalar subquery (join to v$session instead)
b) use native dynamic sql to hide the scalar subquery from plsql.
c) get to 9i or 10g :)
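Option (a), replacing the scalar subquery with an ordinary join, might look like the following sketch. It uses the same table and view names as the trigger above; the outer join keeps rows whose session address no longer matches, and this is untested on 8.1.7:

```sql
create or replace trigger failed_to_extend_temp
after servererror on database
begin
  if ( is_servererror(1652) )
  then
    -- plain join instead of a scalar subquery, which 8.1.7 PL/SQL rejects
    insert into temp_failure
    select s.username, vs.sql_text, v.extents, v.sqlhash, sysdate
      from v$sort_usage v, v$sql vs, v$session s
     where v.sqlhash      = vs.hash_value
       and v.session_addr = s.saddr (+)
       and v.extents      > 1;
  end if;
end;
/
```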

temporary tablespace

David Prabhakar, November 04, 2005 - 4:41 pm UTC

Dear Tom,

I am using Oracle 9i and running a procedure which populates a table with data. I've assigned a temporary tablespace with enough space, yet I keep getting this same error.

Here is what I've done so far....

SQL> create temporary tablespace immpact_temp tempfile
  2  'C:\ORACLE\ORADATA\IM2DEV\IMMPACT_TEMP01.DBF' SIZE 300M
  3  extent management local uniform size 5M
  4  /

Tablespace created.

SQL> ALTER USER IMMPACT TEMPORARY TABLESPACE IMMPACT_TEMP;

User altered.

Here is the procedure that populates only 10 rows into the table. (rownum <11)

/* PROJECT NAME: IPHIS IMMPACT DATAMART 
   SAMPLE INITIAL LOAD PROCEDURE NAME: PROC_SRCE_TGT_VXUV_IL
   PROCEDURE CREATED_BY: SUDHAKARAN SURULIRAJ
   PROCEDURE CREATED_DATE: OCT 26,2005
   PROCEDURE MODIFIED_BY: SUDHAKARAN SURULIRAJ
   PROCEDURE MODIFIED_DATE: NOV 4,2005
   VXU_VACCINATION STATES THAT VACCINE RECORDS POPULATED FROM SOURCE TO TARGET */

CREATE OR REPLACE PROCEDURE PROC_SRCE_TGT_VXUV_IL 
AS
-- EXPLICIT CURSOR DECLARATION 
CURSOR VXUV_CUR IS
   SELECT A.CVX_CODE,
          B.LOCATION_ID,
          E.NAME,
          C.ADDRESS_LINE,
          C.EXTRA_ADDRESS_LINE,
          C.CITY,
          C.STATE,
          C.ZIP,
          C.COUNTRY,
          C.POSTAL_CODE,   
          B.PROVIDER_NAME,
          B.SITE_ID,
          B.DOSE_NUMBER,
          B.ADMINISTRATION_DATE,
          B.LOT_NUMBER,
          D.MANUFACTURER,
          D.EXPIRATION_DATE,
          B.PROVIDER_ID,
          B.VACCINE_ADMINISTRATION_ID,
          B.ROUTE_ID
  FROM VACCINE_TYPE A,
       VACCINE_ADMINISTRATION B,
       ADDRESS C,
       INVENTORY_ITEM D,
       LOCATION E,
       PATIENT F,
       PATIENT_1TON_VACC_ADMIN G
   WHERE A.VACCINE_TYPE_ID = B.VACCINE_TYPE_ID AND
         B.LOCATION_ID = E.LOCATION_ID AND
         D.VACCINE_TYPE_ID = B.VACCINE_TYPE_ID AND
         E.ADDRESS_ID = C.ADDRESS_ID AND
         A.VACCINE_TYPE_ID = D.VACCINE_TYPE_ID AND
         F.PATIENT_ID = G.PATIENT_ID AND
         B.VACCINE_ADMINISTRATION_ID = G.VACCINE_ADMINISTRATION_ID AND
       F.REGISTRY_ENROLLMENT_FLAG = 'Y'AND
         ROWNUM <= 11;
VXUV_REC VXUV_CUR%ROWTYPE;
BEGIN
  OPEN VXUV_CUR;
   LOOP
  FETCH VXUV_CUR INTO VXUV_REC;
     EXIT WHEN VXUV_CUR%NOTFOUND;
-- SQL IN-BUILT CONVERSION TO_CHAR FUNCTION USED FOR CVX_CODE COLUMN
-- SQL IN-BUILT CONVERSION TO_CHAR FUNCTION USED FOR SITE_ID COLUMN
-- SQL IN-BUILT CONVERSION TO_CHAR FUNCTION USED FOR ROUTE_ID COLUMN 
   INSERT INTO VXU_VACCINATION
         (VACCINATION_ID,
      CVX_CODE,
          LOCATION_ID,
          LOCATION_NAME,
          PROVIDER_STREET_ADDRESS,
          PROVIDER_STREET_ADDRESS_2,
          PROVIDER_CITY,
          PROVIDER_STATE,
          PROVIDER_ZIP,
          PROVIDER_COUNTRY,
          PROVIDER_POSTAL_CODE,
          PROVIDER_LAST_NAME,
          SITE_ID,
          DOSE_NUMBER,
          ADMINISTRATION_DATE,
          LOT_NUMBER,
          MANUFACTURER,
          EXPIRATION_DATE,
          PROVIDER_ID,
          PROVIDER_FIRST_NAME,
          MESSAGE_ID,
          ROUTE_CODE,
          ADMINISTERED_AMOUNT,
          ADMINISTERED_AMOUNT_UNITS)
   VALUES(VXUV_VACCID.NEXTVAL,
      TO_CHAR(VXUV_REC.CVX_CODE),
          VXUV_REC.LOCATION_ID,
          VXUV_REC.NAME,
          VXUV_REC.ADDRESS_LINE,
          VXUV_REC.EXTRA_ADDRESS_LINE,
          VXUV_REC.CITY,
          VXUV_REC.STATE,
          VXUV_REC.ZIP,
          VXUV_REC.COUNTRY,
          VXUV_REC.POSTAL_CODE,
          VXUV_REC.PROVIDER_NAME,
          TO_CHAR(VXUV_REC.SITE_ID),
          VXUV_REC.DOSE_NUMBER,
          VXUV_REC.ADMINISTRATION_DATE,
          VXUV_REC.LOT_NUMBER,
          VXUV_REC.MANUFACTURER,
          VXUV_REC.EXPIRATION_DATE,
          VXUV_REC.PROVIDER_ID,
          ' ',
          VXUV_REC.VACCINE_ADMINISTRATION_ID,
          TO_CHAR(VXUV_REC.ROUTE_ID),
          0,
          ' ');
   END LOOP;
DBMS_OUTPUT.PUT_LINE ('TWO HUNDRED ROWS ARE INSERTED');
  CLOSE VXUV_CUR;
END;
/

This is what I've been getting, even after allocating more space, adding a tempfile, and increasing sort_area_size:

SQL> EXEC PROC_SRCE_TGT_VXUV_IL 
BEGIN PROC_SRCE_TGT_VXUV_IL; END;

*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 1280 in tablespace IMMPACT_TEMP
ORA-06512: at "IMMPACT.PROC_SRCE_TGT_VXUV_IL", line 45
ORA-06512: at line 1


SQL> select * from v$sort_usage
  2  /

no rows selected


Please help, I can't think of a way to solve this further.

Looking forward to hearing from you, Tom.

thanks and regards,
david.

 

Tom Kyte
November 04, 2005 - 5:39 pm UTC

5m is *tiny*, teeny teeny small tiny.


v$sort_usage will be empty for you after a "failed sort" (by definition, you are NO LONGER SORTING!)

if you add a FIRST_ROWS hint to that query, what happens then?

Temporary Tablespace

David Prabhakar, November 07, 2005 - 10:23 am UTC

Hi Tom, 

I read your response, but I am not able to understand it.  I got your point about 5m being very small, so now I've added another tempfile of 500m size, and I keep getting the same error.  Here is what I've done:

SQL> select file_name, tablespace_name from dba_temp_files;

FILE_NAME                                                               
------------------------------------------------------------------
C:\ORACLE\ORADATA\IM2DEV\TEMP01.DBF                                     
C:\ORACLE\ORADATA\IM2DEV\TEMP02.DBF                                     
C:\ORACLE\ORADATA\IM2DEV\IMMPACT_TEMP01.DBF                             

SQL>  alter tablespace immpact_Temp add tempfile
  2   'C:\ORACLE\ORADATA\IM2DEV\IMMPACT_TEMP02.dbf' size 500m
  3  /

Tablespace altered.

SQL> select tablespace_name, sum(bytes)/1024/1024 as "MB" 
from dba_temp_files group by tablespace_name;

TABLESPACE_NAME                        MB
------------------------------ ----------
IMMPACT_TEMP                          800
TEMP                                  540

SQL> exec PROC_SRCE_TGT_VXUV_IL 
BEGIN PROC_SRCE_TGT_VXUV_IL; END;

*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 1280 in tablespace IMMPACT_TEMP
ORA-06512: at "IMMPACT.PROC_SRCE_TGT_VXUV_IL", line 45
ORA-06512: at line 1


SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------

-------------------------------------------------------------------------
| Id  | Operation            |  Name            | Rows  | Bytes | Cost  |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                  |     1 |   361 |     2 |
|   1 |  TABLE ACCESS FULL   | VXU_VACCINATION  |     1 |   361 |     2 |
-------------------------------------------------------------------------

Note: cpu costing is off

9 rows selected.

Is there something that I am missing?  Right now, I am loading the table with only 9 rows.
This table is rolling out to production this weekend.

Please tell me how to work around this problem, Tom.

Thank you,
regards,
david. 

Tom Kyte
November 07, 2005 - 12:19 pm UTC

I don't know what the purpose of the dbms_xplan call was - I didn't see you explain anything.

Apparently, something at line 45 of your procedure is using gobs of temp - what is on that line? (Please don't post the entire routine; just find line 45 and see what it is.)

removing scalar worked

Syed, November 08, 2005 - 4:33 am UTC

Thanks tom

I know its somewhat off topic, but can you explain why
this query would be better with the scalar as opposed
to just joining v$session ?

Many thanks

syed

Tom Kyte
November 08, 2005 - 10:02 pm UTC

you could, i didn't for whatever reason

temp segments still showing using

Ram, November 10, 2005 - 9:56 pm UTC

V$sort_usage does not return any rows, yet the temp tablespace is almost full (98%). How does that happen? How can I free up the temp tablespace?



Tom Kyte
November 12, 2005 - 8:03 am UTC

that is perfect! excellent, that means future sorts will be most efficient as they won't have the overhead of allocating extents again.

The temporary tablespace should always appear "full" in a system that has been up and running - anything else would not be good.


There is one temp segment.
This one segment is made up of extents.
This one segment is shared by all sessions.
Each extent will be used by ONE session.

When that session is done with it - that session will give the extent back to the "list of free extents for other sessions to use" in the temporary segment - but the temp segment will KEEP the extent so it need not reallocate it over and over and over again.


So, everything is working perfectly and if you achieved your stated goal of "free up temp tablespace", you would only negatively impact performance.
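
One way to watch this in action is v$sort_segment (a sketch; the columns below exist in 8i and 9i). free_blocks counts space that is still allocated to the temp segment but not currently in use by any session, so a "full" tablespace with a high free_blocks is healthy:

select tablespace_name,
       total_blocks,
       used_blocks,
       free_blocks
  from v$sort_segment;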

how to find out creation time of tempfile

deba, November 23, 2005 - 5:52 pm UTC

Hi,

I must know the creation time of a tempfile in my database. Is it possible to find this out? If no dictionary table stores it, is it possible to get it through some other procedure, like dumping the header block? Please let me know the process. It is very urgent.

Thanks
Deba

Tom Kyte
November 23, 2005 - 8:09 pm UTC

select * from v$tempfile


Creation Time --is null in v$tempfile

N.Venkatagiri, November 24, 2005 - 12:31 am UTC

We have cloned the database to another server with RMAN duplicate option.

Created one temporary tablespace.

From the original database the temporary tablespaces are copied in the dictionary

As per the correct behaviour the temporary files are not copied.

SQL> select * from dba_tablespaces where contents like 'T%';

TABLESPACE_NAME                BLOCK_SIZE INITIAL_EXTENT NEX
------------------------------ ---------- -------------- ---
TEMP1                                8192        1048576    
TEMP2                                8192        1048576    
TEMP3                                8192        1048576    
TEMP4                                8192        1048576    
TEMP5                                8192        1048576    
TEMP7                                8192        1048576    



SQL>  select * from v$tempfile;

     FILE# CREATION_CHANGE# CREATION_        TS#     RFILE# STATUS 
---------- ---------------- --------- ---------- ---------- -------
         2                0                  105          1 ONLINE 


We will drop the other unused temporary tablespaces.

Please clarify
1. The creation time is null. What could be the reason.

2. One user aborted his Forms session. 
   Some temporary tables are in use.
   The next time he looks up the same record in the temporary table, it says the record is in use by <<username>>.  This error is captured in Forms itself.
This may be due to the table lookup by rowid in select queries.
When we restart the database the problem goes away.
How can we clear the usage of the temporary table after the user aborts his session?

Thanks very much
 

Tom Kyte
November 24, 2005 - 8:47 am UTC

1) please contact support if the tempfiles actually exist.

2) if these are global temporary tables, there is no way this could be - the temporary tables are specific to a session, if his session is "gone", so is the data and even if his session is still there, from some other session there is no way to see what is in the other sessions global temporary tables...

so, something is missing from the description here.
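
To illustrate that session privacy, a quick sketch (gtt_demo is a made-up name):

create global temporary table gtt_demo ( x int )
on commit preserve rows;

insert into gtt_demo values ( 1 );
commit;

select count(*) from gtt_demo;   -- this session sees its 1 row

-- run the same select from ANY other session and it returns 0;
-- the rows (and any locks on them) disappear completely when
-- the owning session ends.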

how to find out creation time of tempfile

deba, November 24, 2005 - 5:43 am UTC

Hi,

I cannot find it in v$tempfile. I have included some data from my database:

SQL> select tablespace_name,status,contents,extent_management,allocation_type,
  2  segment_space_management from dba_tablespaces where tablespace_name like '%TEMP%'
  3  ;

TABLESPACE_NAME                STATUS    CONTENTS  EXTENT_MAN ALLOCATIO SEGMEN
------------------------------ --------- --------- ---------- --------- ------
TEMP2                          ONLINE    TEMPORARY LOCAL      UNIFORM   MANUAL
SDW_TEMP                       ONLINE    TEMPORARY LOCAL      UNIFORM   MANUAL

SQL> select file#,ts#,name from v$tempfile;
     FILE#        TS# NAME
---------- ---------- ----------------------------------------------
         1         16 /nas/oradata/SDWLIV/TEMP2_01.dbf
         2         16 /nas/oradata/SDWLIV/TEMP2_02.dbf
         3         28 /nas/oradata/SDWLIV/SDW_TEMP1.dbf
         4         28 /nas/oradata/SDWLIV/SDW_TEMP2.dbf
         5         28 /nas/oradata/SDWLIV/SDW_TEMP3.dbf
         6         28 /nas/oradata/SDWLIV/SDW_TEMP4.dbf

6 rows selected.
SQL> select file#,ts#,CREATION_CHANGE#,CREATION_TIME,STATUS,CREATE_BYTES,NAME from v$tempfile;

     FILE#        TS# CREATION_CHANGE# CREATION_ STATUS  CREATE_BYTES NAME
---------- ---------- ---------------- --------- ------- ------------ ------------------------------
         1         16                0           ONLINE     524288000 /nas/oradata/SDWLIV/TEMP2_01.dbf
         2         16                0           ONLINE     524288000 /nas/oradata/SDWLIV/TEMP2_02.dbf
         3         28                0           ONLINE    2147483648 /nas/oradata/SDWLIV/SDW_TEMP1.dbf
         4         28                0           ONLINE    2147483648 /nas/oradata/SDWLIV/SDW_TEMP2.dbf
         5         28                0           ONLINE    2147483648 /nas/oradata/SDWLIV/SDW_TEMP3.dbf
         6         28                0           ONLINE    2147483648 /nas/oradata/SDWLIV/SDW_TEMP4.dbf

6 rows selected.

You can see the creation_time is null. 

I tried to use "alter system dump tempfile '<filename>' block 1;" but in the trace there is no creation time. 

I still don't know how to find the creation time of a tempfile. The creation time is stored for datafiles, but not for tempfiles.

Please help. It is very urgent.

Thanks
Deba 

Tom Kyte
November 24, 2005 - 8:56 am UTC

please utilize support for this.

(I'm curious though - what possible reason could there be for urgency about when a tempfile was created? Perhaps if we know that, we can suggest something.)

Temp.

A reader, December 01, 2005 - 4:14 pm UTC

Why is v$sort_segment showing max blocks of 4275968

select tablespace_name, total_blocks, free_blocks, max_blocks from v$sort_segment
TABLESPACE_NAME TOTAL_BLOCKS FREE_BLOCKS MAX_BLOCKS
------------------------------- ------------ ----------- ----------
TEMP 2549504 2549504 4275968


when dba_temp_files shows a total of 4096000 + 2048000 blocks

TABLESPACE_NAME BLOCKS
------------------------------ ----------
TEMP 4096000
TEMP 2048000

Does that mean it's not allocated yet?

Thanks

Tom Kyte
December 02, 2005 - 10:42 am UTC

yes.

Temp Space

Suzanne, December 12, 2005 - 1:59 pm UTC

I've been trying to track down an ORA-1652 error that seems to happen just about every night at 3:00 AM. I used your very helpful trigger from the top of this thread: I created the trigger and message table on Friday, and tested the trigger just as you did (with a very small temp tablespace and a large sort on a big table), and it returned the information I expected. Come Monday morning (today) I was all set to deal with my temp-tablespace-full error - but when I looked at my message table, it was empty! All of my database connections are part of a connection pool and have been logged on for more than a month. Do they have to log off and log back on for this trigger to take effect?

Tom Kyte
December 12, 2005 - 2:25 pm UTC

I do not believe so - for I just tested in 9205

a) create a session...

b) in another session, install trigger, table, etc...
c) in that other session, verify that table T gets populated

d) in the session from a) blow out sort space and see that T was populated by that session as well.


Are you sure the trigger and all are valid? did you force a "fake error" like I did to test it out?

LOB_DATA for ordinary sort?

Sami, February 14, 2006 - 3:40 pm UTC

Dear Tom,

Our statspack report shows that the Av Rd(ms) for the temporary tablespace is too high, as indicated below. So we started looking at the v$sort_usage view. This view shows many segments of type LOB_DATA. However, when I looked up the SQL_TEXT (via v$sort_usage.SQLHASH), none of the queries or their underlying objects use a LOB data type.

Could you please tell us what could be wrong here?

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TEMP
            94       0 ######     2.0          200        0          0    0.0



SQL> select username,SEGTYPE,count(*) from gv$sort_usage group by username,SEGTYPE;
 
USERNAME                       SEGTYPE     COUNT(*)
------------------------------ --------- ----------
ATG56                          LOB_DATA         310
 
SQL> select sql_text from v$sql where hash_value in (select distinct SQLHASH from gv$sort_usage);
 

Tom Kyte
February 14, 2006 - 4:35 pm UTC

and what was the time for reads on temp? You say "too high", I say "totally unknown"

Just takes one abnormally long read given you have *so so so few* of them.

A reader, February 14, 2006 - 4:50 pm UTC

Tom,

Thanks for your quick response.
I mentioned "too high" because statspack reported the value as ###### (I did cut & paste into my previous post), so I interpreted it as a number of five or more digits, which is too high. All my other tablespaces show values of less than 10 msec.


Tom Kyte
February 14, 2006 - 4:57 pm UTC

col atpr format 990.0 heading 'Av|Rd(ms)' just c;


it is not a 5 digit number, and there are *94* of them at most.

default Temporary tablespace..

Jaya Kumar, March 02, 2006 - 11:22 am UTC

It is fun going thru your site... we get to know the aspects of db admn... that we might not have think of.. and the followup and reviews are like game of cricket...batting,.. bowling.. and we wait for next ball..

Q- why default temporary tablespace is not a parameter in the init<sid>.ora file.

(Q- how u make sure that in a a discussion the topics..donot get repeated.) just curiosity sake.. for it ..??? amazing....thanks for all the help..



Tom Kyte
March 02, 2006 - 1:10 pm UTC

answer: because it takes a dictionary update to make it "stick". We'd potentially have to update everyone's temporary tablespace upon each restart.

It is a database thing.




default temporary tablespace...

Jaya Kumar, March 03, 2006 - 11:20 am UTC

what happens in temporary tablespace..does not have much to do with data dic...

what made me ask the question was the fact that undo_tablespace.. is a parameter in init<sid>.ora.. so why 'default temporary tablespace' cannot be....

sorry .. but i could not make much out of the followup.. may be i need to learn more..

Tom Kyte
March 03, 2006 - 11:40 am UTC

the default temporary tablespace is used to populate the data dictionary - it has everything to do with it.


an undo tablespace is not assigned to users; it is in fact assigned to an INSTANCE (not even a database!!!!). It is an instance, not a database, parameter (in fact, with RAC - every instance needs its own unique, personal one).



A default temporary tablespace is assigned to a user when you create them.

If you change the default temporary tablespace - we must update every user account that refers to the old one.

The default temporary tablespace is a DATABASE thing, not a session thing, not an instance thing.

default temporary tablespace...

Jaya Kumar, March 05, 2006 - 6:47 am UTC

a database has system tablspace as its default temporary tablespace or some other tablespace assigned as for that purpose.

so when u say .. >>
.. it has sth to do with data dictionary.. <<

it means..>>
that if it our instance has not sufficient memory ..it will use default temporary tablespace to update data dictionary..
so it turns out that oracle has its own way to manage memory requirements and it is so smart that it proactively does so before bothering OS to do swapping /paging.. etc.

and wat problem we will encounter . if we can have common default temporary tablesp for an instance. (going by Undo tablespace analogy that it is not assigned to user and it is an parameter.)..

If there is not need for the undo tablespace to get specially assigned to the user .. the same same can go for default temporary tablespace ..then we can have it as instance parameter. And I m just trying to find the reason why this is not so. ??
It seems i am back to where we started..


Tom Kyte
March 05, 2006 - 1:49 pm UTC

no, the default temporary tablespace is the tablespace assigned to a user when you create them (without specifying something else)

A temporary tablespace (any temporary tablespace assigned to a user) will be used to "swap to" as needed.


This is just the default tablespace name assigned to a newly created user.

It is also modified for all users assigned to it if you switch the databases default temporary tablespace (everyone assigned to the old one will be assigned to the new one).


You are missing the basic "concept" here - this is the DEFAULT NAME assigned to a newly created user. Period.

Every single user in the database could have a different temporary tablespace if you want.

A database has a SINGLE "name" it uses by default for newly created users.

default temporary tablespace... ..

Jaya Kumar, March 06, 2006 - 8:21 am UTC

i am sorry...
bt.. this is all the basic usual stuff about default temporary tablespace.

You see there is lot of mismatch to wat has been asked and wat has been answered.

1- EARLIER in first followup u mentioned data-dictionary.. I tried to put up my inference on use of temporary tablespace w.r.t to data dic...Donot know if my conclusion were correct or not..
Did not find the same in the followup.

2- I tried to find the reason for the difference when it comes to Undo tablespace and default temporary tablespace..
[(where Undo tablespace is common for all the users for an instance) while (temporary tablespace are assigned to users)]...

the question is simple..why oracle ppl decided that it is good to have one UNDO TABLESPACE for ALL the users.. but we can have MORE options for the user when it comes to temporary tablespace...(they could have a default temporary tablespace or some other temporary tablespace.)

--I UNDERSTAND that this seems more OUT OF SCOPE ..might be that's why not answered..

Thanks TOM for your time..






Tom Kyte
March 08, 2006 - 3:58 pm UTC

wat?
u?
w.r.t.?
dic?

anyway.....


The question has come around from "default temporary tablespaces" to "why just one undo tablespace" all of a sudden. I don't follow your "the question is simple" thing here. This is a new "question" really.

ppl?

My question: why after reading "I will make fun of you if you use instant message speak", people still do?

sigh.

An undo tablespace is filled with lots and lots of undo segments. Individual transactions are assigned to separate segments within that one tablespace. Now, a tablespace typically contains many files, and these files are typically striped over many physical devices. So, what is the problem with one? Undo is buffered IO (you put undo blocks into the buffer cache; you don't typically wait for the IO like you do with temp).

default temporary tablespace and undo tablespace compared

Jaya Kumar, March 13, 2006 - 3:09 am UTC

Sir..
So it comes out that--
* to minimize IO we have ONE (COMMON) "undo tablespace",
(please tell if i got it correct or not..)

Guess this all game of IO, buffered IO, direct IO etc..
I got to know about this for the very first time from your followup and knowing about them will really clear the concepts.

so
1>
is it like oracle has different way of treating Undo tablespace and temporary tablespace WHEN we talk of IO.

2> what do you mean by when you say "temporary tablespace has everything to do with data dictionary"

can u provide some links to clear the basics.
Seems like I am misusing the facility I repeat the question one way or the other.

Thanks for your precious time..



Tom Kyte
March 13, 2006 - 10:01 am UTC

1) yes, 100%.

undo is buffered in the cache. transactions do not write to undo, dbwr does.

temp data is read/written using direct IO - no cache, transactions do write to temp.

2) I am trying to say "the temporary tablespace set up is a DATABASE thing, not an INSTANCE thing". Since it is about the database (like global_name for example) we store it in the dictionary. It does not make sense to make it a parameter.

temporary files location

jason, March 13, 2006 - 1:50 pm UTC

/dev/vx/rdsk/meudg/volu07

When I query v$tempfile I get the above location. Now I am refreshing environments, so how can I specify the tempfile location in the new environment?

Tom Kyte
March 14, 2006 - 9:51 am UTC

I don't know what you mean by "refresh environments" - do you mean "I've restored to a new machine" or what?

default temporary and undo tablespace compared..

Jaya Kumar, March 15, 2006 - 12:50 pm UTC

sir ..
to quote you from your previous followup===>
@----------------------------------------------------@
"undo is buffered in the cache. transactions do not write to undo, dbwr does. temp data is read/written using direct IO - no cache, transactions do write to temp."
@---------------------------------------------------@

So when u mention "cache" in above here it means "Oracle cache" and not "system cache"..right ??

so if we conclude...

1- To minimize IO it is recommended that we keep number of "TEMPORARY TABLEPSPACE" tablespace and "TABLESPACE' (that are used to assign to users when they are created in a particular database ) to a minimum...that means there is no need to spread userS data in many tablespaces.. and that will improve performance..

2- That you are there for us kind of people to clarify doubts..when we find nowhere to go.

Thanks tom


Tom Kyte
March 15, 2006 - 5:37 pm UTC

where did "U" come into the conversation?


Yes, when I say things - "Oracle" is sort of implied.


1) how did you come to that conclusion??

how to calculate free space in temp tablespace

Lily, March 15, 2006 - 2:49 pm UTC

Tom,
We are using Oracle 9.2.0.4 with temporary tablespaces created from tempfiles.

We know how to join dba_data_files and dba_free_space to get the free MB for each data file, but that won't show how much free space a temporary tablespace created from tempfiles has. I was wondering which view should be joined with dba_temp_files to get the free space (MB) for temp tablespaces. Please provide the query if you have one.

Many thanks.

Tom Kyte
March 15, 2006 - 5:39 pm UTC
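
One possible query (a sketch using v$temp_space_header, which is available in 9.2, so no join to dba_free_space is needed):

select tablespace_name,
       sum(bytes_used)/1024/1024  used_mb,
       sum(bytes_free)/1024/1024  free_mb
  from v$temp_space_header
 group by tablespace_name;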

default tablespace and undo tablespace compared

Jaya kumar, March 15, 2006 - 6:43 pm UTC


#1------------
IF I QUOTE YOU FROM YOUR EARLIER FOLLOWUP
"And undo tablespace is fill with lots and lots of undo segments. Individual
transactions are assigned to separate segments within that one tablespace. Now,
a tablespace contains many files typically and these files are typically
stripped over many physical devices. So, what is the problem with one? Since
it is buffered IO (you put undo blocks into the buffer cache, you don't
typically wait for the IO like you do with temp). "


NEXT IN NEXT REVIEW I CONCLUDED :---
Sir..
So it comes out that--
* to minimize IO we have ONE (COMMON) "undo tablespace",
(please tell if i got it correct or not..)
---------------------------------------------

#2----FROM YOUR NEXT FOLLOWUP
"undo is buffered in the cache. transactions do not write to undo, dbwr does.
temp data is read/written using direct IO - no cache, transactions do write to
temp.


--I CONCLUDED THE FOLLOWING--- (FROM discussion #1 AND #2 combined...)

.........>>> "To minimize IO it is recommended that we keep number of "TEMPORARY
TABLEPSPACE" tablespace and "TABLESPACE' (that are used to assign to users when
they are created in a particular database ) to a minimum...that means there is
no need to spread userS data in many tablespaces.. and that will improve
performance.."




Tom Kyte
March 16, 2006 - 7:48 am UTC

How would that MINIMIZE IO.

the amount of IO generated to temp is the same regardless of the cardinality of temp tablespaces.

default temporary tablespace and undo tablespace compared..

Jaya Kumar, March 16, 2006 - 10:39 am UTC

Sir..Thanks for all the patience.

it is like if you have more datafiles/tempfiles to write, then more resources are required and if they are less then obviously less resources are required...so I tried to sum up by saying that "it will minimize the IO.."..

so it is difficult to understand :-
WHY "amount of IO generated to temp is the same regardless of the cardinality of temp tablespaces " -taking into consideration the fact that "temp data is read/written using direct IO"






Tom Kyte
March 16, 2006 - 2:39 pm UTC

... it is like if you have more datafiles/tempfiles to write, then more resources
are required and if they are less then obviously less resources are
required.....

really - why do you say that? It is not obvious to me at all?



We either generate "X gig of temp per hour" or not. It doesn't matter if there is one temporary tablespace or 100 - we generate "X gig of temp per hour" regardless of the number of tablespaces.


In fact, having many files on DIFFERENT physical devices (striping of a sort) could spread IO out and therefore improve performance.


default temporary tablespace and undo tablespace

Jaya Kumar, March 18, 2006 - 8:42 pm UTC

You said "X gig of temp" --means-- i guess X gigabyte.. (I could do it with the help of google.)


sir .. I am finding it difficult to interpret the following explanation in one of yours' earlier followup in the above context( - "the latest followup by you"-)..-

"undo tablespace is fill with lots and lots of undo segments. Individual transactions are assigned to separate segments within that one tablespace. Now, a tablespace contains many files typically and these files are typically
stripped over many physical devices. So, what is the problem with one? Since it is buffered IO (you put undo blocks into the buffer cache, you don't typically wait for the IO like you do "

----------------&&&&&&&&&&&&&&&&&&&&------------------
Sometimes we just keep guessing .. which might lead to wrong interpretation, so I thought better ask you.
&&------------ THANKS FOR YOUR TIME ------------&&&


Tom Kyte
March 19, 2006 - 6:48 am UTC

I don't know what your question is anymore - can you rephrase it.


You said having more temporary tablespaces would lead to less IO. It won't, you will do the same amount of IO regardless of the number of temporary tablespaces.

You said having less datafiles would obviously be less resource intensive. I asked "why do you say that"



When I was talking about undo tablespaces - you asked "why does oracle only permit one". I was describing how one undo tablespace per instance of Oracle was effective and efficient.

default temporary tablespace and undo tablespace

Jaya Kumar, March 20, 2006 - 11:45 am UTC

Many thanks !

I will like to put things in this way..
1--
Because "direct IO " is used for temp files "it will do the same amount of IO regardless of the number of temporary tablespaces".

2
And if having "one undo tablespace per instance of Oracle
is effective and efficient."...then having less number of tablespaces that belongs to users will certainly be a better scenario.That is what i meant when I said "less datafiles would obviously be less resource intensive."

3
But you have also mentioned stripe set and spreading IO in different files, So from the talk of "less IO is better" (w.r.t your comments on Undo tablespace) you seemed to have moved to "MOre IO is not bad" (talking of stripe set).

Can you please make it all easy for me. Hope this time I am very clear what I am asking.




Tom Kyte
March 22, 2006 - 12:42 pm UTC

2) I don't see how you are making the leap from cause to effect here.

You say "if having "one undo tablespace per instance of Oracle
is effective and efficient."...then having less number of tablespaces that
belongs to users will certainly be a better scenario."

I say "no, not at all - there is no evidence of that and I have no idea WHY you leap to that conclusion"

There is no relationship to be found here, you are making one for whatever reason.


3) same thing here. Having 1 or having 1,000 tablespaces would generate the same amount of IO. That is the crux of the discussion. Having fewer tablespaces DOES NOT REDUCE IO GENERATED. You said "it would of course reduce it"; I'm saying over and over - 1 or 1,000 - I don't care. You have the same amount going out to disk.



tablespace and undo tablespace

Jaya Kumar, March 26, 2006 - 1:51 am UTC

Many thanks.

What applies to regular tablespaces does not apply to the undo tablespace, because it is the Oracle database itself that uses the undo tablespace to maintain undo data, so it is preferable that we keep it at a minimum.

What confused me was that the same did not apply to the temporary tablespace. But you explained the reason, mentioning "buffered IO" versus "direct IO".

Thanks for clearing up all the queries and confusion.
Regards



SORT_AREA_SIZE & NEXT EXTENT

RK, April 04, 2006 - 3:16 pm UTC

My temporary tablespace is as follows: (ver 8.1.7.4)

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE
 '/u01/oradata/PRD/temp_01.dbf' SIZE 4000M AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K

SQL> show parameter sort_area_size
sort_area_size integer 10000000

Lately I have witnessed ORA-1652: Unable to extend TEMP by 8
errors. Sometimes they are stray and no complaints come from users.

If I read correctly, the UNIFORM SIZE should be a nice multiple of SORT_AREA_SIZE.

Is the mismatch between SORT_AREA_SIZE and UNIFORM_SIZE reason for ORA-1652 errors ?

 

Tom Kyte
April 04, 2006 - 7:40 pm UTC

it is not the reason for the unable to extend

lack of space is.


but your temp tablespace isn't sized nicely for your sort area size, consider fixing that.
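A minimal sketch of what "fixing that" could look like - making the uniform extent size an even divisor of the sort area and using a round sort area size. The tablespace name, file path, and 1M extent figure here are illustrative, not from the question:

```sql
-- Hypothetical example: 10M sort area, 1M uniform extents
-- (each sort area flush to disk then maps cleanly onto 10 extents)
CREATE TEMPORARY TABLESPACE temp2 TEMPFILE
  '/u01/oradata/PRD/temp2_01.dbf' SIZE 4000M AUTOEXTEND OFF
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

ALTER SYSTEM SET sort_area_size = 10485760 DEFERRED;  -- 10M, not 10,000,000
```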

Optimal next

A reader, April 05, 2006 - 12:34 am UTC

So the NEXT extent of the TEMP tablespace should be 10000000 ? ( i.e 1 TIMES SORT_AREA_SIZE ).

Correct

Tom Kyte
April 05, 2006 - 5:39 pm UTC

sort area size should really probably be 10m (not 10 million)

and the extent size should be something "much nicer" 10m would do - 1m would do (10 1m extents for a given sort area size allocation that needs swapping)


Running out of TEMP tablespace on EXPLAIN PLAN

RK, April 05, 2006 - 1:50 pm UTC

I am trying to run following query on PLAN_TABLE:

select lpad(' ', 2*(level-1)) || Operation||' '|| decode(id, 0,'Cost = '||position) "Operation",
options, object_name
from plan_table
start with Id= 0
connect by prior id = parent_id
order by id

My temp tablespace is as follows:

CREATE TEMPORARY TABLESPACE TEMP03 TEMPFILE '/d01/oradata/DEV/temp_03.dbf' SIZE 6000M
AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10240K
SEGMENT SPACE MANAGEMENT MANUAL

The session setting is as follows:

alter session set sort_area_size=10485760;

I had earlier read that UNIFORM SIZE should be a good multiple of SORT_AREA_SIZE, which it is here.

And my query still keeps running out of TEMP space, after using up almost 6Gb of TEMP space


Tom Kyte
April 06, 2006 - 9:15 am UTC

has nothing to do with the extent size, has everything to do with "6g isn't enough for what you are doing"

and precisely how many rows are in your plan table?

and are you sure that is the right query for you to be using? what are you trying to do.


do you know about dbms_xplan (tons easier than trying your own query)
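As a sketch of the dbms_xplan approach Tom mentions (available in 9iR2 and up), in place of the hand-written CONNECT BY query:

```sql
-- populate PLAN_TABLE for the statement of interest
EXPLAIN PLAN FOR
  SELECT * FROM dual;  -- substitute your real query here

-- let dbms_xplan do the formatting
SELECT * FROM TABLE(dbms_xplan.display);
```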

CREATE TEMPORARY TABLESPACE...

Maurice, May 17, 2006 - 10:56 am UTC

Hi Tom,

In oracle 8i I'm trying to create a temporary tablespace using:
CREATE TEMPORARY TABLESPACE LVSTEMP
TEMPFILE 'd:\oracle\oradata\limsprod\limss_temp.f'
SIZE 500M reuse
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32M;

When I perform an IMPort I get the error:
IMP-00003: ORACLE error 1652 encountered
ORA-01652: unable to extend temp segment by 4096 in tablespace LVSTEMP

Now I have figured out that the error occurs because my datafile is not set to autoextend. (I changed this in DBA Studio and now it works fine.)

But shouldn't the statement create a datafile that is also autoextend (or locally managed)?

Kind regard,
Maurice

Tom Kyte
May 17, 2006 - 11:48 pm UTC

no, it will not create an autoextend datafile unless and until you ask it to. By default, they are fixed size (500m in this case)
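A sketch of asking for autoextend explicitly, using the tempfile from the question above (the NEXT/MAXSIZE figures are illustrative assumptions):

```sql
-- either at creation time...
CREATE TEMPORARY TABLESPACE LVSTEMP
  TEMPFILE 'd:\oracle\oradata\limsprod\limss_temp.f' SIZE 500M REUSE
  AUTOEXTEND ON NEXT 32M MAXSIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32M;

-- ...or afterwards, on the existing tempfile
ALTER DATABASE TEMPFILE 'd:\oracle\oradata\limsprod\limss_temp.f'
  AUTOEXTEND ON NEXT 32M MAXSIZE 2000M;
```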

Multiple DBWRs and tempfiles ...

VKOUL, May 22, 2006 - 2:09 pm UTC

Hi Tom,

Do multiple DBWRs have any performance benefit for tempfiles?

Do checkpoints apply in any way to tempfiles?

Thanks


Tom Kyte
May 22, 2006 - 4:08 pm UTC

temporary data for sorting/hashing and such is written using direct IO - you write it as it exceeds the workareas allocated for it, not dbwr.



Multiple DBWRs and tempfiles ...

VKOUL, May 22, 2006 - 7:57 pm UTC

Thanks for your response ...

During our application testing with 2K concurrent users we saw a lot of file header waits, our application is using a lot of temporary tablespaces (with very few tables having indexes too) ...

*********************************************************
Buffer wait Statistics for DB: PERF Instance: perf Snaps: 12 -13
-> ordered by wait time desc, waits desc

Class                    Waits  Tot Wait Time (s)  Avg Time (ms)
------------------  ----------  -----------------  -------------
file header block       19,956             57,019          2,857
1st level bmb           14,054              3,179            226
data block              22,081              2,872            130
undo header              9,106                387             43
undo block                 722                 60             83
segment header             171                  9             50
system undo header          10                  0             27
2nd level bmb                8                  0             13
bitmap index block          53                  0              1
*********************************************************

We queried the files having the waits on file header block, we narrowed it down to TEMPFILE for TEMPORARY TABLESPACE (2nd block)

We thought it might be because there is just one file used for the TEMPORARY TABLESPACE, so we split the TEMPORARY TABLESPACE into a number of files, but the results were the same. The TEMPORARY TABLESPACE was on a disk on the server with minimal disk queue length.

Next,
1. We moved the TEMPORARY TABLESPACE to SAN on raw partition.
2. Setup multiple DBWRs.

After the tests, the results were very promising ...

Due to shortage of time we couldn't test individually what actually made the difference. I had a feeling multiple DBWRs would not make any difference to TEMPFILES, so the only thing that seemed relevant was moving the TEMPORARY TABLESPACE to the SAN. But if there was not much disk queueing for the TEMPORARY TABLESPACE when it was on a server hard drive, was it TEMPORARY TABLESPACE contention, or a problem with the file system???

Oracle9i On SUSE Linux ...

Your thoughts will be appreciated ...

Thanks

Tom Kyte
May 23, 2006 - 7:25 am UTC

if it is taking an average of 2 seconds - something is wrong there. 2 seconds is "glacial" in terms of response time.

sounds like something with the file system there wasn't right.

"selected columns" affect the size of temporary tablespace usage?

A reader, June 05, 2006 - 9:05 am UTC

Hi Tom,

Does the number of selected columns affect the amount of temporary tablespace used? e.g.

1) select a, b, c, d, e from table order by z
2) select a from table order by z

Assume a, b, c, d and e each hold 1000 characters. Does query (1) need more temporary space for the sorting than (2)?

Thanks
David

Tom Kyte
June 05, 2006 - 9:40 am UTC

absolutely yes.
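One way to see this for yourself is to run each ORDER BY in one session and, from a second session, watch how much temp it allocates. A sketch (V$SORT_USAGE is the view discussed earlier in this thread; in later releases the same information appears as V$TEMPSEG_USAGE):

```sql
-- run from a second session while the ORDER BY query is executing;
-- BLOCKS will be noticeably larger for the wide select list
SELECT s.username, u.tablespace, u.segtype, u.blocks
  FROM v$sort_usage u, v$session s
 WHERE u.session_addr = s.saddr;
```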

A reader, June 05, 2006 - 10:58 am UTC

Hi Tom,

Does this mean that Oracle will put columns a, b, c, d, e and z into the temporary space and then use z for the sorting in query 1? Why does Oracle not put only column z and the rowid into the temporary space for the sorting, and then use the rowid to retrieve columns a, b, c, d and e?

Thanks
David


Tom Kyte
June 05, 2006 - 11:31 am UTC

because that would kill the performance of any query of any "large size".

to have to go back and get the rows again - that would kill it.


That is why a query like:

select * from t order by z;

will NOT use an index on Z to retrieve the data!!! It will full scan and sort it, it would be painful to read the table rowid by rowid.

Buffer waits on temorary tablespace

Yoav, June 07, 2006 - 2:59 am UTC

Hi Tom,
Over the last few days I have seen a lot of buffer waits on the temporary tablespace.
The following results are from a 1 hour statspack report.
What could be the reason for buffer waits in the temporary tablespace, and what is the right way to treat it?
Regards


Tablespace       Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd   Writes  Av Writes/s  Buffer Waits
----------  ----------  ----------  ---------  ----------  -------  -----------  ------------
DATA         3,357,740         933        4.4         1.5  131,578           37             5
TEMP         1,625,571         452        4.8        15.6   21,316            6         4,534
IDX            273,844          76        8.8         3.2   37,125           10           560

Tom Kyte
June 07, 2006 - 7:09 am UTC

true temp? do you use global temporary tables? nothing permanent in there, right - I mean "temp" is just a name after all, it implies nothing about the contents.

Comment on the TEMP tablespace and multiple DBWR post

Tanel Poder, June 08, 2006 - 8:32 am UTC

I think poster VKOUL experienced the file level write lock issue.
Everybody tried to write to temporary tablespace with direct writes (no DBWR involved) and as there is no concurrent IO option on Linux filesystems, only one write per file could happen at any time.

That's why moving to RAW devices helped, writes could start happening concurrently without serialization on file level write locks.

And the reason the users' IO queues didn't show anything was that the writes only reach the IO queues after they've obtained the file write lock and actually been submitted to the corresponding block device...


Tom Kyte
June 08, 2006 - 9:26 am UTC

Tanel - Thanks much!

How accurate is using “explain plan for” for estimating TEMP segment size?

Sasmit, June 21, 2006 - 6:44 am UTC

Explain plan for select a.object_name from t1 a, t2 b where a.object_id=b.object_id order by 1;
estimates temp size far more than actually required. Is there any other way to estimate temp usage?


Tom Kyte
June 22, 2006 - 10:23 am UTC

not entirely - it is very very VERY much dependent on the pga workarea allocated, and that varies these days based on the load.

It is a guess-timate, but the guess could be right sometimes, wrong others - based on the current server workload.

Temp usage

Sasmit, June 22, 2006 - 11:52 pm UTC

Many thanks Tom.
Per my observation the guess-timate is always on the higher side, so it could be taken as the minimum space requirement when designing the TEMP tablespace.



Buffered io on temporary tablespace ???

Andr, July 17, 2006 - 4:27 am UTC

Hi Tom,
13 Mar 2006 here you said "
undo is buffered in the cache. transactions do not write to undo, dbwr does.
temp data is read/written using direct IO - no cache, transactions do write to temp".

But sometimes I see in v$session_wait:
1)
EVENT -> "free buffer waits"
P1TEXT = file#, P2TEXT = block#, P3TEXT = set-id#
P1 = 0201, P2 = 307735, P3 = 4, seconds_in_wait = 2, state = WAITING
Why does a process wait for a free buffer for a block from a temporary file
(db_files = 200)?

2) Sometimes I see
"direct path write" with file number = 201, but sometimes
"write complete waits" with file# = 0201 (not a direct write! -
waiting for DBWR?)??



Tom Kyte
July 17, 2006 - 1:33 pm UTC

do you use global temporary tables.



Buffered io on temporary tablespace ???

Andr, July 18, 2006 - 4:15 am UTC

Hi, Tom
"do you use global temporary tables"
Yes.
Are temporary table blocks written by DBWR?
But how can I see the temporary blocks in the buffer cache? I don't see these blocks in x$bh. In x$bh there are only blocks with file# -> v$datafile. - ?

Buffered io on temporary tablespace

Andr, July 18, 2006 - 8:53 am UTC

Hi, Tom
Thank you very much !!!!
I'm sorry for my inattention.

And column ts# -> temporary tablespace.

Buffered io on temporary tablespace

Andr, July 19, 2006 - 5:30 am UTC

Hi Tom
Could you explain why the session waits on "write complete waits" for temporary tables? For what purpose?


Tom Kyte
July 19, 2006 - 9:17 am UTC

because the session needs a buffer block and that block has to be written out first.

Buffered io on temporary tablespace

Andr, July 19, 2006 - 10:39 am UTC

"Followup:
because the session needs a buffer block and that block has to be written out first."

But why does the session have to wait for the write of the temp block? I don't understand this. Why can't it change the temporary table buffer in memory only? For an ordinary data block, for example, if the block is in the process of being written, the session creates a clone and doesn't wait for the write.

Tom Kyte
July 20, 2006 - 7:44 am UTC

you need the block for something else.

write complete wait on sort

Roderick, July 19, 2006 - 4:36 pm UTC

What might be happening is that the extent may have changed ownership from being used for a temp table to being reused for a sort. The sort will attempt to get direct access to the block but needs to wait for any dirty version in the cache to be flushed first (write complete wait), to avoid a race condition between a direct write operation and a non-direct flush (by DBWn) that could result in garbled sort data.

what's the difference between MAX_BLOCKS and MAX_USED_BLOCKS

jianhui, July 19, 2006 - 7:54 pm UTC

Hello Tom,
In v$sort_segment, quoted from oracle9iR2 document.
<quote>
MAX_BLOCKS      NUMBER  Maximum number of blocks ever used
MAX_USED_BLOCKS NUMBER  Maximum number of blocks used by all sorts
</quote>

Could you tell the difference between these two?

Sincerely,

Tom Kyte
July 22, 2006 - 4:26 pm UTC

max_used_blocks is reset during a restart (for the life of the instance), the other survives restarts (comes from the temp segment itself)
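A sketch of comparing the two side by side (interpretation per the followup above: one survives restarts, one is reset at instance startup):

```sql
SELECT tablespace_name,
       max_blocks,       -- survives instance restarts (kept in the segment)
       max_used_blocks   -- reset each time the instance starts
  FROM v$sort_segment;
```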

I've figured it out

jianhui, July 19, 2006 - 8:52 pm UTC

Hi Tom,
It seems I have figured out the difference: if global temporary tables are created, these two columns differ.

Maybe there is something else as well - could you list a few?

Sincerely,


write complete waits on temporary tablespace

Andr, July 20, 2006 - 2:16 am UTC

Hi, Tom

I asked why the session has to wait for the write of the temp block. Why can't it change the temporary table buffer in memory only?

Roderick said "Sort will attempt to get direct access
to the block but needs to wait for any dirty version in cache to be flushed first (write complete wait) to avoid a race condition between a direct write operation and non-direct flush". My supposition was similar.
But I see the session waiting on the write of its own temp block
(v$sort_usage shows this block was created by this session). And that confuses me.

Buffered io on temporary tablespace

A reader, July 21, 2006 - 2:28 am UTC

Hi, Tom
You said "because the session needs a buffer block and that block has to be written out first"
Could you explain it in more detail ? (for the temp block)

Tom Kyte
July 22, 2006 - 5:52 pm UTC

see above, Roderick gave a "for example"

next size in tablespace

don, August 29, 2006 - 8:47 am UTC

Hi Tom,

We are using the following syntax:
create tablespace s_ts_sf_data datafile
'/oracle08/oradata/KOOTT/s_ts_sf_data01.dbf' size 2000m reuse autoextend on next 100m maxsize 4000m;
Do you think next should be 10m instead of 100m?


Tom Kyte
August 29, 2006 - 3:21 pm UTC

why?

you tell me, do you want the file to grow 100m at a time or 10m?

also, you should add 64k (128k if you have a 32k blocksize) to the initial size of the datafile - to accommodate the bitmap file header - if you are using uniform sizes.


I guess I could say this:

if you are using uniform sizes, make the next be some multiple of your extent size (and absolutely add 64k to the initial - resize the existing file to be 64k larger so as to not 'waste' an extent)



temporary tablespace

Raghav, September 18, 2006 - 7:33 am UTC

Hi Tom,

I have gone through the link and it is very useful.
when we have created a temporary tablespace using...

create temporary tablespace test_temp
tempfile '/oradata/test.dbf' size 512k reuse
extent management local
uniform size 64k

it creates a physical file "test.dbf" in the specified folder.

1. If we are creating the temporary tablespace from enterprise manager, how can we specify the "reuse" command?

2. "Followup: You should be using true temporary tablespaces (you have a permanent tablespace marked temporary -- you have "data files", not "temp files")"

In the above example, we have created the tablespace with the file "test.dbf". Here the file name's extension is .dbf (database file).
Similarly, the temp tablespace created by the system along with the other (non-user-defined) tablespaces when the database is created also has a ".dbf" file extension.

How can we treat this as a temporary file?

3. Whenever we run a dbms_job, the temporary tablespace runs out of space. We increased the size from 4GB to 6GB and got the same error. We again increased it from 6GB to 8GB. Even after that, we are still getting this error. When no job is running, the space used in the temp tablespace is zero. We are unable to identify how much space it requires to complete the process. We can increase the tablespace by another 5GB but, keeping the hard disk as a constraint:
How can we identify / sort out this issue?
What alternatives do we have to avoid this error?

4. I request you to clarify.... when we create a temporary tablespace using

create temporary tablespace "test" .....
create tablespace "test" ...... temporary.

As I understand from the link, if I am not wrong (correct me if I am), the first one will create a tempfile, and for the second one we need to give a datafile even though we say "temporary" at the end. Right? If so, other than the file name we give, is there any other difference between the two statements above
(performance/treatment/or .....??)

Thanks in advance
Raghav

modelling temp usage

Ryan, November 14, 2006 - 3:03 pm UTC

If I have a given query whose explain plan is stable with data set X, is there a way to get a reliable estimate of how much temp space I will need as the data grows, assuming the plan does not change?

Is it always a linear increase?

Is there a way to estimate how big a hash table will be if I use a certain plan given a certain data set?

Tom Kyte
November 15, 2006 - 6:46 am UTC

no, it would not be linear, it depends on whether it would be a single pass or a multi-pass operation, how much pga you are allowed to use (eg: it could use different amounts of temp given the SAME INPUT data at different times in the day)

not using pga aggregate target

Ryan, November 15, 2006 - 2:25 pm UTC

I am working on a batch reporting and batch processing system. At times we have just a few sessions running that do very large sorts. It does not appear that 5% of the PGA will be optimal for us, which I believe is all we can use with pga_aggregate_target. There are times when we may have one process running and we would want it to use basically all of the PGA available.

Do you know of anyone going back to sort_area_size for systems like this?

I thought about opening a TAR and asking support about the undocumented parameter _pga_max_size. Have you used this? I am not a fan of undocumented parameters.

Tom Kyte
November 16, 2006 - 8:15 am UTC

don't touch it, if you think you know better, just ALTER YOUR SESSION and set the workarea policy to manual and set your sort/hash_area sizes - you can use both auto pga and manual memory management.
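A sketch of the per-session switch Tom describes (the 500M value is purely illustrative; pick sizes appropriate to your batch workload):

```sql
-- opt this one session out of automatic PGA management
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 524288000;   -- e.g. 500M for a big batch sort
ALTER SESSION SET hash_area_size = 524288000;

-- ...run the large batch operation here...

-- return the session to automatic management afterwards
ALTER SESSION SET workarea_size_policy = AUTO;
```

Other sessions continue to use pga_aggregate_target as before; only this session is managed manually.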

modelling for single and multi-pass pga usage

Ryan, November 15, 2006 - 2:27 pm UTC

Is there a way to develop a model for this? We do not have production data. If I have a data set of 'X' size and I have 'X' memory, etc...

I know in production I will have 'Y' data and 'Y' memory and I know what my explain plan will be.

Is it possible to try to determine a model so I can to some degree predict PGA usage? We are developing code and designing our system without a full set of data. It has not been converted yet. I can't do anything about this.

Tom Kyte
November 16, 2006 - 8:16 am UTC

not that I am aware of, and remember, it'll change from release to release as well.

Using auto, it does a "best fit", it is adaptive and dynamic.

Temp Space Monitoring

A reader, November 17, 2006 - 3:36 am UTC

Hi Tom,

1. I faced a situation similar to the one posted here under "Temp Tablespace Resizing" on May 18, 2004.

In my case the tempfile was allocated 5GB and I tried to resize it to 2GB while no active transactions were going on. I still got the error:
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Can you explain why this happens?


2. Free blocks information on v$temp_space_header and v$sort_segment differs

SQL> select a.tablespacE_name,a.blocks_free,b.free_blocks
  2  from v$temp_space_header a,v$sort_segment b
  3  where a.tablespace_name=b.tablespace_name;

TABLESPACE_NAME                BLOCKS_FREE FREE_BLOCKS
------------------------------ ----------- -----------
TEMP1                           37376         896
TEMP_DATA                           0      262016
TEMP                                0      639872

Can you explain this?

Thanks in advance for both answers.

Regards, 

Tom Kyte
November 17, 2006 - 3:43 am UTC

temp space, once allocated - is just that, allocated.

It is then managed internally, so we don't have to keep allocating/deallocating/allocating/deallocating.

You want to make it small?

a) create new small temp
b) alter the database to point to this as the default temporary tablespace
c) drop old.


the "free" stuff is stuff that is allocated to temp but not currently in use.


It seems like a waste of time most of the time to "deallocate" temp and make it smaller - it'll just grow again (why pay the price of growing again, it isn't like you GAIN anything)
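A sketch of the create-new / switch-default / drop-old sequence above (tablespace names, file path, and sizes are illustrative; step b requires 9i or later):

```sql
-- a) create a new, smaller temporary tablespace
CREATE TEMPORARY TABLESPACE temp_small
  TEMPFILE '/u01/oradata/temp_small01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- b) make it the database default temporary tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_small;

-- c) drop the old one once no sessions are still using it
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
```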

Temp Space Monitoring

A reader, November 17, 2006 - 3:52 am UTC

Tom,

Thanks a lot for an immediate response.

Now, coming to my requirement: I want to monitor the space available in the temporary tablespace (including the max space to which it can grow). This monitoring is required when users run batch jobs (which at times fail due to lack of space in the temp tablespace). Is it enough to query v$sort_segment for the free space available, along with the max size from dba_tablespaces? Do you have any script for this?

thanks in advance

Tom Kyte
November 17, 2006 - 8:01 am UTC

I do not have your particular script no, but it would not be hard to write (dba_temp_files is what you want for max growth)
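A sketch of such a monitoring query, joining dba_temp_files (allocated size and autoextend ceiling) to v$sort_segment (current usage) - a starting point rather than a polished script:

```sql
SELECT f.tablespace_name,
       SUM(f.bytes)                    allocated_bytes,
       SUM(f.maxbytes)                 max_bytes,   -- ceiling if AUTOEXTEND is on
       MAX(s.used_blocks) * t.block_size used_bytes
  FROM dba_temp_files  f,
       v$sort_segment  s,
       dba_tablespaces t
 WHERE f.tablespace_name = s.tablespace_name
   AND f.tablespace_name = t.tablespace_name
 GROUP BY f.tablespace_name, t.block_size;
```

Note MAXBYTES is 0 for files without autoextend; treat allocated_bytes as the ceiling in that case.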

Index Tablespace with 95% Fragmentation!

Maulesh Jani`, November 18, 2006 - 5:22 am UTC

Hi Tom,
Thanks as always for your valuable support. Right now I am monitoring a production DB server from a remote location. Two problems have come out of this monitoring and I need your comments on them.

1) We have a mixed DB server, which behaves somewhat like DSS for the night batch jobs and like OLTP during the day. Within each schema the tables are mostly separated between these two. I noticed that in a lot of the tables that are usually part of the OLTP load, the parameters PCTUSED, PCTFREE and FREELISTS are at their defaults. We have LMTs with manual segment space management. I now think it would be better to move all these tables to ASSM; as per my thinking, using ASSM in an LMT for OLTP tables will improve our performance.
What is your comment - should I go for it, or will the benefit not justify the effort and risk, if any?
2) I have a few index tablespaces with fragmentation above 90%. I tried to coalesce them, but it didn't work. Can you please give me your ideas on how much damage such tablespaces can do (in terms of performance and space), and a method for resolving it?

We are running Oracle 9i R2 on HP-UX.
Regards
MJani

can or can't add temprary tablespace group to oracle 8i ?

A reader, December 19, 2006 - 11:37 am UTC

Hi Tom,

I tried to add a temporary tablespace group to an existing database. It works great with Oracle 10g but does not work with Oracle 8i.

I did not find the related information online, and I think posting it here might save others time:

Oracle 8i does not support grouping multiple temporary tablespaces by creating a tablespace group?

Thank you very much!




Tom Kyte
December 19, 2006 - 12:50 pm UTC

that was a new 10g feature
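A sketch of the 10g syntax (tablespace, group, file, and user names are illustrative):

```sql
-- create two temporary tablespaces in the same group;
-- the group is created implicitly on first reference
CREATE TEMPORARY TABLESPACE temp_a
  TEMPFILE '/u01/oradata/temp_a01.dbf' SIZE 1000M
  TABLESPACE GROUP temp_grp;

CREATE TEMPORARY TABLESPACE temp_b
  TEMPFILE '/u01/oradata/temp_b01.dbf' SIZE 1000M
  TABLESPACE GROUP temp_grp;

-- the group name can then be used wherever a temporary
-- tablespace name is expected
ALTER USER scott TEMPORARY TABLESPACE temp_grp;
```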

Marc, January 23, 2007 - 11:18 pm UTC

Tom, we are planning to move our temporary tablespace from our NAS to local disk to increase performance. We are going to create a new tablespace and just assign the users the new tablespace. My question is: do I need to disconnect the affected users to change their default temp tablespace, or will this take effect on the fly?

Normal?

A reader, February 28, 2007 - 12:25 am UTC

Greetings Tom

The following does not appear to be normal..

  select  a.file#,
          a.phyrds,
          a.phyblkrd,
          round(a.readtim/decode(a.phyrds,0,1,a.phyrds),3)    readtim,
          a.phywrts,
          a.phyblkwrt,
          round(a.writetim/decode(a.phywrts,0,1,a.phywrts),3) writetim
  from v$tempstat a
  order by file#
 /


    File        Reads      Blks_Rd     Avg_Time       Writes     Blks_wrt     Avg_Time
-------- ------------ ------------ ------------ ------------ ------------ ------------
       1      279,891      281,382            0      378,277      378,277            1
       2       70,033    1,157,074            4       10,692    1,290,880          222
       3       27,491    1,114,104            9       11,686    1,197,601          323
       4       99,165    1,313,360            4       13,284    1,380,219          257
       5      151,387    1,473,155            4       13,732    1,533,735          362
       6       77,311    1,159,334            5       70,459    1,219,550           39
       7      136,733    1,227,192            3       10,711    1,282,271          311
       8       87,413    1,230,690            6       10,749    1,285,902          440
       9       85,144    1,188,610            6       10,357    1,242,280          365
      10           10           10            1            0            0            0
      11       84,258      990,325            7        8,749    1,043,224          297
      12       70,942      813,023            5        7,285      865,793          276
      13       28,634      865,670            8       26,182      903,974           58
      14       77,246    1,054,535            6       11,410    1,056,548          223
      15      115,913    1,156,369            4       78,262    1,175,799           44
      16           10           10            0            0            0            0
      17       11,205      886,452           26        7,434      893,405          176
      18      132,498    1,040,935            3        9,372    1,083,722          170
      19      191,877    1,413,154            5      127,029    1,509,655           39
      20       41,459      966,304            6        8,590    1,051,339          169
      21      119,886      912,662            4        8,301      996,285          199
      22      155,465    1,091,724            3      126,816    1,189,755           22
      23       43,668    1,058,791           10        9,344    1,125,890          318
      24       73,210      803,715            5        7,368      882,087          260
      25      163,368      869,135            3      122,712      965,889           26
      26       13,780      809,438           18        7,027      854,650          244
      27      196,967    1,116,985            3      100,756    1,123,554           34
      28      144,314      988,227            4      169,814    1,056,887           26
      29       16,276      783,501           17       32,227      801,956           96
      30       84,191      944,587            5        8,106      951,679          374
      31      166,717    1,249,694            4       83,830    1,398,465           62
      32       23,745      931,939           16       24,096    1,109,298          100
      33      108,990      948,418            4       45,972    1,078,325           57
      34      199,202    1,322,186            3      104,199    1,378,361           46
      35       79,224    1,249,676            7       62,844    1,404,363           77
      36      101,064    1,204,011            4       18,143    1,223,323          149
      37      122,104    1,106,852            4       71,167    1,239,589           40
      38       25,724    1,296,455           19       13,501    1,421,807          245
      39      160,682    1,327,691            3       12,262    1,456,099          342
      40      136,888    1,214,210            3       72,436    1,361,084           49
      41       22,191    1,059,094           11        9,882    1,197,627          270
      42       26,825    1,341,196           12       14,663    1,466,815          196
      43      103,395    1,437,977            4       72,141    1,584,413           59

Tom Kyte
February 28, 2007 - 3:01 pm UTC

I see a query followed by output.

I expect to see output following a query.

Hence it looks normal to me.

Perhaps if you shared with us what concerns you - instead of having us guess

A reader, March 01, 2007 - 7:18 am UTC

Apologies Tom. These are the statistics collected for a large batch job. The batch job uses both sorting and hashing, and since real memory is obviously much smaller, single-pass operations are required for both the sorting and the hashing.

My question relates to the average wait time for the temp writes. Is it high because of RAID 5 (though the batch job was not CPU bound), or would you consider this normal?

Tom Kyte
March 02, 2007 - 11:31 am UTC

you are doing large IO's, are you sure you think that is "high" time wise

dbms_xplan and temp space usage

Suresh, March 28, 2007 - 11:25 am UTC

Tom,

As you have mentioned:
"dbms_xplan in 9ir2 can be used to show you the estimated temp space needed by a query."

Is this true for hash joins also, when the temp space is used for hash joins? For example, I had a query which filled up 90 GB of temp space (it ran for 2 hours and gave an unable-to-extend error without returning any records) and all it does is hash joins. But I cannot see any temp space mentioned in the dbms_xplan output. Is there any other way to find the temp space needed before running the query, or am I doing something wrong? Thanks a lot.

SQL>explain plan for
SELECT /*+ USE_HASH(sd od cx1 cx2) */
px.GSC_PRODUCT_ID, 
px.GSC_UOM_ID, 
lx2.GSC_LOCATION_CODE, 
px.SECTOR_ID, 
sd.LOAD_SOURCE, 
sd.ORDER_SHIP_ID, 
od.ORDER_NUMBER, 
od.ORDER_LINE_NUM, 
od.ORDER_TYPE_ID, 
od.LINE_TYPE_ID, 
od.REQ_SHIP_DATE, 
sd.ACT_SHIP_DATE, 
od.REQ_DELIVERY_DATE, 
sd.BRANCH_ID, 
od.QUANTITY_ORDERED, 
sd.QUANTITY_SHIPPED,  
od.QUANTITY_CANCELLED
FROM 
ODS_SHIP_DETAIL sd, 
ODS_ORDER_DETAIL od, 
ODS_GSC_PRODUCT_XREF px, 
ODS_GSC_LOCATION_XREF lx1, 
ODS_GSC_LOCATION_XREF lx2,
 ODS_GSC_COMMON_XREF cx1, 
ODS_GSC_COMMON_XREF cx2
WHERE sd.LOAD_SOURCE = od.LOAD_SOURCE AND sd.ORDER_ID = od.ORDER_ID AND sd.LOAD_SOURCE IN ('US','CP','AN','FE')
AND sd.PROD_ID = px.SOURCE_PROD_ID  AND od.SOURCE = px.SOURCE AND sd.UOM_ID = px.SOURCE_UOM_ID 
AND lx1.SOURCE_LOCATION_TYPE ='BRANCH'  AND od.SOURCE = lx1.SOURCE  AND sd.BRANCH_ID = lx1.SOURCE_LOCATION_CODE AND px.SECTOR_ID = lx1.SOURCE_SECTOR_ID
AND lx1.GSC_LOCATION_CODE = lx2.SOURCE_LOCATION_CODE AND lx1.SOURCE_SECTOR_ID = lx2.SOURCE_SECTOR_ID AND lx2.SOURCE = 'SCDP' AND lx2.SOURCE_LOCATION_TYPE IN ('DIST_CENTER','FIELD')
AND od.ORDER_TYPE_ID = cx1.SOURCE_CODE AND od.SOURCE = cx1.SOURCE AND cx1.COMMON_XREF_TYPE = 'ORDER_TYPE' AND cx1.DMGSC_TARGET_ID = 'ORDER_SHIP_DETAIL'
AND DECODE(od.SOURCE,'US','ALL',sd.BRANCH_ID) = cx2.SOURCE_CODE AND od.SOURCE = cx2.SOURCE AND cx2.COMMON_XREF_TYPE = 'BRANCH' AND cx2.DMGSC_TARGET_ID = 'ORDER_SHIP_DETAIL'
AND DECODE(od.SOURCE,'CP', od.OVERALL_DELIVERY_STATUS, 'XX') IS NOT NULL AND od.REQ_SHIP_DATE > SYSDATE - 400



SQL> select * from table( dbms_xplan.display );

PLAN_TABLE_OUTPUT

 
---------------------------------------------------------------------------------------------------------
| Id  | Operation                    |  Name                    | Rows  | Bytes | Cost  | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                          |     1 |   298 | 87888 |       |       |
|   1 |  NESTED LOOPS                |                          |     1 |   298 | 87888 |       |       |
|*  2 |   HASH JOIN                  |                          |     1 |   270 | 87887 |       |       |
|*  3 |    HASH JOIN                 |                          |   282 | 52734 | 61152 |       |       |
|*  4 |     TABLE ACCESS FULL        | ODS_GSC_LOCATION_XREF    |    31 |   713 |     7 |       |       |
|*  5 |     HASH JOIN                |                          |  3870 |   619K| 61144 |       |       |
|*  6 |      TABLE ACCESS FULL       | ODS_GSC_LOCATION_XREF    |    93 |  2139 |     7 |       |       |
|*  7 |      HASH JOIN               |                          |   746 |   102K| 61136 |       |       |
|*  8 |       INDEX FULL SCAN        | ODS_GSC_COMMON_XREF_PK   |    24 |   840 |     1 |       |       |
|*  9 |       HASH JOIN              |                          |   572 | 60632 | 61134 |       |       |
|* 10 |        INDEX FULL SCAN       | ODS_GSC_COMMON_XREF_PK   |    24 |   840 |     1 |       |       |
|  11 |        PARTITION RANGE INLIST|                          |       |       |       |KEY(I) |KEY(I) |
|* 12 |         TABLE ACCESS FULL    | ODS_ORDER_DETAIL         | 61305 |  4250K| 61132 |KEY(I) |KEY(I) |
|  13 |    PARTITION RANGE INLIST    |                          |       |       |       |KEY(I) |KEY(I) |
|* 14 |     TABLE ACCESS FULL        | ODS_SHIP_DETAIL          |    22M|  1779M| 26059 |KEY(I) |KEY(I) |
|* 15 |   TABLE ACCESS BY INDEX ROWID| ODS_GSC_PRODUCT_XREF     |     1 |    28 |     1 |       |       |
|* 16 |    INDEX UNIQUE SCAN         | ODS_GSC_PRODUCT_XREF_PK  |     1 |       |       |       |       |
---------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("SD"."LOAD_SOURCE"="OD"."LOAD_SOURCE" AND "SD"."ORDER_ID"="OD"."ORDER_ID" AND 
              "SD"."BRANCH_ID"="LX1"."SOURCE_LOCATION_CODE")
       filter("CX2"."SOURCE_CODE"=DECODE("OD"."SOURCE",'US','ALL',"SD"."BRANCH_ID"))
   3 - access("LX1"."GSC_LOCATION_CODE"="LX2"."SOURCE_LOCATION_CODE" AND 
              "LX1"."SOURCE_SECTOR_ID"="LX2"."SOURCE_SECTOR_ID")
   4 - filter("LX2"."SOURCE"='SCDP' AND ("LX2"."SOURCE_LOCATION_TYPE"='DIST_CENTER' OR 
              "LX2"."SOURCE_LOCATION_TYPE"='FIELD'))
   5 - access("OD"."SOURCE"="LX1"."SOURCE")
   6 - filter("LX1"."SOURCE_LOCATION_TYPE"='BRANCH')
   7 - access("OD"."SOURCE"="CX2"."SOURCE")
   8 - access("CX2"."COMMON_XREF_TYPE"='BRANCH' AND "CX2"."DMGSC_TARGET_ID"='ORDER_SHIP_DETAIL')
       filter("CX2"."COMMON_XREF_TYPE"='BRANCH' AND "CX2"."DMGSC_TARGET_ID"='ORDER_SHIP_DETAIL')
   9 - access("OD"."ORDER_TYPE_ID"="CX1"."SOURCE_CODE" AND "OD"."SOURCE"="CX1"."SOURCE")
  10 - access("CX1"."COMMON_XREF_TYPE"='ORDER_TYPE' AND "CX1"."DMGSC_TARGET_ID"='ORDER_SHIP_DETAIL')
       filter("CX1"."COMMON_XREF_TYPE"='ORDER_TYPE' AND "CX1"."DMGSC_TARGET_ID"='ORDER_SHIP_DETAIL')
  12 - filter(DECODE("OD"."SOURCE",'CP',"OD"."OVERALL_DELIVERY_STATUS",'XX') IS NOT NULL AND 
              "OD"."REQ_SHIP_DATE">SYSDATE@!-400 AND ("OD"."LOAD_SOURCE"='AN' OR "OD"."LOAD_SOURCE"='CP' OR 
              "OD"."LOAD_SOURCE"='FE' OR "OD"."LOAD_SOURCE"='US'))
  14 - filter("SD"."LOAD_SOURCE"='AN' OR "SD"."LOAD_SOURCE"='CP' OR "SD"."LOAD_SOURCE"='FE' OR 
              "SD"."LOAD_SOURCE"='US')
  15 - filter("PX"."SECTOR_ID"="LX1"."SOURCE_SECTOR_ID")
  16 - access("SD"."PROD_ID"="PX"."SOURCE_PROD_ID" AND "SD"."UOM_ID"="PX"."SOURCE_UOM_ID" AND 
              "OD"."SOURCE"="PX"."SOURCE")
 
Note: cpu costing is off

Tom Kyte
March 28, 2007 - 12:33 pm UTC

it would show a guess - it is guessing "none"

look at your estimated card= values, are they even close, they are very very small....



ops$tkyte%ORA9IR2> create table t1 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA9IR2> create table t2 as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> exec dbms_stats.set_table_stats( user, 'T1', numrows => 1000000000, numblks => 1000000);

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec dbms_stats.set_table_stats( user, 'T2', numrows => 1000000000, numblks => 1000000);

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> @plan "select /*+ USE_HASH(t1,t2) */ * from t1, t2 where t1.object_id = t2.object_id"
ops$tkyte%ORA9IR2> delete from plan_table;

4 rows deleted.

ops$tkyte%ORA9IR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select /*+ USE_HASH(t1,t2) */ * from t1, t2 where t1.object_id = t2.object_id

Explained.

ops$tkyte%ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------

------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes |TempSpc| Cos
------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |  1000M|   186G|       |  15
|*  1 |  HASH JOIN           |             |  1000M|   186G|   104G|  15
|   2 |   TABLE ACCESS FULL  | T1          |  1000M|    93G|       | 961
|   3 |   TABLE ACCESS FULL  | T2          |  1000M|    93G|       | 961
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")

Note: cpu costing is off

16 rows selected.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> exec dbms_stats.set_table_stats( user, 'T1', numrows => 1000, numblks => 100);

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec dbms_stats.set_table_stats( user, 'T2', numrows => 1000, numblks => 100);

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> @plan "select /*+ USE_HASH(t1,t2) */ * from t1, t2 where t1.object_id = t2.object_id"
ops$tkyte%ORA9IR2> delete from plan_table;

4 rows deleted.

ops$tkyte%ORA9IR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select /*+ USE_HASH(t1,t2) */ * from t1, t2 where t1.object_id = t2.object_id

Explained.

ops$tkyte%ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |  1000 |   195K|    25 |
|*  1 |  HASH JOIN           |             |  1000 |   195K|    25 |
|   2 |   TABLE ACCESS FULL  | T1          |  1000 |    97K|    11 |
|   3 |   TABLE ACCESS FULL  | T2          |  1000 |    97K|    11 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")

Note: cpu costing is off

16 rows selected.



dbms_xplan and temp space usage

Suresh, March 30, 2007 - 3:14 pm UTC

Tom,

That's a very nice explanation.
I have tried getting the real plan in order to avoid guesses.
SQL> set autotrace traceonly;
SQL> SELECT /*+ USE_HASH(sd od cx1 cx2) */
  2  px.GSC_PRODUCT_ID, 
  3  px.GSC_UOM_ID, 
  4  lx2.GSC_LOCATION_CODE, 
  5  px.SECTOR_ID, 
  6  sd.LOAD_SOURCE, 
  7  sd.ORDER_SHIP_ID, 
  8  od.ORDER_NUMBER, 
  9  od.ORDER_LINE_NUM, 
 10  od.ORDER_TYPE_ID, 
 11  od.LINE_TYPE_ID, 
 12  od.REQ_SHIP_DATE, 
 13  sd.ACT_SHIP_DATE, 
 14  od.REQ_DELIVERY_DATE, 
 15  sd.BRANCH_ID, 
 16  od.QUANTITY_ORDERED, 
 17  sd.QUANTITY_SHIPPED,  
 18  od.QUANTITY_CANCELLED
 19  FROM 
 20  ODS_SHIP_DETAIL sd, 
 21  ODS_ORDER_DETAIL od, 
 22  ODS_GSC_PRODUCT_XREF px, 
 23  ODS_GSC_LOCATION_XREF lx1, 
 24  ODS_GSC_LOCATION_XREF lx2,
 25   ODS_GSC_COMMON_XREF cx1, 
 26  ODS_GSC_COMMON_XREF cx2
 27  WHERE sd.LOAD_SOURCE = od.LOAD_SOURCE AND sd.ORDER_ID = od.ORDER_ID AND sd.LOAD_SOURCE IN 
 28  ('US','CP','AN','FE')
 29  AND sd.PROD_ID = px.SOURCE_PROD_ID  AND od.SOURCE = px.SOURCE AND sd.UOM_ID = px.SOURCE_UOM_ID 

 30  AND lx1.SOURCE_LOCATION_TYPE ='BRANCH'  AND od.SOURCE = lx1.SOURCE  AND sd.BRANCH_ID = 
 31  lx1.SOURCE_LOCATION_CODE AND px.SECTOR_ID = lx1.SOURCE_SECTOR_ID
 32  AND lx1.GSC_LOCATION_CODE = lx2.SOURCE_LOCATION_CODE AND lx1.SOURCE_SECTOR_ID = 
 33  lx2.SOURCE_SECTOR_ID AND lx2.SOURCE = 'SCDP' AND lx2.SOURCE_LOCATION_TYPE IN 
 34  ('DIST_CENTER','FIELD')
 35  AND od.ORDER_TYPE_ID = cx1.SOURCE_CODE AND od.SOURCE = cx1.SOURCE AND cx1.COMMON_XREF_TYPE = 
 36  'ORDER_TYPE' AND cx1.DMGSC_TARGET_ID = 'ORDER_SHIP_DETAIL'
 37  AND DECODE(od.SOURCE,'US','ALL',sd.BRANCH_ID) = cx2.SOURCE_CODE AND od.SOURCE = cx2.SOURCE AND 

 38  cx2.COMMON_XREF_TYPE = 'BRANCH' AND cx2.DMGSC_TARGET_ID = 'ORDER_SHIP_DETAIL'
 39  AND DECODE(od.SOURCE,'CP', od.OVERALL_DELIVERY_STATUS, 'XX') IS NOT NULL AND od.REQ_SHIP_DATE >
 
 40  SYSDATE - 400
 41  
SQL> 
SQL> /
ODS_SHIP_DETAIL sd,
*
ERROR at line 20:
ORA-01652: unable to extend temp segment by 320 in tablespace BATCH_TEMP


ODS_SHIP_DETAIL is a partitioned table of 101M records with the latest stats, and all the tables in the query are analyzed.
At least in this case, the DBMS_XPLAN estimates/guesses did not help in showing the real temp space usage.
So I checked the stats-gathering process, and I found
dbms_stats.gather_table_stats(p_owner, p_name,
                             estimate_percent=>0.1,  block_sample=>TRUE,
                            cascade=>FALSE);

Do you think that estimate percentage is going to give estimates so wrong that I cannot see ORA-01652 coming toward me? Thanks a lot
Tom Kyte
March 30, 2007 - 4:43 pm UTC

can you look at your plan...

can you tell us if the estimated row counts are *close* to reality


I'm going to guess "no, not close at all" - in which case - I would look at the statistics. You are doing a very very small BLOCK sample - which may not be very representative of what is really there.
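For comparison, a gather along these lines uses row sampling with a larger sample; the percentage here is purely an illustrative assumption to contrast with the 0.1% block sample above, not a recommendation for any particular system:

```sql
-- sketch only: larger row sample plus cascaded index stats; a 0.1%
-- BLOCK sample can badly misrepresent skewed or clustered data.
-- The 10% figure is an assumption -- tune it for your own tables.
begin
   dbms_stats.gather_table_stats(
      ownname          => user,
      tabname          => 'ODS_SHIP_DETAIL',
      estimate_percent => 10,
      block_sample     => FALSE,   -- row sampling, more representative
      cascade          => TRUE );  -- gather index statistics as well
end;
/
```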

dbms_xplan and temp space usage

A reader, March 30, 2007 - 6:43 pm UTC

Tom,
At least the query output should not return 1 row, as the estimated explain plan suggests. It is not even close; it is way off. They wanted to minimize the time consumed by the stats-gathering process, and they reduced the estimate percentage to 0.1% to achieve that.
Thanks a lot Tom.


dbms_xplan and temp space usage

Suresh, April 02, 2007 - 8:30 pm UTC

Tom,
I have been sleeping on this for the past few days. I tried another query, which does disk sorts, but dbms_xplan fails to show them. I compared both the guessed explain plan from dbms_xplan and the real plan from autotrace. Both plans are the same, but the disk sorts (of course using temp space) do not show up in the dbms_xplan output. As always I might be wrong, you are the GURU
SQL> explain plan for SELECT /*+ USE_HASH(i p px lx1 lx2) */ 
  2   px.GSC_PRODUCT_ID, px.GSC_UOM_ID, lx2.GSC_LOCATION_CODE, 
  3  i.INVENTORY_DATE, 
  4  i.SOURCE, px.SECTOR_ID,
  5   i.QUANTITY_ON_HAND,
  6   i.QUANTITY_ALLOCATED, 
  7  i.QUANTITY_AVAILABLE, 
  8  i.QUANTITY_AVAILABLE_90, 
  9   i.QUANTITY_NONALLOCATABLE, 
 10  i.QUANTITY_SOFT_COMMITTED,
 11   i.QUANTITY_INTRANSIT,
 12   i.QUANTITY_TRANSFER,
 13   i.QUANTITY_UNRESTRICTED, 
 14  i.QUANTITY_RESTRICTED, 
 15  i.QUANTITY_BLOCKED,
 16    i.QUANTITY_BLOCKED_RETURNS,
 17   i.QUANTITY_QUAL,
 18   i.QUANTITY_FIXED_ASSET,
 19   i.QUANTITY_POPENDING
 20  FROM ODS_INVENTORY_DAY_SNAPSHOT i, ODS_PRODUCT p, ODS_GSC_PRODUCT_XREF px, ODS_GSC_LOCATION_XREF lx1,
 21   ODS_GSC_LOCATION_XREF lx2
 22  WHERE i.LOAD_SOURCE in ('US', 'CP', 'AN', 'FE')
 23  AND i.PROD_ID = p.PROD_ID  AND i.SOURCE = p.SOURCE  
 24  AND i.PROD_ID = px.SOURCE_PROD_ID  AND i.SOURCE = px.SOURCE AND p.STANDARD_UOM_ID = px.SOURCE_UOM_ID 
 25  AND lx1.SOURCE_LOCATION_TYPE ='BRANCH'  AND i.SOURCE = lx1.SOURCE  AND i.BRANCH_ID = lx1.SOURCE_LOCATION_CODE 
 26  AND px.SECTOR_ID = lx1.SOURCE_SECTOR_ID
 27  AND lx1.GSC_LOCATION_CODE = lx2.SOURCE_LOCATION_CODE AND lx1.SOURCE_SECTOR_ID = lx2.SOURCE_SECTOR_ID 
 28  AND lx2.SOURCE = 'SCDP' AND lx2.SOURCE_LOCATION_TYPE IN ('DIST_CENTER','FIELD')
 29  AND i.INVENTORY_DATE > SYSDATE - 400
 30  ORDER BY px.GSC_PRODUCT_ID, px.GSC_UOM_ID, lx2.GSC_LOCATION_CODE, i.INVENTORY_DATE;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------
| Id  | Operation                    |  Name                       | Rows  | Bytes | Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                             |     3 |   552 | 38241 |       |       |
|   1 |  SORT ORDER BY               |                             |     3 |   552 | 38241 |       |       |
|*  2 |   HASH JOIN                  |                             |     3 |   552 | 38238 |       |       |
|*  3 |    HASH JOIN                 |                             |   171 | 28557 | 37442 |       |       |
|*  4 |     HASH JOIN                |                             |  5909 |   767K| 37400 |       |       |
|*  5 |      HASH JOIN               |                             |     7 |   392 |    15 |       |       |
|*  6 |       TABLE ACCESS FULL      | ODS_GSC_LOCATION_XREF       |    31 |   868 |     7 |       |       |
|*  7 |       TABLE ACCESS FULL      | ODS_GSC_LOCATION_XREF       |    93 |  2604 |     7 |       |       |
|   8 |      PARTITION RANGE ITERATOR|                             |       |       |       |   KEY |    15 |
|*  9 |       TABLE ACCESS FULL      | ODS_INVENTORY_DAY_SNAPSHOT  |  1265K|   92M | 37367 |   KEY |    15 |
|  10 |     TABLE ACCESS FULL        | ODS_GSC_PRODUCT_XREF        | 88584 |  2941K|    19 |       |       |
|  11 |    TABLE ACCESS FULL         | ODS_PRODUCT                 |  1464K|   23M |   789 |       |       |
------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("I"."PROD_ID"="P"."PROD_ID" AND "I"."SOURCE"="P"."SOURCE" AND
              "P"."STANDARD_UOM_ID"="PX"."SOURCE_UOM_ID")

   3 - access("I"."PROD_ID"="PX"."SOURCE_PROD_ID" AND "I"."SOURCE"="PX"."SOURCE" AND
              "PX"."SECTOR_ID"="LX1"."SOURCE_SECTOR_ID")
   4 - access("I"."SOURCE"="LX1"."SOURCE" AND "I"."BRANCH_ID"="LX1"."SOURCE_LOCATION_CODE")

   5 - access("LX1"."GSC_LOCATION_CODE"="LX2"."SOURCE_LOCATION_CODE" AND
              "LX1"."SOURCE_SECTOR_ID"="LX2"."SOURCE_SECTOR_ID")
   6 - filter("LX2"."SOURCE"='SCDP' AND ("LX2"."SOURCE_LOCATION_TYPE"='DIST_CENTER' OR
              "LX2"."SOURCE_LOCATION_TYPE"='FIELD'))
   7 - filter("LX1"."SOURCE_LOCATION_TYPE"='BRANCH')
   9 - filter(("I"."LOAD_SOURCE"='AN' OR "I"."LOAD_SOURCE"='CP' OR "I"."LOAD_SOURCE"='FE' OR
              "I"."LOAD_SOURCE"='US') AND "I"."INVENTORY_DATE">SYSDATE@!-400)

Note: cpu costing is off

35 rows selected.

SQL> set autotrace traceonly;

SQL> SELECT /*+ USE_HASH(i p px lx1 lx2) */ 
  2   px.GSC_PRODUCT_ID, px.GSC_UOM_ID, lx2.GSC_LOCATION_CODE, 
  3  i.INVENTORY_DATE, 
  4  i.SOURCE, px.SECTOR_ID,
  5   i.QUANTITY_ON_HAND,
  6   i.QUANTITY_ALLOCATED, 
  7  i.QUANTITY_AVAILABLE, 
  8  i.QUANTITY_AVAILABLE_90, 
  9   i.QUANTITY_NONALLOCATABLE, 
 10  i.QUANTITY_SOFT_COMMITTED,
 11   i.QUANTITY_INTRANSIT,
 12   i.QUANTITY_TRANSFER,
 13   i.QUANTITY_UNRESTRICTED, 
 14  i.QUANTITY_RESTRICTED, 
 15  i.QUANTITY_BLOCKED,
 16    i.QUANTITY_BLOCKED_RETURNS,
 17   i.QUANTITY_QUAL,
 18   i.QUANTITY_FIXED_ASSET,
 19   i.QUANTITY_POPENDING
 20  FROM ODS_INVENTORY_DAY_SNAPSHOT i, ODS_PRODUCT p, ODS_GSC_PRODUCT_XREF px, ODS_GSC_LOCATION_XREF lx1,
 21   ODS_GSC_LOCATION_XREF lx2
 22  WHERE i.LOAD_SOURCE in ('US', 'CP', 'AN', 'FE')
 23  AND i.PROD_ID = p.PROD_ID  AND i.SOURCE = p.SOURCE  
 24  AND i.PROD_ID = px.SOURCE_PROD_ID  AND i.SOURCE = px.SOURCE AND p.STANDARD_UOM_ID = px.SOURCE_UOM_ID 
 25  AND lx1.SOURCE_LOCATION_TYPE ='BRANCH'  AND i.SOURCE = lx1.SOURCE  AND i.BRANCH_ID = lx1.SOURCE_LOCATION_CODE 
 26  AND px.SECTOR_ID = lx1.SOURCE_SECTOR_ID
 27  AND lx1.GSC_LOCATION_CODE = lx2.SOURCE_LOCATION_CODE AND lx1.SOURCE_SECTOR_ID = lx2.SOURCE_SECTOR_ID 
 28  AND lx2.SOURCE = 'SCDP' AND lx2.SOURCE_LOCATION_TYPE IN ('DIST_CENTER','FIELD')
 29  AND i.INVENTORY_DATE > SYSDATE - 400
 30  ORDER BY px.GSC_PRODUCT_ID, px.GSC_UOM_ID, lx2.GSC_LOCATION_CODE, i.INVENTORY_DATE
 31  ;

12430865 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=38241 Card=3 Bytes=552)

   1    0   SORT (ORDER BY) (Cost=38241 Card=3 Bytes=552)
   2    1     HASH JOIN (Cost=38238 Card=3 Bytes=552)
   3    2       HASH JOIN (Cost=37442 Card=171 Bytes=28557)
   4    3         HASH JOIN (Cost=37400 Card=5909 Bytes=785897)
   5    4           HASH JOIN (Cost=15 Card=7 Bytes=392)
   6    5             TABLE ACCESS (FULL) OF 'ODS_GSC_LOCATION_XREF' (Cost=7 Card=31 Bytes=868)

   7    5             TABLE ACCESS (FULL) OF 'ODS_GSC_LOCATION_XREF' (Cost=7 Card=93 Bytes=2604)

   8    4           PARTITION RANGE (ITERATOR)
   9    8             TABLE ACCESS (FULL) OF 'ODS_INVENTORY_DAY_SNAPSHOT' (Cost=37367 Card=1265111 Bytes=97413547)

  10    3         TABLE ACCESS (FULL) OF 'ODS_GSC_PRODUCT_XREF' (Cost=19 Card=88584 Bytes=3011856)

  11    2       TABLE ACCESS (FULL) OF 'ODS_PRODUCT' (Cost=789 Card=1464580 Bytes=24897860)





Statistics
----------------------------------------------------------
          0  recursive calls
         78  db block gets
     396197  consistent gets
     794054  physical reads
         60  redo size
  516213730  bytes sent via SQL*Net to client
   30663043  bytes received via SQL*Net from client
     828726  SQL*Net roundtrips to/from client
          0  sorts (memory)
          1  sorts (disk)   
   12430865  rows processed


Please check this
1 sorts (disk)

If everything else is the same, why doesn't dbms_xplan show this temp space usage in its result?
Thanks a lot
Tom Kyte
April 03, 2007 - 11:05 pm UTC

are the ESTIMATED CARDINALITIES (the numbers explain plan uses to figure out what the estimated resources would be) close to reality...

I'll ask that over and over and over again....

Because I think "no, they are not"

What was using TEMP ?

Reader, April 03, 2007 - 5:53 pm UTC

If I wish to know who and what was using the temp tablespace, is there a way to find out using Statspack?
If yes, then where should I be looking in the report?
Thanks
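Statspack does not break temp usage down by user; for current usage, the V$SORT_USAGE view mentioned at the top of this thread does. A sketch (column names as of 8i/9i; in 10g and later the same data is exposed as V$TEMPSEG_USAGE, and the view also gains a USERNAME column):

```sql
-- who is using temp space right now, per the earlier V$SORT_USAGE
-- advice in this thread; join back to V$SESSION via the session
-- address to recover the username
select ( select s.username
           from v$session s
          where s.saddr = u.session_addr ) username,
       u.tablespace, u.segtype, u.blocks
  from v$sort_usage u;
```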

dbms_xplan and temp space usage

Suresh, April 04, 2007 - 12:51 pm UTC

Tom,

Please correct me if my understanding is wrong.
The autotrace result says:
12430865 rows processed
But the explain plan's estimated cardinality for the final output is 3:
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=38241 Card=3 Bytes=552)


Not even close. So can I conclude that the cardinality is way off?

Or is there a better way of checking each operation's estimated cardinality against its reality? If you could provide a simple example, that would be great.
Thanks a lot.
Tom Kyte
April 04, 2007 - 3:39 pm UTC

tkprof - use sql_trace=true and tkprof will show the actual card= values for each step. that'll be useful to compare to the explain plan to see where it first goes really wrong.

I would guess "invalid or stale statistics", but it depends on the nature of the query.
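The tkprof approach Tom describes might look like this (the trace file name pattern is an assumption; it varies by platform and release):

```sql
-- 1) trace the session, run the problem statement, stop tracing
alter session set sql_trace = true;
-- ... run the query here ...
alter session set sql_trace = false;

-- 2) the raw trace file lands in user_dump_dest; format it from the
--    OS shell, e.g.:  tkprof <sid>_ora_<spid>.trc report.txt
--    The "Rows" column in the tkprof row-source output shows ACTUAL
--    row counts per plan step, to compare against the explain plan's
--    Card= estimates and see where they first diverge badly.
```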

dbms_xplan and temp space usage

Suresh, April 04, 2007 - 1:28 pm UTC

Tom,

I have checked these.

SQL> select count(*) from ODS_INVENTORY_DAY_SNAPSHOT;

  COUNT(*)
----------
  56735827

SQL> 
SQL> select count(*) from  ODS_PRODUCT;

  COUNT(*)
----------
   1411604

SQL> 
SQL> select count(*) from  ODS_GSC_PRODUCT_XREF; 

  COUNT(*)
----------
     88584

SQL> 
SQL> select count(*) from ODS_GSC_LOCATION_XREF;
  COUNT(*)
----------
     560


but the estimated cardinalities are (I have formatted them):
ODS_INVENTORY_DAY_SNAPSHOT  | 1265K|
ODS_PRODUCT                 | 1464K| 
ODS_GSC_PRODUCT_XREF        | 88584|  
ODS_GSC_LOCATION_XREF       |    31|  
ODS_GSC_LOCATION_XREF       |    93|  



Now, is it safe to assume that for ODS_INVENTORY_DAY_SNAPSHOT (all the partitions have to be accessed, as it is partitioned by inventory date and has 11 partitions with data matching the query predicate) the cardinality is way off, but for the rest of the tables it seems O.K.?

thanks a lot

ORA-01652: unable to extend temp segment by 64 in tablespace TEMP

Gowtham Sen, April 16, 2007 - 12:49 pm UTC

Hi Tom,

I am getting this error quite often.
My temp tablespace is >1 GB.

While running the PL/SQL package, I ended up with the warnings
"Cursor Fetch Error"
"ORA-01652: unable to extend temp segment by 64 in tablespace TEMP"

I haven't been able to solve this problem.

How can I resolve it?
Could you please explain what might be the reason?

Thank you,
Regards,

Gowtham Sen.
Tom Kyte
April 16, 2007 - 1:39 pm UTC

add more space to temp?
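For example (the file paths and sizes here are illustrative assumptions; adjust for your system):

```sql
-- either grow an existing tempfile ...
alter database tempfile '/u01/oradata/ORCL/temp01.dbf' resize 4096m;

-- ... or add another tempfile to the temporary tablespace
alter tablespace temp
  add tempfile '/u01/oradata/ORCL/temp02.dbf'
  size 2048m autoextend on next 256m maxsize 8192m;
```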

How to estimate the required temp tablespace?

Gowtham Sen, April 18, 2007 - 10:25 am UTC

Thank you very much Tom.

As you suggested, I increased the tablespace, and that has solved my problem for now.

But could you please explain how to estimate the temp tablespace required?

With my real data, it would be 30 to 50 GB, and on that data I am applying joins and the MINUS operator.

So in this scenario, how much temp tablespace do we need?

Please suggest how we should approach this.

Thanks in Advance.

Thank you,
Regards,

Gowtham Sen
Tom Kyte
April 18, 2007 - 12:43 pm UTC

use dbms_xplan


big_table%ORA10GR2> delete from plan_table;

3 rows deleted.

big_table%ORA10GR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select * from big_table order by 1,2,3,4,5

Explained.

big_table%ORA10GR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1472477105

----------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |  6700K|   600M|       |   149K  (2)| 00:29:49 |
|   1 |  SORT ORDER BY     |           |  6700K|   600M|  1610M|   149K  (2)| 00:29:49 |
|   2 |   TABLE ACCESS FULL| BIG_TABLE |  6700K|   600M|       |  3616  (11)| 00:00:44 |
----------------------------------------------------------------------------------------

9 rows selected.


the "best guess" is in there - it is not exact, only hindsight will be exact
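For the hindsight side, V$SORT_SEGMENT (mentioned at the top of this thread) keeps high-water marks since instance startup:

```sql
-- peak temp usage observed since startup: MAX_USED_BLOCKS is the
-- overall high-water mark, MAX_SORT_BLOCKS the largest single sort
select tablespace_name, current_users,
       used_blocks, max_used_blocks, max_sort_blocks
  from v$sort_segment;
```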

Estimating temp tablespace

Gowtham Sen, May 08, 2007 - 6:18 am UTC

Thanks Tom.

Yes, you are right. The explain plan gives the temp tablespace required when the data is present. But in my case, I don't yet have the 30 to 50 GB of data.

Could you please suggest possible ways to estimate the temp tablespace.

Thank you,
Regards,

Gowtham Sen.
Tom Kyte
May 10, 2007 - 7:54 pm UTC

what you just said does not make sense.

you say:

a) explain plan shows me the estimate of temp I'll need
b) I don't have that space
c) tell me how to estimate temp

well, C) is answered by A) so, infinite loop results.

background process responsible for writing to TEMP File

A reader, June 27, 2007 - 9:15 am UTC

Thomas, greetings.

I understand that DBWR writes to datafiles.
LGWR writes to redo logs.

I want to know which processes are responsible for writing to the tempfiles, undo tablespace files and control files.

I know that SMON does the clean-up job for temp tablespaces.

Can you please clarify this basic doubt?
Tom Kyte
July 02, 2007 - 10:08 am UTC

undo tablespaces consist of datafiles, so that is DBWR

control files will be written to by things like CKPT (checkpoint process) and other background processes as needed (eg: rman keeps its stuff in there now too)

temp files are written to by your server process (shared servers, dedicated servers). In fact, datafiles can be written to by these processes as well (think direct path operations - dbwr doesn't do the writing in that case)


Excellent

A reader, July 02, 2007 - 11:13 am UTC

Thanks Thomas.
Excellent as always.

Worth waiting for your reply.

dbms_xplan cannot calculate how much temp space our statement will use

A reader, July 04, 2007 - 10:06 pm UTC

Hi Tom, it seems dbms_xplan cannot calculate how much temporary space our SQL statement might use. There is a GROUP BY in our statement, but explain plan does not calculate temporary tablespace usage for us. Why is that? Thank you in advance.

SQL> delete from plan_table;

4 rows deleted.

SQL> explain plan for
  2  select  REV_APPRAISAL_NO ,                                                
  3                CLIENT_ID ,                                                       
  4                rev_RCS_CATEGORY ,                                                
  5                t2.rowid row_id ,                                                 
  6                t2.REV_FROM_DATE,                                                 
  7                t2.REV_to_DATE,                                                   
  8                MAX(T1.APP_APPRAISAL_NO) app_no                                   
  9        from    AMI.RC_REVIEW_TS T2 ,                                             
 10                AMI.APPRAISAL  t1                                                 
 11        where   t2.CLIENT_ID = t1.APP_CLIENT_ID                                   
 12        and     t2.OUTLET_ID  = t1.APP_OUTLET_ID                                  
 13        AND     T2.FAC_APPRAISAL_NO IS NULL                                       
 14        AND     t1.APP_APPRAISAL_FROM_DATE =                                      
 15                   (select  MAX(T5.APP_APPRAISAL_FROM_DATE) APP_Date              
 16                    from    ami.appraisal  t5                                     
 17                    where   t5.APP_CLIENT_ID = t2.CLIENT_ID                       
 18                    and     t5.APP_OUTLET_ID = t2.OUTLET_ID                       
 19                    AND     t2.compare_date >= t5.APP_APPRAISAL_FROM_DATE         
 20                    and     substr(t5.APP_RCS_CATEGORY,1,1) = 'S'                 
 21                    and     t5.APP_REVIEW_DATE is null                            
 22                    and     t5.APP_APPRAISAL_FROM_DATE is not null)               
 23        and     substr(t1.APP_RCS_CATEGORY,1,1) = 'S'                             
 24        and     t1.APP_REVIEW_DATE is null                                        
 25        and     t1.APP_APPRAISAL_FROM_DATE is not null                            
 26        group by REV_APPRAISAL_NO ,                                               
 27                 CLIENT_ID,                                                       
 28                 rev_RCS_CATEGORY ,                                               
 29                 t2.rowid ,                                                       
 30                 t2.REV_FROM_DATE,                                                
 31                 t2.REV_to_DATE 
 32  ;

Explained.

SQL> select * from table(dbms_xplan.display());
Plan hash value: 3422571755

------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |     1 |    63 |  1332   (3)| 00:00:16 |
|   1 |  HASH GROUP BY                     |                 |     1 |    63 |  1332   (3)| 00:00:16 |
|   2 |   VIEW                             |                 |     1 |    63 |  1332   (3)| 00:00:16 |
|*  3 |    FILTER                          |                 |       |       |            |          |
|   4 |     HASH GROUP BY                  |                 |     1 |   176 |  1332   (3)| 00:00:16 |
|*  5 |      TABLE ACCESS BY INDEX ROWID   | APPRAISAL       |     1 |    54 |     0   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                 |     1 |   176 |  1331   (3)| 00:00:16 |
|   7 |        NESTED LOOPS                |                 |     1 |   122 |  1331   (3)| 00:00:16 |
|*  8 |         TABLE ACCESS FULL          | RC_REVIEW_TS    |     1 |    86 |  1325   (3)| 00:00:16 |
|*  9 |         TABLE ACCESS BY INDEX ROWID| APPRAISAL       |     1 |    36 |     6   (0)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN          | I_APP_CLIENT_ID |     5 |       |     2   (0)| 00:00:01 |
|* 11 |        INDEX RANGE SCAN            | I_APP_CLIENT_ID |     5 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------


Tom Kyte
July 05, 2007 - 12:57 pm UTC

sure it did.

it says "in order to process 5 rows, I will need approximately NO TEMP SPACE"

umm, did you see and read the numbers there???!@?!?!?!

max extents limitation

Victor, July 04, 2007 - 11:41 pm UTC

Hi Tom,

I know there is a max extents limitation for normal segments. For example, if the block size is 8K, the limit is 505. Is there the same limitation on temporary tablespaces?

Thanks.
Tom Kyte
July 05, 2007 - 1:06 pm UTC

that was true before my daughter was born perhaps...

and she is 12 now.


there has not been a restriction on extents (unless you call 2 billion of them a hard limit) for more releases than most people that read this site have been using Oracle!!!!

SVRMGR> create tablespace test datafile '/tmp/test.dbf' size 1m default storage( maxextents unlimited );
Statement processed.


SVRMGR> select tablespace_name,  max_extents from dba_tablespaces;
TABLESPACE_NAME                MAX_EXTENT
------------------------------ ----------
SYSTEM                                505
RBS_TS_01                             505
RBS_TS_02                             505
RBS_TS_03                             505
RBS_TS_04                             505
TEMP                                  505
USERS                                 505
TEST                           2147483645
8 rows selected.

SVRMGR> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle7 Server Release 7.3.4.0.1 - Production
PL/SQL Release 2.3.4.0.0 - Production
CORE Version 3.5.4.0.0 - Production
TNS for Solaris: Version 2.3.4.0.0 - Production
NLSRTL Version 3.2.4.0.0 - Production
5 rows selected.
SVRMGR>


(rollback segments are different, they do have a max extents, of 32k)

create table as select *

Dheeraj, July 05, 2007 - 3:40 am UTC

Tom,

When I do create table <tablename> as select * from <other_table>... is there any way that the indexes and the partitions also get copied to the newly created table, similar to the table being selected from?

Thanks,
Dheeraj
Tom Kyte
July 05, 2007 - 1:06 pm UTC

no, not unless you use a "tool" front end that does that.
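One common manual approach (a sketch, not something Tom prescribes here; table names are illustrative) is to generate the source table's index DDL with DBMS_METADATA and adapt it for the copy:

```sql
-- after a plain CTAS, indexes and partitions must be recreated by hand
create table new_table as select * from old_table;

-- generate the index DDL for the source table, then edit the
-- table name in the output and run it against the new table
select dbms_metadata.get_ddl( 'INDEX', index_name, user )
  from user_indexes
 where table_name = 'OLD_TABLE';
```

Partitioning can also be carried into the copy by spelling out a "partition by" clause in the "create table ... as select" itself.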

The default max extent limitation

Victor, July 05, 2007 - 8:30 pm UTC


Thank you, Tom.

Maybe I should ask in this way: if I don't specify 'maxextents unlimited' when creating the tablespace, does the default limitation of 505 apply?

The reason I ask is that our database constantly gets "ORA-1652, unable to extend temp segment by 128 in tablespace TEMP". Our DBA said he created the TEMP tablespace as 20G, which should be big enough. But if the TEMP tablespace has the 505 max extents limitation, then we can only use around 505*128*8K = 500M of temp space, is that correct?
Tom Kyte
July 06, 2007 - 12:27 pm UTC

no it is in general NOT RIGHT, with locally managed tablespaces the default and in fact ONLY setting is unlimited.

and your dba should be using locally managed true temporary tablespaces in which case, the discussion would be "not relevant"

but your DBA would be able to answer this question immediately without us guessing, they already know why temp cannot extend, you only need to ask them.

If they do not, they are not a DBA :)
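A minimal sketch of the locally managed, true temporary tablespace Tom is describing (file name, size and extent size are illustrative assumptions):

```sql
-- a locally managed temporary tablespace: extent management is LOCAL
-- and max extents is effectively unlimited, so the 505 limit is moot
create temporary tablespace temp2
    tempfile '/u01/oradata/temp2.dbf' size 20g
    extent management local
    uniform size 1m;

-- make it the database default so all users pick it up
alter database default temporary tablespace temp2;
```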

temp tablespace maxextents

Roderick, July 06, 2007 - 12:18 am UTC

You can see what max extents is set to by querying DBA_TABLESPACES.

If you hit max extents you will get a different error:
% oerr ora 1630

ORA-1630 max # extents (%s) reached in temp segment in tablespace %s
// *Cause: A temp segment tried to extend past max extents.
// *Action: If maxextents for the tablespace is less than the the system
// maximum, you can raise that. Otherwise, raise pctincrease for the
// tablespace

Actually, another valid (and maybe better) action would be to increase the extent size or find out why so much temp space is needed by the query (or global temp table).

Why does the DBA think 20GB is enough?
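Roderick's suggestion, written out as a query (a sketch; for a locally managed temporary tablespace MAX_EXTENTS should report the effectively unlimited value):

```sql
-- confirm the actual max extents setting instead of assuming 505
select tablespace_name, contents, extent_management, max_extents
  from dba_tablespaces
 where tablespace_name = 'TEMP';
```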

DBMS_XPLAN TempSpc Estimate

Brian Burton, March 05, 2008 - 10:55 am UTC

I understand the TempSpc value in the explain plan is a best guess. Can it be said that if that value is wildly higher than what is available in the database TEMP tablespace, the query is just not likely to run to completion?

Also, the explain plan does not seem to factor in differing values of PGA_AGGREGATE_TARGET. Is the TempSpc value in the explain plan a total sort space estimate?

SORT query becomes expensive after each run

Abhi, April 16, 2008 - 4:01 am UTC

Hi,
I have 2 tables, DESIGNFEATURES and TEMPMAPPING as below.
CREATE TABLE DESIGNFEATURES
( design_id varchar2(30),
ref_feature_number number,
feature_id varchar2(30),
feature_name varchar2(500)
);

and

CREATE TABLE TEMPMAPPING
( design_id varchar2(30),
ref_feature number,
out_feature number,
column_number number);

In DESIGNFEATURES I have say 3 records for available feature numbers from 1 to 10, for given design_id
i.e.
insert into DESIGNFEATURES values ('DID1026',2,'FID19','FEATURE19');

insert into DESIGNFEATURES values ('DID1026',7,'FID1','FEATURE1');

insert into DESIGNFEATURES values ('DID1026',4,'FID9','FEATURE9'); --Note, ref_feature_number will always be unique for each record, for given design_id.

While in TEMPMAPPING I have all 10 records (which means I CAN have total 10 features for given design, identified by their ref_feature_number)
out_feature is some other number given for actual ref_feature_number by <some_order>(say, if you count 1-10 backwards, 10 becomes 1, 9 becomes 2 and so on...)

i.e.
insert into TEMPMAPPING values('DID1026',1,10,1);
insert into TEMPMAPPING values('DID1026',2,9,2);
.
.
insert into TEMPMAPPING values('DID1026',10,1,10);

Now, I have a left outer join and an Order by query over these tables as follows,

select featureMap.*, tp.ref_feature, tp.out_feature, tp.column_number
  from TEMPMAPPING tp
  left outer join ( select df.design_id, df.ref_feature_number,
                           df.feature_id, df.feature_name
                      from DESIGNFEATURES df
                     where df.design_id = 'DID1026' ) featureMap
    on featureMap.ref_feature_number = tp.ref_feature
   and featureMap.design_id = tp.design_id
 order by tp.out_feature

The inserts in DESIGNFEATURES happen only once while inserts in TEMPMAPPING are multiple times (depending upon the <some_order> for out_feature as I said earlier, i.e. now suppose you want to number each increasing even ref_feature_number from 1 to 5 first and then odd ones with 6 to 10, so essentially, each time out_feature will change for given <some_order>)

This same process happens for each distinct design (i.e. with different design_id) but in separate sessions.

So at first, when I run the given left-outer-join-order-by query, it runs very quickly and returns the results in a fraction of a second. But as the usage increases, the query takes longer and longer... reaching up to 10 hrs.

For each distinct design(whenever my processing is completed), I delete whatever is inserted in both the tables, and that too, records in TEMPMAPPING are deleted before they are inserted for each new <some_order>.

Can you please suggest what is happening here and why I am getting a degraded performance after each run of the query?

I also have indexes over both the tables on design_id, ref_feature and out_feature.

The only difference between the test scenario given here and the actual one is that the max available features here is 10, while in actual use it can be up to 244,000 (which means a design can have up to 244,000 features).

Your reply will be most valuable.

Temp segments in temporary tablespace and IO overheads of using global temporary tables

A reader in a desperate need of help, July 03, 2008 - 10:54 pm UTC

Tom, we converted a very complex nightly batch process, which used to take 1/2 hr, to run in real time and return its results in less than a second. To achieve this we used global temporary tables in a database package returning a ref cursor to a web application written in .NET. It works fine with production-size data and multiple users accessing the web application at the same time, returning results in less than a second.
But our DBA is suggesting we rewrite the whole complex database package without using global temporary tables. Per him, global temporary tables have IO overheads and create temporary segments in the temporary tablespace, so they can only be used in a batch environment and are not suited for web applications. He is not even giving us a chance to stress test the application. Could you please help us understand what is correct and what we would need to do?




Tom Kyte
July 07, 2008 - 10:57 am UTC

he is not correct.

first of all, so what that they create temporary segments - so does sorting, hashing - a MILLION things do that. It is, well, expected - normal - and ok.

second - physical IO will only happen if the need for physical IO arises (insufficient memory).

ops$tkyte%ORA10GR2> create global temporary table gtt on commit delete rows
  2  as
  3  select * from all_users where 1=0;

Table created.

Elapsed: 00:00:00.03
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> @mystat physical
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
physical read total IO requests                        0
physical read total multi block requests               0
physical read total bytes                              0
physical write total IO requests                       0
physical write total multi block requests              0
physical write total bytes                             0
physical reads                                         0
physical reads cache                                   0
physical reads direct                                  0
physical read IO requests                              0
physical read bytes                                    0
physical writes                                        0
physical writes direct                                 0
physical writes from cache                             0
physical write IO requests                             0
physical write bytes                                   0
physical writes non checkpoint                         0
physical reads cache prefetch                          0
physical reads prefetch warmup                         0
physical reads retry corrupt                           0
physical reads direct (lob)                            0
physical reads direct temporary tablespace             0
physical writes direct (lob)                           0
physical writes direct temporary tablespace            0
physical reads for flashback new                       0

25 rows selected.

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> insert into gtt select * from all_users;

41 rows created.

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> select count(*) from gtt;

  COUNT(*)
----------
        41

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> @mystat physical
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
physical read total IO requests                        0
physical read total multi block requests               0
physical read total bytes                              0
physical write total IO requests                       0
physical write total multi block requests              0
physical write total bytes                             0
physical reads                                         0
physical reads cache                                   0
physical reads direct                                  0
physical read IO requests                              0
physical read bytes                                    0
physical writes                                        0
physical writes direct                                 0
physical writes from cache                             0
physical write IO requests                             0
physical write bytes                                   0
physical writes non checkpoint                         0
physical reads cache prefetch                          0
physical reads prefetch warmup                         0
physical reads retry corrupt                           0
physical reads direct (lob)                            0
physical reads direct temporary tablespace             0
physical writes direct (lob)                           0
physical writes direct temporary tablespace            0
physical reads for flashback new                       0

25 rows selected.

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> insert into gtt select all_users.* from all_users, (select level from dual connect by level <= 100 );

4100 rows created.

Elapsed: 00:00:00.02
ops$tkyte%ORA10GR2> select count(*) from gtt;

  COUNT(*)
----------
      4141

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> @mystat physical
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
physical read total IO requests                        0
physical read total multi block requests               0
physical read total bytes                              0
physical write total IO requests                       0
physical write total multi block requests              0
physical write total bytes                             0
physical reads                                         0
physical reads cache                                   0
physical reads direct                                  0
physical read IO requests                              0
physical read bytes                                    0
physical writes                                        0
physical writes direct                                 0
physical writes from cache                             0
physical write IO requests                             0
physical write bytes                                   0
physical writes non checkpoint                         0
physical reads cache prefetch                          0
physical reads prefetch warmup                         0
physical reads retry corrupt                           0
physical reads direct (lob)                            0
physical reads direct temporary tablespace             0
physical writes direct (lob)                           0
physical writes direct temporary tablespace            0
physical reads for flashback new                       0

25 rows selected.

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2> insert into gtt select all_users.* from all_users, (select level from dual connect by level <= 100000/2 );

2050000 rows created.

Elapsed: 00:00:12.81
ops$tkyte%ORA10GR2> select count(*) from gtt;

  COUNT(*)
----------
   2054141

Elapsed: 00:00:00.07
ops$tkyte%ORA10GR2> @mystat physical
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
physical read total IO requests                     1375
physical read total multi block requests            1375
physical read total bytes                      168960000
physical write total IO requests                    1375
physical write total multi block requests           1375
physical write total bytes                     168960000
physical reads                                     20625
physical reads cache                                   0
physical reads direct                              20625
physical read IO requests                           1375
physical read bytes                            168960000
physical writes                                    20625
physical writes direct                             20625
physical writes from cache                             0
physical write IO requests                          1375
physical write bytes                           168960000
physical writes non checkpoint                     20625
physical reads cache prefetch                          0
physical reads prefetch warmup                         0
physical reads retry corrupt                           0
physical reads direct (lob)                            0
physical reads direct temporary tablespace         20625
physical writes direct (lob)                           0
physical writes direct temporary tablespace        20625
physical reads for flashback new                       0

25 rows selected.

Elapsed: 00:00:00.01
ops$tkyte%ORA10GR2>


note that no IO was done until we did something really 'big'


and if you have to do big, well, ask your DBA if they really want you to allocate HUGE untunable chunks of pga memory - or if they would like something manageable.



A reader, July 21, 2008 - 7:37 pm UTC

In your "December 9, 2002 - 12pm US/Eastern:" followup for logging server errors in 9i, you used an after servererror trigger. In that trigger, is there any way I can capture the error message itself in addition to the Oracle error number? i.e. I'd like to capture the exact error message

"ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP"

as opposed to just capturing the Oracle error number. The message will be specific. In this case, it will include the name of the tablespace.

Tom Kyte
July 22, 2008 - 11:12 am UTC

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm#i1007895

lists all of the ora_ functions - like ora_server_error_msg for example...
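A sketch of how the full message could be captured in a servererror trigger of the kind shown earlier in this thread (the table T and trigger name are illustrative):

```sql
-- capture the full error text, not just the number:
-- ora_server_error_msg(i) returns the message at position i
-- in the error stack, ora_server_error_depth its depth
create or replace trigger capture_temp_errors
after servererror on database
begin
    if ( is_servererror(1652) )
    then
        for i in 1 .. ora_server_error_depth
        loop
            insert into t values ( ora_server_error_msg(i) );
        end loop;
    end if;
end;
/
```

The message at position 1 would include the specifics, such as the tablespace name in "ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP".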

How to clear content of whole GTT?

Deval, February 08, 2009 - 8:31 am UTC

I have a question:
How do I clear the contents of a whole global temporary table?
Scenario:
We have an application in a 3-tier architecture with a Java server as the middle tier. The middle tier has a CONNECTION_POOL setting, so connections are not released from the DB server. When a session inserts data into the GTT, then due to ON COMMIT PRESERVE ROWS and the connection pooling, the data is never deleted. How do we clear those contents?
Tom Kyte
February 09, 2009 - 6:34 pm UTC

fix the applications?
redo the gtt as on commit delete rows?

why wouldn't you understand how they work and then use them accordingly in the application?

sounds like you want "on commit delete rows", since each connection pool grab should end with commit or rollback by a well written (correctly written, trying to be bug free application) before being released back to the pool
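A minimal sketch of the two options (table and column names are illustrative):

```sql
-- option 1: let the commit at the end of each pooled "grab" clean up
create global temporary table work_rows
( id number, payload varchar2(100) )
on commit delete rows;

-- option 2: if the table must stay ON COMMIT PRESERVE ROWS, a session
-- can clear its own copy explicitly before releasing the connection;
-- truncate on a GTT removes only the current session's rows
truncate table work_rows;
```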

Could Not Track ORA-1652 Error

Nikhil Wani, April 20, 2009 - 4:31 am UTC

Dear Sir,
We implemented
"create or replace trigger failed_to_extend_temp
after servererror on database"
We are seeing ORA-1652 in the alert log for the TEMP tablespace, but entries are not recorded in table "T".
Do we need to bounce the database, or does something special have to be done?
We are on 10.2.0.4 on HP-UX (B.11.11)


Tom Kyte
April 21, 2009 - 1:07 pm UTC

nothing special needs to be done, you must have a bug in your implementation of your trigger.


run the example code as I have in a test database with really small temp. You'll see it "working" (or not) and can debug your implementation there.

If you run my example and it works (UNCHANGED - RUN MY CODE UNCHANGED) then you have a bug in your implementation and you need to - debug it.

make sure the dba hasn't changed _system_trig_enable, a hidden parameter too.


ops$tkyte%ORA10GR2> create table t ( msg varchar2(4000) );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  begin
  4      if ( is_servererror(1652) )
  5      then
  6          insert into t values ( 'ora_sysevent = ' || ora_sysevent );
  7          insert into t values ( 'ora_login_user = ' || ora_login_user );
  8          insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
  9          insert into t select 'open cursor ' || rownum || ' ' || sql_text
 10                          from v$open_cursor where sid = (select sid from v$mystat where rownum=1);
 11      end if;
 12  end;
 13  /

Trigger created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create temporary tablespace test_temp
  2  tempfile '/tmp/test_temp.dbf' size 512k reuse
  3  extent management local
  4  uniform size 64k
  5  /

Tablespace created.

ops$tkyte%ORA10GR2> alter user ops$tkyte temporary tablespace test_temp
  2  /

User altered.

ops$tkyte%ORA10GR2> select * from all_objects
  2  order by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12;
select * from all_objects
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP


ops$tkyte%ORA10GR2> select * from t;

MSG
-------------------------------------------------------------------------------
ora_sysevent = SERVERERROR
ora_login_user = OPS$TKYTE
ora_server_error = 1652
open cursor 1 select t_metrics_id, t_instance_name from WRI$_ALERT_THRESHO
open cursor 2  select count(*)  from xdb.xdb$schema s  where bitand(to_num
open cursor 3 select count(distinct lang_desc) from languages
open cursor 4 select * from all_objects order by 1, 2, 3, 4, 5, 6, 7, 8, 9
open cursor 5 INSERT INTO T VALUES ( 'ora_login_user = ' || ORA_LOGIN_USER
open cursor 6 table_1_ff_154_0_0_0
open cursor 7 table_1_ff_154_0_0_0
open cursor 8 table_1_ff_154_0_0_0
open cursor 9 select value(p$) from "XDB"."XDB$ELEMENT"                 as
open cursor 10 select value(p$) from "XDB"."XDB$ANY"                 as of
open cursor 11 table_1_9_a30f_0_0_0
open cursor 12 select value(p$) from "XDB"."XDB$COMPLEX_TYPE"
open cursor 13 SELECT USER FROM SYS.DUAL
open cursor 14 table_1_ff_163_0_0_0
open cursor 15 table_1_ff_163_0_0_0
open cursor 16 table_1_9_a3a5_0_0_0
open cursor 17 SELECT USER_ID FROM ALL_USERS WHERE USERNAME = :B1
open cursor 18 table_1_9_a21d_0_0_0
open cursor 19 table_1_9_a359_0_0_0
open cursor 20 SELECT /*+ ALL_ROWS */ COUNT(*) FROM ALL_POLICIES V WHERE V.
open cursor 21 select value(p$) from "XDB"."XDB$SCHEMA"                 as
open cursor 22 select value(p$) from "XDB"."XDB$RESOURCE"                 a
open cursor 23 select value(p$) from "XDB"."XDB$SEQUENCE_MODEL"
open cursor 24 select value(p$) from "XDB"."XDB$CHOICE_MODEL"
open cursor 25 INSERT INTO T VALUES ( 'ora_server_error = ' || ORA_SERVER_E
open cursor 26 BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
open cursor 27 INSERT INTO T SELECT 'open cursor ' || ROWNUM || ' ' || SQL_
open cursor 28 table_1_ff_15e_0_0_0
open cursor 29 table_1_ff_15e_0_0_0
open cursor 30 table_1_ff_159_0_0_0
open cursor 31 table_1_ff_159_0_0_0
open cursor 32 table_1_ff_159_0_0_0
open cursor 33  select count(*)  from user$ u, xdb.xdb$schema s  where u.us
open cursor 34 begin     if ( is_servererror(1652) )     then         inser
open cursor 35 select ts.segment_space_management              from dba_seg
open cursor 36 table_1_9_a327_0_0_0
open cursor 37 table_1_9_a2f7_0_0_0
open cursor 38 select value(p$) from "XDB"."XDB$ATTRIBUTE"
open cursor 39 select value(p$) from "XDB"."XDB$ATTRIBUTE"
open cursor 40 table_1_9_a18f_0_0_0
open cursor 41 select value(p$) from "XDB"."XDB$SIMPLE_TYPE"
open cursor 42 INSERT INTO T VALUES ( 'ora_sysevent = ' || ORA_SYSEVENT )
open cursor 43 table_1_9_a33d_0_0_0

46 rows selected.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter user ops$tkyte temporary tablespace temp;

User altered.

ops$tkyte%ORA10GR2> drop tablespace test_temp including contents and datafiles;

Tablespace dropped.

ops$tkyte%ORA10GR2> drop trigger failed_to_extend_temp;

Trigger dropped.


My task

visitory, September 06, 2009 - 12:33 pm UTC

Table A has 100,000 rows, each with a key of 10 bytes, and data of 90 bytes.
Table B has 50,000 rows, each with a key of 20 bytes, and data of 80 bytes.
Table C is a dependent to table A with 10 rows for each parent row in A, each row with a key of 12 bytes, and data of 78 bytes.
Table D facilitates a many-to-many relationship between tables A and B, and has 20 bytes of additional data in each row. 10% of the rows in table A have a relationship with at least one row in table B, and 40% of the rows in B have a relationship with at least one row in table A. On average, only 1% of all possible relationships is actually used (i.e. places data into table D).
Table E facilitates a many-to-many relationship between rows in the same table B, based on a unique key in table B, which is 8 bytes long. On average, three times as many relationships exist between rows as there are rows in B. No additional data are placed in table E.

Each table has an appropriate primary key, which is indexed (using a B-Tree). Each leaf page in the index can hold key entries in 50% of its page size of 8Kb (8192 bytes). Each key entry is (key-length + 4) bytes long.


MY QUESTION IS: how do I compute a size estimate for a tablespace that holds these five tables and their indexes and still has 50% free space after loading all data? Do not estimate any additional space for overhead. Use formulas.

Please help me, I need the answers now.

Tom Kyte
September 07, 2009 - 1:51 am UTC

this is so tiny, why bother.

but, load it, measure it. that is what I would do. It is TINY.

formulas will be guesses only.
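Tom's "load it, measure it" could be sketched like this (assuming the five tables and their indexes have been created and loaded in a scratch schema):

```sql
-- measure the actual allocated space rather than estimating by formula
select segment_name, segment_type, bytes
  from user_segments
 order by segment_type, segment_name;

-- then size the tablespace at twice the total, leaving 50% free
select sum(bytes) * 2 as suggested_tablespace_bytes
  from user_segments;
```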

temporary tbs

Madhavi, December 13, 2009 - 10:20 pm UTC

Hi Tom,

In my Oracle (10.1.0.5.0) system, the SYSTEM temporary tablespace is used by other users even though every user has been assigned their own default temporary tablespace.
Any idea?
When I disable fine-grained auditing, it is no longer used.

Thanks.
Madhavi
Tom Kyte
December 14, 2009 - 9:27 am UTC

prove it, show me the data that you think indicates they are. That would be helpful in providing an answer.
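One way to gather that data would be a query along these lines (a sketch; V$TEMPSEG_USAGE is the later name for V$SORT_USAGE discussed at the top of this thread):

```sql
-- show which sessions hold temp space, and in which temporary tablespace
select s.username, s.sid, s.serial#,
       u.tablespace, u.segtype, u.blocks
  from v$tempseg_usage u,
       v$session s
 where s.saddr = u.session_addr;
```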

Is my TEMP really used or it is FREE?

Bill, January 15, 2010 - 10:06 am UTC

From V$TEMP_EXTENT_POOL, V$TEMPFILE, V$TEMP_SPACE_HEADER, V$TEMPSEG_USAGE, and V$SORT_SEGMENT, I get the following. My question is whether my TEMP is really used (as shown by V$TEMP_SPACE_HEADER) or whether it is free (last query below). I was told that it contains "orphaned extents" and needs to be dropped and recreated. I am not convinced that is the case, but I can't prove that it is not "used", since V$TEMPSEG_USAGE says there is no usage. I am quite confused because I can't figure out whether sum(bytes_used) is actually "used" or simply "allocated" and ready for the next session. Please advise. Thanks.


select sum(bytes)/1024/1024/1024 from v$TEMPFILE;

SUM(BYTES)/1024/1024/1024
-------------------------
27

select sum(bytes_cached)/1024/1024/1024, sum(bytes_used)/1024/1024/1024 from V$TEMP_EXTENT_POOL;

SUM(BYTES_CACHED)/1024/1024/1024 SUM(BYTES_USED)/1024/1024/1024
-------------------------------- ------------------------------
13.6416016 0


SELECT tablespace_name, SUM(bytes_used) / 1024 / 1024/1024, SUM(bytes_free)/ 1024 / 1024/1024
FROM V$temp_space_header
GROUP BY tablespace_name;


TABLESPACE_NAME SUM(BYTES_USED)/1024/1024/1024 SUM(BYTES_FREE)/1024/1024/1024
------------------------------ ------------------------------ ------------------------------
TEMP 13.6464844 13.3535156

1 row selected.

select * from V$TEMPSEG_USAGE;

no rows selected.

select * from V$SORT_SEGMENT;


TABLESPACE_NAME SEGMENT_FILE SEGMENT_BLOCK EXTENT_SIZE
------------------------------- ------------ ------------- -----------
CURRENT_USERS TOTAL_EXTENTS TOTAL_BLOCKS USED_EXTENTS USED_BLOCKS FREE_EXTENTS
------------- ------------- ------------ ------------ ----------- ------------
FREE_BLOCKS ADDED_EXTENTS EXTENT_HITS FREED_EXTENTS FREE_REQUESTS MAX_SIZE
----------- ------------- ----------- ------------- ------------- ----------
MAX_BLOCKS MAX_USED_SIZE MAX_USED_BLOCKS MAX_SORT_SIZE MAX_SORT_BLOCKS
---------- ------------- --------------- ------------- ---------------
RELATIVE_FNO
------------
TEMP 0 0 128
0 13969 1788032 0 0 13969
1788032 0 1885 0 0 13969
1788032 522 66816 522 66816
0


select * from V$SORT_USAGE;

no rows selected.



SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;


TABLESPACE MB_TOTAL MB_USED MB_FREE
------------------------------- ---------- ---------- ----------
TEMP 27648 0 27648

1 row selected.


Tom Kyte
January 18, 2010 - 5:25 pm UTC

you have (not used the code button, but that is another story) temp allocated, but no one is using it right now. correct.


who made up the orphaned extents story?

Thanks for your great support, as ever

A reader, February 03, 2010 - 12:27 am UTC

Hi Tom,

I had a problem: whenever I got the error
ORA-1652: unable to extend temp segment by 256 in tablespace TEMP, I just added 500mb or 800mb to it.

But when I check the mount point (where the temp files are placed), the free space keeps slowly decreasing, even after adding 800mb to the temp file.


Thanks,
Arvind

IDX MAINTENANCE(SORT) leading to ORA-1652?

Swarup, March 04, 2010 - 4:58 am UTC

We have an INSERT statement of the form "INSERT /*+ APPEND */ ... SELECT ... FROM" as part of an ETL tool.
The SELECT query has a single join between two tables, one with 20G of data and the other with 0.5M.
This SELECT query ends up in "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP".
The temp tablespace is 32G. The query plan is given below:

--------------------------------------------------------------------------------------------------
Plan hash value: 2365522509

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 300M| 22G| 776K (2)| 02:35:22 |
|* 1 | HASH JOIN RIGHT OUTER| | 300M| 22G| 776K (2)| 02:35:22 |
| 2 | TABLE ACCESS FULL | TAB_SMALL_TABLE | 50464 | 443K| 171 (1)| 00:00:03 |
| 3 | TABLE ACCESS FULL | TAB_LARGE_TABLE | 300M| 20G| 775K (2)| 02:35:02 |
--------------------------------------------------------------------------------------------

Please help me. I want to know if this query really needs more than 32G of temp space.
Also, the V$SQL_WORKAREA_ACTIVE view showed 6 to 8 "IDX MAINTENANCE(SORT)" operations.
I am worried: why so many index-related operations during a FULL TABLE SCAN?

Thanks
Tom Kyte
March 04, 2010 - 9:56 am UTC

the table you are inserting into - how many indexes does IT have on it that we need to maintain?

eg: the index maintenance happens during the insert phase, not the select.

when you direct path insert, we store the index of the newly inserted data in temp and then merge that in bulk with the existing indexes afterwards.
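If that merged index maintenance is what overflows TEMP, one common workaround (a sketch, not Tom's recommendation here; index and table names are illustrative, and whether rebuilding afterwards is acceptable depends on the environment) is to defer the index maintenance entirely:

```sql
-- mark the target table's indexes unusable so the direct-path
-- insert skips index maintenance (and the temp space it needs)
alter index target_table_idx1 unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into target_table
select * from source_table;
commit;

-- rebuild the index afterwards
alter index target_table_idx1 rebuild;
```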

Unable to extend temp segment issue & it Consumed 25 GB

KK, March 09, 2010 - 2:37 pm UTC

Hi

We received the error below when we refreshed the materialized view in the production environment. My prod environment is Oracle 9i. The script took 25GB of temp tablespace and failed.

*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-12801: error signaled in parallel query server P007
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
ORA-06512: at line 1

We executed the query which was inside that materialized view and got the same error. The list below shows all the tables with their record counts.


SELECT COUNT(*) FROM KAT_FACT --> 22390150
SELECT COUNT(*) FROM KAT_CHECK_FACT --> 1103004
SELECT COUNT(*) FROM KAT_INVC_DIM INVOICE --> 1078143
SELECT COUNT(*) FROM KAT_CHECK_DIM --> 832006
SELECT COUNT(*) FROM KAT_INVC_PAYMT_DIM --> 914917
SELECT COUNT(*) FROM KAT_INVC_PAYMT_SCHED_DIM --> 982658
SELECT COUNT(*) FROM KAT_BTCH_DIM --> 53413
SELECT COUNT(*) FROM KAT_DIM --> 568655
SELECT COUNT(*) FROM KAT_PO_DIM --> 805417
SELECT COUNT(*) FROM KAT_INVC_DSTRBN_DIM --> 2637032

Query:

The SQL query inside that materialized view:

SELECT
ven.KAT_nm supp
, ven. KAT_nbr supp_nbr
, po_head.po_hdr_agent_full_nm BUYER_NAME
, po_head.po_hdr_creation_date po_created
, po_head.PO_HDR_CLOSED_CD po_status
, CASE WHEN po_head.PO_HDR_KEY > 0
THEN po_head.PO_HDR_KAT_PO_NBR
WHEN po_head.PO_HDR_KEY < 0
THEN inv.INVC_E_INVC_PO_NBR
END PO_NUMBER
, inv_dstrbn.dstrbn_proj_nbr JOB_NUMBER
, inv_dstrbn.dstrbn_expenditure_typ job_type
, inv_dstrbn.dstrbn_task_nbr task_number
, inv_dstrbn.dstrbn_coa_lob LOB
, CASE WHEN po_head.PO_HDR_KEY < 0
THEN (0)
ELSE po_head.PO_HDR_PO_SHPMT_AMT
END po_total_value
, NVL(SUM(inv_DSTRBN.DSTRBN_INVC_DSTRBN_AMT),(0)) recognized_cost
, ((CASE WHEN po_head.PO_HDR_KEY < 0
THEN (0)
ELSE po_head.PO_HDR_PO_SHPMT_AMT
END) - SUM(inv_DSTRBN.DSTRBN_INVC_DSTRBN_AMT)) po_outstanding_amount
, CASE WHEN NVL(inv.invc_crncy_exch_rate,1.00000) = 1.00000
THEN NVL(SUM(inv_DSTRBN.DSTRBN_INVC_DSTRBN_AMT),(0))
ELSE (inv.invc_crncy_exch_rate * NVL(SUM(inv_DSTRBN.DSTRBN_INVC_DSTRBN_AMT),(0)))
END recognized_cost_in_local_crncy
, inv.invc_crncy_cd inv_curr
, inv.invc_nbr inv_num
, inv.invc_dt inv_date
, sched.sched_remaining_amt amount_unpaid
, NVL(SUM(inv.INVC_PD_AMT),(0)) amount_paid
, sched.sched_due_dt payment_due_date
, CASE WHEN NVL(paymt.paymt_crncy_exch_rate,1.00000) = 1.00000
THEN NVL(SUM(inv.INVC_PD_AMT),(0))
ELSE (paymt.paymt_crncy_exch_rate
* NVL(SUM(inv.INVC_PD_AMT),(0)) )
END amount_paid_in_local_crncy
, btch.btch_nm Batch_name
, CASE WHEN chk.check_dt = '01-JAN-1960'
THEN NULL
ELSE chk.check_dt
END cheque_date
, chk.check_amt cheque_amount
, chk.check_crncy_cd cheque_curr
, chk.check_nbr cheque_number
, inv.invc_op_unit_org_nm org_name
, 'PASS1' pass_nbr
, paymt.PAYMT_INVC_PAYMT_AMT PAYMT_AMT
, CASE WHEN SCHED_PAYMT_NBR >= 1
THEN SCHED_PAYMT_NBR
ELSE NVL(paymt.PAYMT_PAYMT_NBR,1)
END PAYMT_NBR
, inv.INVC_KAT_PO_HDR_KEY
, KAT_FACT.INVC_KAT_INVC_KEY
, SCHED.SCHED_PAYMT_SCHED_AMT SCHED_AMT
, inv_dstrbn.dstrbn_lgcy_proj_nbr legacy_job_nbr
FROM
KAT_FACT KAT_FACT
, KAT_CHECK_FACT CHECK_FACT
, KAT_INVC_DIM inv
, KAT_CHECK_DIM CHK
, KAT_INVC_PAYMT_DIM PAYMT
, KAT_INVC_PAYMT_SCHED_DIM SCHED
, KAT_BTCH_DIM BTCH
, KAT_DIM ven
, KAT_PO_DIM po_head
, KAT_INVC_DSTRBN_DIM inv_DSTRBN
WHERE
KAT_FACT.SRC_ID = 1
AND KAT_FACT.SRC_ID = CHECK_FACT.SRC_ID
AND KAT_FACT.INVC_KAT_INVC_KEY = CHECK_FACT.INVC_KAT_INVC_KEY
AND KAT_FACT.INVC_KAT_INVC_KEY > 0
AND CHECK_FACT.INVC_KAT_INVC_KEY = inv.INVC_KAT_INVC_KEY
AND KAT_FACT.SRC_ID = inv.INVC_SRC_ID
AND CHECK_FACT.CHECK_KEY = CHK.CHECK_KEY
AND CHK.CHECK_KEY = PAYMT.PAYMT_CHECK_KEY
AND CHECK_FACT.PAYMT_KAT_INVC_PAYMT_KEY = PAYMT.PAYMT_KAT_INVC_PAYMT_KEY
AND CHECK_FACT.SCHED_INVC_PAYMT_SCHED_KEY = SCHED.SCHED_INVC_PAYMT_SCHED_KEY
AND CHECK_FACT.SRC_ID = SCHED.SCHED_SRC_ID
-- AND inv.INVC_PAYMT_STAT_FLG = SUBSTR(SCHED.SCHED_PAYMT_STAT_FLG,1,1)
AND CHECK_FACT.BTCH_KEY = BTCH.BTCH_KEY
AND ven.SITE_KEY = inv.INVC_SLPR_SITE_KEY
AND ven.KAT_SRC_ID = inv.INVC_SRC_ID
AND inv.INVC_KAT_PO_HDR_KEY = po_head.PO_HDR_KEY
AND KAT_FACT.DSTRBN_KAT_INVC_DSTRBN_KEY = inv_DSTRBN.DSTRBN_KAT_INVC_DSTRBN_KEY
AND inv_dstrbn.dstrbn_proj_nbr IS NOT NULL
AND inv_dstrbn.DSTRBN_LN_TYP_LOOKUP_CD NOT IN ('TAX','FREIGHT','MISCELLANEOUS')
GROUP BY
inv.invc_op_unit_org_nm
, ven.KAT_nm
, ven. KAT_nbr
, KAT_FACT.INVC_KAT_INVC_KEY
, inv.invc_nbr
, inv_dstrbn.dstrbn_proj_nbr
, inv.INVC_KAT_PO_HDR_KEY
, po_head.PO_HDR_KEY
, inv.INVC_E_INVC_PO_NBR
, po_head.PO_HDR_KAT_PO_NBR
, chk.check_nbr
, SCHED.SCHED_PAYMT_NBR
, NVL(paymt.PAYMT_PAYMT_NBR,1)
, btch.btch_nm
, inv_dstrbn.dstrbn_coa_lob
, inv_dstrbn.dstrbn_task_nbr
, inv_dstrbn.dstrbn_expenditure_typ
, inv.invc_amt
, inv.invc_crncy_exch_rate
, inv.invc_crncy_cd
, inv.invc_dt
, po_head.po_hdr_agent_full_nm
, po_head.po_hdr_creation_date
, chk.check_crncy_cd
, chk.check_dt
, chk.check_amt
, po_head.PO_HDR_PO_SHPMT_AMT
, sched.sched_remaining_amt
, sched.sched_due_dt
, po_head.PO_HDR_CLOSED_CD
, paymt.paymt_crncy_exch_rate
, paymt.PAYMT_INVC_PAYMT_AMT
, SCHED.SCHED_PAYMT_SCHED_AMT
, inv_dstrbn.dstrbn_lgcy_proj_nbr



Explain plan:

We got the below explain for this query.


Plan Table
----------------------------------------------------------------------------------------------------
| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT | | 3 | 1K| 4760 | | |
| SORT GROUP BY | | 3 | 1K| 4760 | | |
| HASH JOIN | | 3 | 1K| 4756 | | |
| NESTED LOOPS | | 34 | 10K| 1210 | | |
| NESTED LOOPS | | 1 | 304 | 1209 | | |
| NESTED LOOPS | | 1 | 250 | 1208 | | |
| NESTED LOOPS | | 1 | 212 | 1207 | | |
| NESTED LOOPS | | 1 | 139 | 1206 | | |
| HASH JOIN | | 1 | 110 | 1205 | | |
| TABLE ACCESS FULL|KAT_INVC_PAYMT_DIM | 906K| 19M| 331 | | |
| HASH JOIN | | 1M| 91M| 874 | | |
| TABLE ACCESS FUL|KAT_CHECK_DIM | 832K| 24M| 650 | | |
| HASH JOIN | | 1M| 58M| 224 | | |
| TABLE ACCESS FU|KAT_BTCH_DIM | 53K| 1M| 29 | | |
| TABLE ACCESS FU|KAT_CHECK_FACT | 1M| 35M| 195 | | |
| TABLE ACCESS BY IN|KAT_INVC_PAYMT_SCH | 1 | 29 | 3 | | |
| INDEX RANGE SCAN |INVC_PAY_SCHD_DIM_I | 1 | | 2 | | |
| TABLE ACCESS BY IND|KAT_INVC_DIM | 1 | 73 | 3 | | |
| INDEX RANGE SCAN |KAT_INVC_DIM_IDX2 | 1 | | 2 | | |

Plan Table
----------------------------------------------------------------------------------------------------
| TABLE ACCESS BY INDE|KAT_DIM | 1 | 38 | 3 | | |
| INDEX RANGE SCAN |KAT_DIM_IDX3 | 1 | | 2 | | |
| TABLE ACCESS BY INDEX|KAT_PO_DIM | 1 | 54 | 3 | | |
| INDEX RANGE SCAN |KAT_PO_DIM_IDX5 | 1 | | 2 | | |
| INDEX RANGE SCAN |KAT_FACT_IDX9 | 32 | 480 | 3 | | |
| TABLE ACCESS FULL |KAT_INVC_DSTRBN_DI | 236K| 7M| 3546 | | |
------------------------------------------------------------------------------------------------------


The KAT_INVC_DSTRBN_DIM table was accessed with a full table scan, and at the same time its cost was very high (cost=3546), so we tried to avoid that.

We therefore created an index on that table:

CREATE INDEX KAT_DSTRBN_DSTRBN_KEY_idx ON KAT_INVC_DSTRBN_DIM (DSTRBN_KAT_INVC_DSTRBN_KEY)

After that, the cost was reduced and the query executed without issue; please refer to the explain plan below. It took less than 4 GB of temp tablespace.

Plan Table
--------------------------------------------------------------------------------------------------
| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT | | 3 | 1K| 1227 | | |
| SORT GROUP BY | | 3 | 1K| 1227 | | |
| TABLE ACCESS BY INDEX RO|KAT_INVC_DSTRBN_DI | 1 | 34 | 3 | | |
| NESTED LOOPS | | 3 | 1K| 1223 | | |
| NESTED LOOPS | | 34 | 10K| 1210 | | |
| NESTED LOOPS | | 1 | 304 | 1209 | | |
| NESTED LOOPS | | 1 | 250 | 1208 | | |
| NESTED LOOPS | | 1 | 212 | 1207 | | |
| NESTED LOOPS | | 1 | 139 | 1206 | | |
| HASH JOIN | | 1 | 110 | 1205 | | |
| TABLE ACCESS FUL|KAT_INVC_PAYMT_DIM | 906K| 19M| 331 | | |
| HASH JOIN | | 1M| 91M| 874 | | |
| TABLE ACCESS FU|KAT_CHECK_DIM | 832K| 24M| 650 | | |
| HASH JOIN | | 1M| 58M| 224 | | |
| TABLE ACCESS F|KAT_BTCH_DIM | 53K| 1M| 29 | | |
| TABLE ACCESS F|KAT_CHECK_FACT | 1M| 35M| 195 | | |
| TABLE ACCESS BY I|KAT_INVC_PAYMT_SCH | 1 | 29 | 3 | | |
| INDEX RANGE SCAN|INVC_PAY_SCHD_DIM_I | 1 | | 2 | | |
| TABLE ACCESS BY IN|KAT_INVC_DIM | 1 | 73 | 3 | | |

Plan Table
--------------------------------------------------------------------------------------------------
| INDEX RANGE SCAN |KAT_INVC_DIM_IDX2 | 1 | | 2 | | |
| TABLE ACCESS BY IND|KAT_DIM | 1 | 38 | 3 | | |
| INDEX RANGE SCAN |KAT_DIM_IDX3 | 1 | | 2 | | |
| TABLE ACCESS BY INDE|KAT_PO_DIM | 1 | 54 | 3 | | |
| INDEX RANGE SCAN |KAT_PO_DIM_IDX5 | 1 | | 2 | | |
| INDEX RANGE SCAN |KAT_FACT_IDX9 | 32 | 480 | 3 | | |
| INDEX RANGE SCAN |KAT_DSTRBN_DSTRBN_ | 1 | | 2 | | |
------------------------------------------------------------------------------------------------------


My Question:

We avoided the FULL table scan by changing it to an index scan. How does that reduce the temp tablespace usage?

Could you guide me on this one? Could you explain how the temp tablespace usage got reduced?

Regards,
Karthi

My Question

Karthi, March 23, 2010 - 1:23 pm UTC

Hi Tom,

This is a continuation of my post above.

I have studied the architecture further, but I still have not found the answer to my question.

My Analysis and Doubts:

Whatever a query reads from the data files goes to the database buffer cache; it does not go directly into the temp tablespace. Correct me if I am wrong.

I found the comment below on the net:

Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables.

SQL operations that might require disk sorts include: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.

My Question:

The data travels from the database buffer cache to the temporary tablespace for sorting. The query below has only a GROUP BY, and because of that it uses the temp tablespace. I removed the GROUP BY and checked the temp tablespace usage again: it then used no temp tablespace at all.

My assumption is that the sort area should hold only the columns in the SELECT clause (not all the columns in the table). Take the scenario below.

SELECT c1,c2,c3 from T
Order by c2,c3;

The sort area should use only c2 and c3 and sort them; I am thinking that it will use only those two columns.

If my assumption above is true, there is no relationship to the index or full table scan, because the sort area would hold the same number of columns either way.

Please provide your comments on this.

Regards,
Karthi



Tom Kyte
March 24, 2010 - 3:47 am UTC

Whatever a query reads from the data files goes to the database buffer cache; it does not go directly into the temp tablespace. Correct me if I am wrong.

well, this is a bit confusing. If a query is reading a datafile, of course it will not go to a tempfile, since it is (according to you) reading a datafile. And TYPICALLY - but not always - it will go to the buffer cache to look for the block. If the query decided to employ direct path reads, it will NOT go to the buffer cache but will just read the datafile directly into its own PGA.


The data will travel from the database buffer cache to the temporary tablespace for sorting.

maybe yes, maybe no. If the data was read using conventional IO (db file scattered read, db file sequential read) - it would go from buffer cache to PGA and from PGA to temp if it exceeds the workarea size allocated in the PGA for sorting.

If the data was read using direct path reads, it will go from disk to PGA and from PGA to temp if it exceeds the workarea size...
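As a side note, whether a workarea fit in the PGA or spilled to temp can be observed from the statistics. A hedged sketch (the sql_id literal is a placeholder, not from this thread):

```sql
-- Instance-wide: how often workareas completed in memory (optimal),
-- with one pass through temp, or with multiple passes.
select name, value
  from v$sysstat
 where name like 'workarea executions%';

-- Per cursor: V$SQL_WORKAREA shows each sort/hash operation's memory use
-- and how often it ran optimal / one-pass / multi-pass.
select operation_type, last_memory_used,
       optimal_executions, onepass_executions, multipasses_executions
  from v$sql_workarea
 where sql_id = '0123456789abc';  -- placeholder sql_id
```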


The sort area should use only c2 and c3 and sort them. I am thinking that it will use only those 2 columns

you are probably wrong. First - if there was an index on (c2,c3,c1) - it would not necessarily need to sort at all, it would just read it from the index. If there was an index on (c2,c3) - the same could be true, it would read the index and then the table.

Second, if you need c1,c2,c3 and order by c2,c3 and there is no index being used, it would be a waste of time (typically) to read just c2, c3 - sort them - and then GO BACK to the table to pick up c1 again. We would get c1,c2,c3 - order that by c2,c3 and then return the data.
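Tom's first point can be sketched with the hypothetical table t(c1,c2,c3) from the question (assuming c2 is declared NOT NULL, so the index can be used to cover every row):

```sql
create index t_sort_idx on t (c2, c3, c1);

-- If the optimizer reads T_SORT_IDX (e.g. via an INDEX FULL SCAN), the plan
-- contains no SORT ORDER BY step at all: rows come back already ordered,
-- so no PGA workarea or temp space is needed for the sort.
select c1, c2, c3
  from t
 order by c2, c3;
```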


Created an index on the join. Please explain how it reduced the temp tablespace usage

Karthi, March 25, 2010 - 1:27 pm UTC

Thanks a lot, Tom, for your reply!

I posted my question with the query on March 9, 2010 - 2pm Central time zone, with the subject "Unable to extend temp segment issue & it Consumed 25 GB".

That query did not have an ORDER BY clause; it had a GROUP BY. I assume that is why it used the temp tablespace. One join used a full table scan, so we created an index to reduce the cost. After that it used much less temp tablespace. The GROUP BY clause has the same 35 columns in both cases (full table scan and index scan).

Note: I have posted the full query and explain plan in my post above. Please refer to it.


My Question:

We avoided the FULL table scan by changing it to an index scan. How does that reduce the temp tablespace usage? The GROUP BY still has the same number of columns.

Could you guide me on this one? Could you explain how the temp tablespace usage got reduced?



Regards,
Karthi

smitha, May 22, 2010 - 1:19 am UTC

When I ran this query to create a table, it gave me an "unable to extend temp segment" error.

There is already 54 GB of free space in the temp tablespace.

The first table in the FROM clause has 12 million records; the rest have counts in the lakhs (hundreds of thousands).
On which table do I need to create an index?


[The same SELECT list, FROM list, WHERE clause and 35-column GROUP BY as the query posted in full in the comments above.]



Explain plan:

We got the below explain for this query.


Plan Table
----------------------------------------------------------------------------------------------------


| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT | | 3 | 1K| 4760 | | |
| SORT GROUP BY | | 3 | 1K| 4760 | | |
| HASH JOIN | | 3 | 1K| 14756 | | |
| NESTED LOOPS | | 34 | 10K| 1210 | | |
| NESTED LOOPS | | 1 | 304 | 1209 | | |
| NESTED LOOPS | | 1 | 250 | 1208 | | |
| NESTED LOOPS | | 1 | 212 | 1207 | | |
| NESTED LOOPS | | 1 | 139 | 1206 | | |
| HASH JOIN | | 1 | 110 | 1205 | | |
| TABLE ACCESS FULL|KAT_INVC_PAYMT_DIM | 906K| 19M| 331 | | |
| HASH JOIN | | 1M| 91M| 874 | | |
| TABLE ACCESS FUL|KAT_FACT | 832M| 24M| 650 | | |
| HASH JOIN | | 1M| 58M| 224 | | |
| TABLE ACCESS FU|KAT_BTCH_DIM | 53K| 1M| 29 | | |
| TABLE ACCESS FU|KAT_CHECK_FACT | 15| 35M| 195 | | |
| TABLE ACCESS BY IN|KAT_INVC_PAYMT_SCH | 1 | 29 | 3 | | |
| INDEX RANGE SCAN |INVC_PAY_SCHD_DIM_I | 1 | | 2 | | |
| TABLE ACCESS BY IND|KAT_INVC_DIM | 1 | 73 | 3 | | |
| INDEX RANGE SCAN |KAT_INVC_DIM_IDX2 | 1 | | 2 | | |

Plan Table
----------------------------------------------------------------------------------------------------


| TABLE ACCESS BY INDE|KAT_DIM | 1 | 38 | 3 | | |
| INDEX RANGE SCAN |KAT_DIM_IDX3 | 1 | | 2 | | |
| TABLE ACCESS BY INDEX|KAT_PO_DIM | 1 | 54 | 3 | | |
| INDEX RANGE SCAN |KAT_PO_DIM_IDX5 | 1 | | 2 | | |
| INDEX RANGE SCAN |KAT_FACT_IDX9 | 32 | 480 | 3 | | |
| TABLE ACCESS FULL |KAT_INVC_DSTRBN_DI | 236K| 7M| 3546 | | |
----------------------------------------------------------------------------------------------------

--



Temp Space failure because of parallel hint

A Mitra, June 16, 2010 - 12:07 pm UTC

Hi Tom,

I have a query which throws an ORA-01652 error if I run it with a PARALLEL hint. But the same query succeeds if I remove the parallel hint.

Select /*+ PARALLEL(a, 4) */ *
from TabA a, TabB b
where a.col1 = b.col1(+)

TabA => 120,000,000 records
TabB => 180,000 records

Explain plan shows a full table scan on both tables.

My question is: can you please explain why the parallel hint causes the temp space failure? My understanding is that the query will require the same total temp space in either case (with or without the parallel hint).
Tom Kyte
June 22, 2010 - 11:41 am UTC

each parallel execution server will need its own temp space, and the coordinator could need its own. Each parallel execution server might need as much (or more, since each one gets less RAM overall to use) temp as the single thread did.

For a query like that - we would pick the smaller of the two tables and full scan/hash it. Each parallel execution server might have to spill that to temp (whereas in single threaded mode, just one would)

It is very very very easy for parallel query to require considerably more of every resource than a single threaded query.
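To see this while a parallel query runs, each slave's temp consumption can be watched separately. A sketch, assuming a release with V$TEMPSEG_USAGE (it was V$SORT_USAGE on older versions) and an 8K block size:

```sql
select p.qcsid,                                    -- the coordinator's SID
       s.sid,                                      -- each PX server's session
       u.segtype,
       round(u.blocks * 8192 / 1024 / 1024) mb     -- assumes 8K blocks
  from v$tempseg_usage u, v$session s, v$px_session p
 where u.session_addr = s.saddr
   and s.sid = p.sid;
```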

I have a question about LOB_DATA

Pradeep Kumar, August 25, 2010 - 4:03 am UTC

Hi Tom,
I have a question here: LOB_DATA is occupying a lot of temp tablespace and it is not releasing it. Eventually I get an error that the temp tablespace is full.
What should I do to overcome this issue?

SQL_HASH SEGTYPE
3422971926 LOB_DATA
Tom Kyte
August 26, 2010 - 11:59 am UTC

you may well have to increase temp - if you have lots of users using lots of temporary lobs, you may well need more temp space.

If you feel you don't have lots of users using lots of lobs - then you might have a bug in your developed code. You might have a plsql routine returning temporary lobs that the client application is receiving but not freeing. You'd need to dig into the application code to remedy that.
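A sketch of both sides of that diagnosis - finding sessions that hold un-freed temporary LOBs, and the PL/SQL pattern that avoids the leak:

```sql
-- Which sessions currently hold temporary LOBs?
select s.sid, s.username, l.cache_lobs, l.nocache_lobs
  from v$temporary_lobs l, v$session s
 where l.sid = s.sid;

-- A temporary LOB occupies temp space until it is explicitly freed (or the
-- session ends); forgetting FREETEMPORARY is the classic leak.
declare
  l clob;
begin
  dbms_lob.createtemporary( l, true );   -- allocates space in TEMP
  dbms_lob.writeappend( l, 5, 'hello' );
  dbms_lob.freetemporary( l );           -- releases the temp space
end;
/
```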

Temporary to Permanent

Arindam, September 18, 2010 - 10:34 pm UTC

Hi Tom,

Can a temporary tablespace be converted into a permanent one, and vice versa? If yes, how, and under what conditions?
Tom Kyte
September 20, 2010 - 1:53 pm UTC

no, a true temporary (create TEMPORARY tablespace....) tablespace cannot be.

Global temporary Tables

Rajeshwaran Jeyabal, November 25, 2010 - 8:10 am UTC

rajesh@10GR2> select (select decode(extent_management,'LOCAL','*',' ') ||
  2                 decode(segment_space_management,'AUTO','a ','m ')
  3            from dba_tablespaces where tablespace_name = b.tablespace_name) || nvl(b.tablespace_name,
  4               nvl(a.tablespace_name,'UNKOWN')) name,
  5         kbytes_alloc kbytes,
  6         kbytes_alloc-nvl(kbytes_free,0) used,
  7         nvl(kbytes_free,0) free,
  8         ((kbytes_alloc-nvl(kbytes_free,0))/
  9                            kbytes_alloc)*100 pct_used,
 10         nvl(largest,0) largest,
 11         nvl(kbytes_max,kbytes_alloc) Max_Size,
 12         decode( kbytes_max, 0, 0, (kbytes_alloc/kbytes_max)*100) pct_max_used
 13  from ( select sum(bytes)/1024 Kbytes_free,
 14                max(bytes)/1024 largest,
 15                tablespace_name
 16         from  sys.dba_free_space
 17         group by tablespace_name ) a,
 18       ( select sum(bytes)/1024 Kbytes_alloc,
 19                sum(maxbytes)/1024 Kbytes_max,
 20                tablespace_name
 21         from sys.dba_data_files
 22        where 1 = 0
 23         group by tablespace_name
 24         union all
 25        select sum(bytes)/1024 Kbytes_alloc,
 26                sum(maxbytes)/1024 Kbytes_max,
 27                tablespace_name
 28         from sys.dba_temp_files
 29         group by tablespace_name  )b
 30  where a.tablespace_name (+) = b.tablespace_name
 31  /

NAME                               KBYTES       USED       FREE   PCT_USED    LARGEST   MAX_SIZE PCT_MAX_USED
------------------------------ ---------- ---------- ---------- ---------- ---------- ---------- ------------
*m TEMP                            601088     601088          0        100          0   33554416   1.79138269

Elapsed: 00:00:01.09
rajesh@10GR2> create global temporary table t(
  2     x ,
  3     y ,
  4     z  )
  5     on commit preserve rows as
  6  select level,rpad('*',100,'*'),sysdate
  7  from dual
  8  connect by level <=100000;

Table created.

Elapsed: 00:00:02.28
rajesh@10GR2>
rajesh@10GR2> select count(*) from t;

  COUNT(*)
----------
    100000

Elapsed: 00:00:00.37
rajesh@10GR2>
rajesh@10GR2> select segment_name, segment_type, tablespace_name, blocks
  2  from user_segments
  3  where segment_name ='T'
  4  /

no rows selected

Elapsed: 00:00:00.06
rajesh@10GR2>
rajesh@10GR2> select (select decode(extent_management,'LOCAL','*',' ') ||
  2                 decode(segment_space_management,'AUTO','a ','m ')
  3            from dba_tablespaces where tablespace_name = b.tablespace_name) || nvl(b.tablespace_name,
  4               nvl(a.tablespace_name,'UNKOWN')) name,
  5         kbytes_alloc kbytes,
  6         kbytes_alloc-nvl(kbytes_free,0) used,
  7         nvl(kbytes_free,0) free,
  8         ((kbytes_alloc-nvl(kbytes_free,0))/
  9                            kbytes_alloc)*100 pct_used,
 10         nvl(largest,0) largest,
 11         nvl(kbytes_max,kbytes_alloc) Max_Size,
 12         decode( kbytes_max, 0, 0, (kbytes_alloc/kbytes_max)*100) pct_max_used
 13  from ( select sum(bytes)/1024 Kbytes_free,
 14                max(bytes)/1024 largest,
 15                tablespace_name
 16         from  sys.dba_free_space
 17         group by tablespace_name ) a,
 18       ( select sum(bytes)/1024 Kbytes_alloc,
 19                sum(maxbytes)/1024 Kbytes_max,
 20                tablespace_name
 21         from sys.dba_data_files
 22        where 1 = 0
 23         group by tablespace_name
 24         union all
 25        select sum(bytes)/1024 Kbytes_alloc,
 26                sum(maxbytes)/1024 Kbytes_max,
 27                tablespace_name
 28         from sys.dba_temp_files
 29         group by tablespace_name  )b
 30  where a.tablespace_name (+) = b.tablespace_name
 31  /

NAME                               KBYTES       USED       FREE   PCT_USED    LARGEST   MAX_SIZE PCT_MAX_USED
------------------------------ ---------- ---------- ---------- ---------- ---------- ---------- ------------
*m TEMP                            601088     601088          0        100          0   33554416   1.79138269

Elapsed: 00:00:00.20
rajesh@10GR2>


Tom:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref769

<quote>
Temporary tables use temporary segments. Unlike permanent tables, temporary tables and their indexes do not automatically allocate a segment when they are created
</quote>

Questions

1) Why does the TEMP tablespace usage not increase after loading data into table 'T'?

2) Where can I find the segments allocated to this global temporary table? I am not able to find them in user_segments.


Tom Kyte
November 25, 2010 - 8:49 am UTC

1) because TEMP is allocated once (say six months ago) and reused. The temporary tablespace is showing as "601088 kbytes allocated", of which 100% is "used" (meaning owned by temp).

As applications need temp space, they get it from this "used" (allocated, formatted and managed) space. A temporary segment in TEMP does not shrink - if you shut down and restarted - and didn't let ANYONE log in - guess how much of temp would (still) be used - all of it.


2) V$TEMPSEG_USAGE
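For example - a sketch mapping temp segment usage back to sessions (V$TEMPSEG_USAGE is the newer name for V$SORT_USAGE; use the latter on older releases):

```sql
select s.username, s.sid, s.serial#,
       u.segtype,        -- e.g. SORT, HASH, DATA, INDEX, LOB_DATA
       u.tablespace, u.extents, u.blocks
  from v$tempseg_usage u, v$session s
 where u.session_addr = s.saddr;
```

Rows with SEGTYPE = 'DATA' or 'INDEX' are where a global temporary table's segments show up - they live in TEMP, which is why user_segments shows nothing.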

question on TEMP tablespace

Ravi B, December 07, 2010 - 11:37 pm UTC

Hi Tom,

We are getting the following error while running a query:

ORA-01652: unable to extend temp segment by 8192 in tablespace <TABLESPACE_NAME>

But the TEMP tablespace for the user which is running the query is not <TABLESPACE_NAME> but <TABLESPACE_NAME_TMP>

I verified by:

select temporary_tablespace
from dba_users
where username =<USERNAME>;

<TABLESPACE_NAME> is the default tablespace for the user. We got rid of the error after we added a datafile to the default tablespace <TABLESPACE_NAME>.

Could you please explain how this is possible?

Thanks!
Tom Kyte
December 08, 2010 - 9:35 am UTC

give more information about the environment surrounding this query.

Was it for example running in a stored procedure - if so, who owns it.

did they use an alter session to set the current_schema

anything else - tell us the environment running the query.

Temp segment in non-temporary tablespace

Ravi B, December 16, 2010 - 12:08 am UTC

Tom,

I don't have much access to our client's database, but it seems there can be TEMP segments in a non-temporary tablespace. I learn new things every day :)

http://hemantoracledba.blogspot.com/2008/05/temporary-segments-in-dataindex.html


Tom Kyte
December 16, 2010 - 2:13 am UTC

yeah, but they are not really "temp" segments; they are real segments that are currently marked as temporary in case the command fails.

when you build an index - the index will initially be all "temp" - and when the index is built - the temp segment is converted into a permanent one. We do this so that if the index create fails due to an instance failure - SMON will come along and automagically clean it up for us.


Now, since you wrote:

We are getting the following error while running a query:


the operative word being "QUERY", I assumed you meant what you said and that you were not doing DDL.

I can only comment on that which is provided to me.
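The conversion Tom describes can be watched from another session while a large index build is running - a sketch (the in-flight segments appear under system-generated names such as '12.345'):

```sql
-- During CREATE INDEX, the half-built segment sits in the *permanent*
-- target tablespace with SEGMENT_TYPE = 'TEMPORARY'; it is converted to
-- type INDEX only when the build completes successfully.
select owner, segment_name, tablespace_name, segment_type, bytes
  from dba_segments
 where segment_type = 'TEMPORARY';
```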

TABLESPACE

dinaballav, June 02, 2011 - 6:15 am UTC

Please guide me on resizing the temporary tablespace.

Please clarify how my temp tablespace increased to 21 GB - what is the reason behind it?
Tom Kyte
June 02, 2011 - 9:11 am UTC

it grew to 21gb because:

a) your DBA configured it so that it could
b) your end users ran applications that needed lots of temp, probably at the same time, causing it to grow.

The reason it grew to that size is really rather quite straightforward (and shouldn't really be surprising?)


Easiest way to "shrink" it on a live database is:

a) create a new small temp (and this time, if you want, limit the size it can grow to - you'll get runtime errors instead of temp just growing)

b) alter the database to make this the new default temporary tablespace

c) alter any users that point manually to the existing temporary tablespace to this one

d) wait for existing activity in the old temporary tablespace to cease (query the v$ views)

e) drop the old.
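A sketch of those steps (the tablespace names, file path and sizes are made up for illustration):

```sql
-- a) new, smaller temp with a growth cap
create temporary tablespace temp2
  tempfile '/u01/oradata/db/temp2_01.dbf' size 1g
  autoextend on next 100m maxsize 8g;

-- b) make it the database default
alter database default temporary tablespace temp2;

-- c) repoint any users explicitly assigned to the old tablespace
alter user scott temporary tablespace temp2;

-- d) wait until V$TEMPSEG_USAGE shows no remaining activity in the old
--    tablespace, then:

-- e) drop the old one
drop tablespace temp including contents and datafiles;
```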




HK, September 13, 2011 - 8:21 am UTC

Hi,
You wrote that you used 8 tempfiles in the tcp test. Is it usually better to allocate several tempfiles instead of one? Of course there might be a file size limit in the OS that forces this. Also, if there are many disks for tempfiles, then it's obviously better to spread the IO. But is there some other reason? Somebody mentioned something about file locks.

Is the 1 MB extent size still applicable for temp tablespaces? Or have things changed in 11g, now that servers have more memory installed (larger sort_area_size / pga_aggregate_target settings)?

Tom Kyte
September 14, 2011 - 3:38 pm UTC

what is "tcp test"

where did I say that? big page - no context - I cannot figure out what you are referring to...


Is it usually better to
allocate several tempfiles instead of one.


On some OS, using really big files by many processes can cause contention at the file system level.

On some OS, you have limits on file sizes.

On some systems, you are not using striping - you are just using good old fashioned disk.


In those cases - yes, you might be tempted to create more, smaller files - on different disks if you are in the last case.


1m is typically sufficient and is the default:

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17120/tspaces002.htm#sthref1534


Oracle Temp Tablespace

Chua, September 29, 2011 - 3:47 am UTC

Hi there,

I'm new to Oracle, just wondering if Oracle would free up the temp tablespace after a restart of the database.

thanks,
Chua
Tom Kyte
September 29, 2011 - 7:05 am UTC

nope, we allocate it once and then just hand it out as things need it.
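One way to see the difference between "allocated" and "in use" - a sketch, assuming you have privileges on the v$ views:

```sql
-- extents allocated to the sort segment: stays high once grown,
-- even across quiet periods (only an instance restart recreates it)
select tablespace_name, total_blocks, used_blocks, free_blocks
  from v$sort_segment;

-- what is actually in use right now, and by whom
select username, segtype, blocks
  from v$tempseg_usage;
```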

Index with Direct Path - Temp Usage

kpanchan, October 22, 2011 - 10:27 pm UTC

Tom
In your following response, you mentioned that Oracle stores the index of newly inserted data in temp.

What happens when INSERT Table indexes were marked UNUSABLE before data load?

We are using direct path load, marking indexes UNUSABLE, with 150 million records of AVG_ROW_LEN=1098 and 250GB of temp space. Still our job fails.

Thanks for your time to answer this comment.

Everyday, I learn something new from your site.
=================================================

Followup March 4, 2010 - 9am Central time zone:

the table you are inserting into - how many indexes does IT have on it that we need to maintain?

eg: the index maintenance happens during the insert phase, not the select.

when you direct path insert, we store the index of the newly inserted data in temp and then merge that in bulk with the existing indexes afterwards.


Tom Kyte
October 24, 2011 - 4:45 am UTC

What happens when INSERT Table indexes were marked UNUSABLE before data load?


then we do not maintain them during the load - and they'd have to be rebuilt from scratch at the end of the process.


so, why does your job fail? my car sometimes won't start - can you tell me why?

How to restrict temporary tablespace usage?

Albert Nelson, December 05, 2011 - 10:34 pm UTC

Hi Tom,

What is the correct way to restrict abnormal usage of temporary tablespace by a query?

Since we can't have quota for temporary tablespace, even one (poorly written) query can consume ALL the temporary tablespace. What is the correct way to prevent this?

Regards,
Albert Nelson A.
Tom Kyte
December 06, 2011 - 11:19 am UTC

You could use the resource manager or a profile to limit overall IO's performed, but there is no direct way to limit just "temp" usage.
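As a sketch of the profile approach (the profile name, user name, and limit value here are assumptions; profile resource limits are only enforced when resource_limit is true):

```sql
-- cap logical I/Os per call; a runaway query then fails with
-- ORA-02395 (exceeded call limit on IO usage) instead of
-- silently filling temp
create profile limited_io limit
  logical_reads_per_call 1000000;

alter user report_user profile limited_io;

-- enable enforcement of kernel resource limits
alter system set resource_limit = true;
```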

Thanks

Albert Nelson, December 06, 2011 - 6:39 pm UTC


FIND 1652 CAUSING SQL

swapnil kambli, January 05, 2012 - 11:32 am UTC

Hi Tom,

Wish you happy new year!

Yesterday I experienced abnormal growth of the temp tablespace; it extended till the file system limit (the tablespace was in autoextend on mode).

Today I am trying to find out the SQL that caused it. I did not have the trigger you described above. The database version is 10.2.0.5 on HPUX 64bit.

I used below sql to narrow down the search

set lines 200
col MODULE for a28
col SQL_TEXT for a80 wrap
select snap_id, DBA_HIST_SQLTEXT.SQL_ID, MODULE, SORTS_TOTAL, SORTS_DELTA, SQL_TEXT
  from DBA_HIST_SQLSTAT, DBA_HIST_SQLTEXT
 where snap_id between 1763 and 1764
   and DBA_HIST_SQLSTAT.SQL_ID = DBA_HIST_SQLTEXT.SQL_ID
   and SORTS_DELTA > 1000
 order by 5;

Could you please confirm if this is correct?


Tom Kyte
January 05, 2012 - 12:53 pm UTC

the sql that hit the 1652 might not be the sql that caused the growth. The 1652 could have been hit by a very small query - competing with lots of other very small queries - or a few medium queries.

there might not be "a" sql that caused it, there might be hundreds.

this might be of interest to you for the future:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:374218170986#6821401045030


Your query would show you how many sorts - not their size, just the raw number of them - a given query did. "select * from dual order by 1" will do one sort - so will "select * from all_objects order by 1,2,3,4,5,6,7,8,9,10". One will need more temp than the other.

we do not retain the size of the sorts
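For what it's worth, in 11.2 and later (not the poster's 10.2) the ASH views do sample per-session temp usage in a TEMP_SPACE_ALLOCATED column, so a query along these lines - reusing the snap range above - can rank SQL by peak temp:

```sql
-- assumes 11.2+ where TEMP_SPACE_ALLOCATED exists in ASH
select sql_id,
       max(temp_space_allocated) / 1024 / 1024 as max_temp_mb
  from dba_hist_active_sess_history
 where snap_id between 1763 and 1764
   and temp_space_allocated is not null
 group by sql_id
 order by max_temp_mb desc;
```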

TEMP space error

Tony, September 04, 2012 - 7:51 am UTC

Hi Tom,

I have a query which joins 3 tables - table1 with .7M records, table2 with .7 M records and table3 with 3.4 M rows. The output of the query is some 3.4 M rows. There are no indexes on the linking columns. When the query is run , the below error is thrown ( ORA-01652: unable to extend temp segment by 128 in tablespace TEMP ). I had to extend the temp table space from 5 gb to 64 gb to remove this error. There are seven other queries also which run in parallel but the size of the tables involved and the number of records output are less.

a) Is it OK that the above query (with the size of the tables involved and number of rows output) needs a temp space size of over 32 GB, or does this suggest a problem with the query itself?

b) Does the lack of indexes on the linking columns cause this error?

Thank you
Tom Kyte
September 10, 2012 - 7:23 pm UTC

.7m records could be anywhere from .7MB of data to infinity.

record counts are useless. how big are these tables.


b) if you are joining most of the rows in three tables together you better not be using a single index.

Temp table space issue

Tony, September 13, 2012 - 2:29 pm UTC

Hi Tom,

Regarding point b) above,
for the below query

select A.f10,B.f11,C.f12,D.f13
from tab1 A,tab2 B,tab3 C, tab4 D
where A.f1=B.f2
and B.f3 = C.f4
and C.f5 = D.f6
and D.f7 = 'Y'

Suppose all the tables contains 100K records.

a) If the above query returns a large number of rows - say 75,000 rows - should there be indexes on all the linking columns f1,f2,f3,f4,f5,f6?

b) If the above query returns a small number of rows - say 3 rows - should there be indexes on all the linking columns f1,f2,f3,f4,f5,f6?


c) My understanding was that all linking columns in a query should have indexes. Is this correct?

Thank you
Tom Kyte
September 14, 2012 - 6:49 pm UTC

a) large number of rows => indexes are deadly, you do not want them. this query doesn't look like it should use any indexes, it looks like four full scans and three hash joins to me.

b) depends, is the query driven table D? if so, probably. If not, probably not.

c) not even a tiny bit. It depends on the rows you need to process.
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154

What goes into TEMP Tablespace (all columns or only column used in ranking functions)

Sameer Kumar, October 07, 2012 - 1:35 am UTC

Hi,

If I have written a query as below:

select col1, col2, col3, col4 ... coln, dense_rank() over(order by col1) from a_join_of_multiple_tables

What will be going into temp tablespace? Will be all the columns col1, col2...coln or just col1?

If it is all the columns (which goes to temp and hence takes a lot of temp space) then will the below query help me fix the temp space usage?

select t1.col1, t1.col2, t1.col3, t1.col4... t1.coln, t2.rn
from (select col1, col2, col3, col4 ... coln from join_of_tables) t1
join
(select dense_rank() over(order by col1) rn, col1 from join_of_tables) t2 on t1.col1 = t2.col1

Tom Kyte
October 09, 2012 - 12:35 pm UTC

all of the data will have to go there.


the second query you have would likely take even more temp space, as we'll probably do a hash join - and if the first result would spill to disk, the second likely would as well.


A reader, October 10, 2012 - 3:31 am UTC

Hi Tom,

with respect to above question

is it not possible for the optimizer to compute rank() using the rowid (of the row col1 comes from) together with col1,
and then join back to the query result based on rowid, to avoid putting every column of the query into temp?

Thanks
Tom Kyte
October 10, 2012 - 7:48 am UTC

look at the plan, it'll almost surely be a hash join instead of millions of rowid lookups (single block scattered IO's all over the place - it won't do that, that would be horribly inefficient)

free the temp tablespace

Mohan Soundararajan, March 22, 2013 - 7:11 am UTC

We have a scenario where, in PL/SQL, we are supposed to execute hundreds of 3rd party queries one by one. These queries individually use a lot of temporary tablespace. After a few queries, we run out of temporary tablespace. Increasing the temporary tablespace would not help us, as it would only hold out for a few more queries.

Is there a way to free up the temporary tablespace?

Below are the constraints

* We fetch the results of each query immediately after the execution and hence we do not need any cursors or anything related to this query

* We do not own the query, so fine tuning the query is not possible

* All the (100s of queries) are to be executed within the same session. Hence closing the session to free up space cannot be done.

* We are unable to use dbms_lob.freetemporary procedure, as we do not have (or do not know how to get) handle to any of the data in temporary table space.


Tom Kyte
March 25, 2013 - 2:05 pm UTC

our sort spaces are automagically freed upon the closing of a cursor. for example:

ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> create or replace procedure p
  2  as
  3  
  4          cursor c1 is select * from all_objects  order by 1,2,3,4,5,6,7,8,9;
  5          l_rec  c1%rowtype;
  6  
  7  begin
  8          execute immediate 'alter session set workarea_size_policy = manual';
  9          execute immediate 'alter session set sort_area_size = 65536';
 10          open c1;
 11          fetch c1 into l_rec;
 12          dbms_output.put_line( 'before closing' );
 13          for x in ( select * from v$tempseg_usage )
 14          loop
 15                  dbms_output.put_line( x.username || ', ' || x.segtype );
 16          end loop;
 17          close c1;
 18          dbms_output.put_line( 'after closing' );
 19          for x in ( select * from v$tempseg_usage )
 20          loop
 21                  dbms_output.put_line( x.username || ', ' || x.segtype );
 22          end loop;
 23  end;
 24  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec p
before closing
OPS$TKYTE, SORT
after closing

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select username, segtype from v$tempseg_usage;

no rows selected

ops$tkyte%ORA11GR2> exec p
before closing
OPS$TKYTE, SORT
after closing

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select username, segtype from v$tempseg_usage;

no rows selected




so, we can see that the query uses sort space with the first dumping of v$tempseg_usage but immediately upon closing - it is freed up.

So either


the 3rd party application is leaking cursors, they have a bug.

the 3rd party application is leaking lobs, they have a bug.

or the 3rd party application isn't to blame, and there is something else going on.


can you dump out the RELEVANT columns (who, seg type, size for example) from v$tempseg_usage as your job is progressing, to see what sort of temp data is being "leaked"?
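A minimal sketch of such a dump (the 8192 block size is an assumption; substitute your temp tablespace's block size):

```sql
-- run periodically while the job progresses; SORT segments should
-- vanish as cursors close, while lingering LOB_DATA / LOB_INDEX
-- rows suggest leaked temporary lobs
select username, segtype,
       blocks * 8192 / 1024 / 1024 as mb
  from v$tempseg_usage
 order by blocks desc;
```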

Allocation of Temporary Tablespace

Karan Chadha, April 01, 2013 - 11:20 am UTC

Hi Tom,

I am a developer and I have basic level of understanding about the architecture of Oracle.

We have an issue here. The temporary tablespace keeps growing to multiple GBs.

We have figured few queries which are causing this unusually large TEMP tablespace.

While testing those queries, I found that Oracle is allocating TEMP tablespace to various sessions in units of 512MB. That is to say, at first Oracle does not allocate any TEMP until the memory limit set aside for this purpose gets consumed. Then, in our case, Oracle allocates 512MB of TEMP to each session. It remains at 512MB for a while, then grows to 1024MB, and so on.

Is it normal? is there a way to change this setting?

Below is the query which I used to monitor TEMP tablespace while I executed the problem queries ..

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module, TBS.block_size,
S.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) sort_ops
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
S.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;
Tom Kyte
April 22, 2013 - 12:56 pm UTC

multiple gigabytes is great! that is after all what temp is about - when you run out of real ram and you have a big query - it has to go somewhere!!!!

an oltp system might be able to get away with a few hundred megabytes, but any reporting/warehousing is going to be measured in 10s of gigabytes if not terabytes for many people these days.


We do what we can in RAM and then swap to disk. The 512mb is not hard and fast - it could be much smaller, it could be much larger - it really depends on your memory setting (pga aggregate target) and how many active sessions you have...




I do not view these as problem queries in general. However, temp is all about - well - being temp, and if you have large sorting/hashing to do - by many users at the same time - temp is going to grow and be used. that is exactly what it is supposed to do!

Allocation of Temporary Tablespace - 2

Karan Chadha, April 02, 2013 - 3:35 pm UTC

Hi Tom,

I have done some reading and think that I have some clues now regarding my problem, but still I am not able to connect all the dots. Please help. My observations -

Database is 11gR2

1. At a particular time while testing, I noticed that
a. my TEMP tablespace is showing as 0% used
b. Total size of TEMP is 1 GB
c. HWM is also 1 GB and
d. MAX size is also 1 GB.
e. It is a Locally Managed Tablespace
f. AUTO EXTEND is ON
At this time our queries started failing with an Oracle error saying "Unable to extend space in TEMP tablespace." After some investigation, I found that MIN_EXTLEN is 512 MB and Segment Space Management is Manual with uniform 512M extents. Could this be the reason that Oracle, instead of using the available 1 GB of space, tried to extend the tablespace by 512 MB?

Also, my DBA told me that this is happening because Auto EXTEND is ON. His explanation - If Auto Extend is ON then Oracle will not use the Tablespace available below HWM but try to Extend. If Auto Extend is OFF then Oracle will not try to Extend and will force to use the available TEMP tablespace. Is this explanation Correct ???

Finally, I tried to drop and create the TEMP tablespace with AUTOALLOCATE but it failed saying invalid option for TEMPORARY tablespace. Any reason why it is not supported? Or did I miss something?

Thank You.

Yours,
Karan Chadha.
Tom Kyte
April 22, 2013 - 1:42 pm UTC

your DBA has temp confused very very much.


temp is a heap of space we manage. once we allocate it - we never deallocate it - we just use and reuse it. allocating space = expensive. deallocating space = expensive. therefore, we just allocate and then remember "we have this space, it is ours"

if your temp is 1gb and 1gb allocated - it could be totally empty, not being used. it is just allocated, we manage the space therein.


you would use views such as v$tempseg_usage:

http://docs.oracle.com/cd/E11882_01/server.112/e10820/dynviews_3107.htm

which shows you - of the allocated space - how much is actually being used right now and by whom and for what.


the high water mark discussion is a red herring, it doesn't apply. All space allocated to temp can and will be used. You might well have 1gb (or, as I do in my tiny testing system, 3 1/2gb) allocated:

ops$tkyte%ORA11GR2> @free 1
old  30: order by &1
new  30: order by 1

                                                                                                 %
                                                                %                   MaxPoss    Max
Tablespace Name           KBytes         Used         Free   Used      Largest       Kbytes   Used
------------------- ------------ ------------ ------------ ------ ------------ ------------ ------
*a BIG_TABLE             153,600      141,312       12,288   92.0       12,288   33,554,416     .5
*a EXAMPLE               352,256      327,680       24,576   93.0        8,640   33,554,416    1.0
*a SYSAUX              1,158,144    1,027,136      131,008   88.7       13,568   33,554,416    3.5
*a USERS                 451,584       40,064      411,520    8.9      367,616   33,554,416    1.3
*m SYSTEM                903,296      773,056      130,240   85.6      103,616   33,554,416    2.7
*m TMP                 3,446,776    3,446,776            0  100.0            0   33,554,416   10.3
*m UNDOTBS2              666,624       24,832      641,792    3.7      586,752   33,554,416    2.0
                    ------------ ------------ ------------
sum                    7,132,280    5,780,856    1,351,424

7 rows selected.


but none of it might be actually used:

ops$tkyte%ORA11GR2> select * from v$tempseg_usage;

no rows selected

ops$tkyte%ORA11GR2> 



unless someone starts doing something that needs it:

ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable x refcursor
ops$tkyte%ORA11GR2> alter session set workarea_size_policy=manual;

Session altered.

ops$tkyte%ORA11GR2> alter session set sort_area_size = 32765;

Session altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from v$tempseg_usage;

no rows selected

ops$tkyte%ORA11GR2> declare
  2          l_rec all_objects%rowtype;
  3  begin
  4          open :x for select * from all_objects order by 1,2,3,4,5,6,7,8,9;
  5          fetch :x into l_rec;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select * from v$tempseg_usage;

USERNAME                       USER                           SESSION_
------------------------------ ------------------------------ --------
SESSION_NUM SQLADDR     SQLHASH SQL_ID        TABLESPACE
----------- -------- ---------- ------------- -------------------------------
CONTENTS  SEGTYPE     SEGFILE#    SEGBLK#    EXTENTS     BLOCKS   SEGRFNO#
--------- --------- ---------- ---------- ---------- ---------- ----------
OPS$TKYTE                      OPS$TKYTE                      3EF04790
          9 3F9169E0 1029988163 9babjv8yq8ru3 TMP
TEMPORARY SORT             202     429696         17       2176          1




i forced the use of temp there by using sort area size (and a small one) and doing a big sort and keeping the cursor open....

if I close my cursor or end the sort - the space goes away - but TEMP will never ever change in size.


which is, after all, exactly what I wrote the first time... again...


A reader, August 22, 2013 - 3:27 am UTC

Hi sir,

This trigger doesn't work in Oracle 11gR2. Any reason why?
I tried the same steps on 9i and it worked successfully; however, it doesn't work in 11.2.0.1.


SQL> create temporary tablespace test_temp
  2  tempfile 'C:/temp/test_temp.dbf' size 512k reuse
  3  extent management local
  4  uniform size 64k
  5  /

Tablespace created.


SQL> alter user tk temporary tablespace test_temp;

User altered.

SQL> conn tk/tk
Connected.

SQL> create table t ( msg varchar2(4000) );

Table created.

SQL> create or replace trigger failed_to_extend_temp
  2  after servererror on database
  3  declare
  4  l_sql_text ora_name_list_t;
  5  l_n        number;
  6  osuser varchar2(20);
  7  begin
  8  if ( is_servererror(1652) )
  9  then
 10  SELECT SYS_CONTEXT('USERENV','OS_USER') into osuser from DUAL;
 11  insert into t values ('osuser = '||osuser);
 12  insert into t values ( 'ora_sysevent = ' || ora_sysevent );
 13  insert into t values ( 'ora_login_user = ' || ora_login_user );
 14  insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
 15  l_n := ora_sql_txt( l_sql_text );
 16  for i in 1 .. l_n
 17  loop
 18  insert into t values (l_sql_text(i));
 19  commit;
 20  end loop;
 21  end if;
 22  end;
 23  /

Trigger created.



SQL> show user
USER is "TK"
SQL>
SQL> Alter session set workarea_size_policy=MANUAL;

Session altered.

SQL> Alter session set sort_area_size=10;

Session altered.

SQL> Alter session set hash_area_size=10;

Session altered.

SQL>
SQL> select * from all_objects
  2  order by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12;
select * from all_objects
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 8
ORA-01652: unable to extend temp segment by 8 in tablespace TEST_TEMP


SQL> select * from t;

no rows selected




Tom Kyte
August 28, 2013 - 6:02 pm UTC

my first guess would be:

the osuser variable (declared varchar2(20)) is too small for the value sys_context returns.



how about putting some "debug" code in your trigger?
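One possible way to do that - a sketch, not the definitive fix: enlarge the buffer and add a WHEN OTHERS handler so an error inside the trigger logs itself instead of vanishing:

```sql
create or replace trigger failed_to_extend_temp
after servererror on database
declare
    l_sql_text ora_name_list_t;
    l_n        number;
    osuser     varchar2(4000);  -- was varchar2(20); a long OS user name overflows it
begin
    if ( is_servererror(1652) )
    then
        osuser := sys_context('userenv','os_user');
        insert into t values ( 'osuser = ' || osuser );
        insert into t values ( 'ora_sysevent = ' || ora_sysevent );
        insert into t values ( 'ora_login_user = ' || ora_login_user );
        insert into t values ( 'ora_server_error = ' || ora_server_error(1) );
        l_n := ora_sql_txt( l_sql_text );
        for i in 1 .. l_n
        loop
            insert into t values ( l_sql_text(i) );
        end loop;
    end if;
exception
    when others then
        -- debug: record why the trigger itself failed
        insert into t values ( 'trigger error: ' || substr( sqlerrm, 1, 3990 ) );
end;
/
```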

Temp Table Space - Performance

China, September 03, 2014 - 10:27 am UTC

Hi Tom,

Thanks for your expert advice!

I have two questions:
1. I find that in our production environment the temp tablespace always shows a high used percentage, e.g. over 90%. Why is it not released automatically?

2. If the temp tablespace stays at this percentage (e.g. over 90%), will it affect SQL performance, especially for sorts?