Question and Answer

Tom Kyte

Thanks for the question, dw.

Asked: April 08, 2002 - 10:45 pm UTC

Last updated: November 03, 2011 - 10:13 am UTC

Version: 8.1.5

Viewed 10K+ times

You Asked

Hi Tom...

Actually, I've never created any temporary table till now. I'd just like to ask: when creating the table, on what tablespace will the temporary table be stored? I need to consider the space to make sure that the creation is successful... And by the way, can you cite an example of creating a temporary table that lasts for the duration of a session?

Thanks and God bless.



and Tom said...

search for

create global temporary table

to see examples. I have many.


When you create a temporary table -- it will not use ANY tablespace. It will use the TEMPORARY tablespace of the current schema when it is accessed at runtime. When you create a temp table -- no space is allocated. When you insert into the temp table, the space will be taken from your temporary tablespace (or, if the temporary table is used in a stored procedure with definer rights, the temporary tablespace of the owner of the table).
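
For illustration, a minimal sketch of both durations (the table names session_gtt and txn_gtt are made up here; neither statement allocates any permanent storage -- space comes from your temporary tablespace only once you insert):

create global temporary table session_gtt
( id    number,
  name  varchar2(30)
) on commit preserve rows;   -- rows survive commits, disappear when the session ends

create global temporary table txn_gtt
( id    number
) on commit delete rows;     -- rows disappear at commit (the default)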


Comments

What about storage

G Rama Subramanian, April 09, 2002 - 11:28 am UTC

Hi Tom,

If the temporary table resides in the temporary tablespace at runtime, it means the temporary tablespace should be large enough to accommodate all the extents that might get created if the data stored in the temporary table at runtime is going to be huge. Does this create any sizing issue? We have a temporary tablespace that does not take this aspect into account. I now think that I should add sufficient space by adding a datafile to the temporary tablespace. Am I right?

Tom Kyte
April 09, 2002 - 1:24 pm UTC

Yes, TEMP needs to be sized appropriately to hold these things as well.
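
As a sketch only (the file names and sizes below are hypothetical and depend on your storage layout), space can be added to a locally managed temporary tablespace with a tempfile, and current allocation checked from the dictionary:

alter tablespace temp add tempfile '/u01/oradata/ORCL/temp02.dbf' size 2048m;

-- or grow an existing tempfile
alter database tempfile '/u01/oradata/ORCL/temp01.dbf' resize 4096m;

-- see what is currently allocated
select tablespace_name, file_name, bytes/1024/1024 mb
  from dba_temp_files;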

What are the drawbacks (performance) if they hold different data types?

Prince, April 09, 2002 - 2:34 pm UTC

Tom,

Say I have the TEMPORARY table with one varchar2 column.

create global temporary table gtd_tab1 (col1 varchar2(30)) on commit delete rows;

Is it better to use two different TEMP tables if I want to keep NUMBER data in one transaction and VARCHAR2 data in another transaction, or is it OK if I just share the same table?

Thanks,

Tom Kyte
April 09, 2002 - 3:21 pm UTC

well, that table can only hold varchar2 data, it cannot hold a number type.

If you add a number column, it'll be just fine to use the single table -- holding numbers one time, varchar2's the next.
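
A minimal sketch of that single-table approach (reusing gtd_tab1 from the question, with a hypothetical number column added):

create global temporary table gtd_tab1
( col1 varchar2(30),
  col2 number
) on commit delete rows;

insert into gtd_tab1 (col1) values ('some text');   -- a transaction that stores strings
insert into gtd_tab1 (col2) values (42);            -- another transaction stores numbers

Each transaction sees only its own rows, so sharing the one definition is fine.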

ARCHIVE LOG

ashraf, April 10, 2002 - 8:21 am UTC

WHAT IS THE IMPACT OF A TEMP TABLE ON ARCHIVE LOGS?

CAN YOU GIVE AN EXAMPLE?

Tom Kyte
April 10, 2002 - 8:58 am UTC

WHY DO YOU SHOUT.

It will have no extraordinary impact on the logs. If you have my book -- I cover this in great detail.

If all you do is INSERT and SELECT from the temp tables -- very very little redo is generated. We do not generate any redo to REDO the insert, but we do generate rollback (undo) information and this undo is protected by redo.

Hence, if you INSERT into a temp table, a very little amount of UNDO will be generated (in the event you rollback, we need to undo the insert as well). That UNDO will be protected by REDO. The INSERT itself will not be.

It is only when you start massively updating or deleting this temp object that large amounts of redo would be generated. For example, suppose you put 1,000,000 rows into the temp table. That insert would generate little UNDO and correspondingly little REDO. Now, suppose you "delete from temp_table" -- that will generate a massive amount of UNDO (in order to rollback the delete if you need) and that UNDO will be protected by REDO.

Since temporary tables are used almost 100% for INSERTS and SELECTS, this is just fine. You should rely on a COMMIT, TRUNCATE or end of SESSION to "delete" the data and that will have no effect on the logs since the data will just be "wiped out" -- not logged.
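
One way to see this for yourself (a sketch; it assumes a temp table named gtt and access to v$mystat/v$statname) is to watch your session's "redo size" statistic around an INSERT and a DELETE:

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';          -- snapshot #1

insert into gtt select 'x' from all_objects;   -- little redo: only the undo of the insert is logged

-- re-run the redo size query (snapshot #2), then
delete from gtt;                               -- much more redo: the undo of a delete holds the full rows

-- re-run the redo size query (snapshot #3) and compare the deltas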

Good one on the shout. I guess you are going to teach them many things.

Guest, April 10, 2002 - 11:22 am UTC


Redo and Rollback on Temporary Table

kumar, June 11, 2003 - 1:46 pm UTC

Tom,

We have 9iR2 on NT. When I was doing some tests on global temporary tables, I got the following results.
-------------------------------------------
Table type          Normal      Temporary
-------------------------------------------
Redo generated      3035912     3396384
Rollback (blks)     13          223
-------------------------------------------
The database is running in archivelog mode. I did a normal "insert into <table> select * from all_objects;". As per my understanding, the redo generated for a temporary table should be much less than for a normal table, and the rollback generated was also quite dramatic; I am not able to understand why I got the above results. Further, I did the same test with a direct-path insert. This time I got the results below:
-------------------------------------------
Table type          Normal      Temporary
-------------------------------------------
Redo generated      3247980     224
Rollback (blks)     1           1
-------------------------------------------

Though the second set of results was understandable, I could not understand the first test.
Can you please explain why I am getting the above results?

Yeah, I can replicate this as well!

Martin, August 28, 2003 - 4:42 am UTC

SQL> CREATE GLOBAL TEMPORARY TABLE gtt ( a VARCHAR2(10) ) ON COMMIT PRESERVE ROWS;

Table created.

SQL> INSERT INTO gtt SELECT 'x' FROM all_objects;

30568 rows created.


Statistics
----------------------------------------------------------
         25  recursive calls
      31273  db block gets
     139489  consistent gets
          1  physical reads
    3806280  redo size
        880  bytes sent via SQL*Net to client
        663  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      30568  rows processed

SQL> CREATE TABLE t ( a VARCHAR2(10) );

Table created.

SQL> INSERT INTO t SELECT 'x' FROM all_objects;

30569 rows created.


Statistics
----------------------------------------------------------
        109  recursive calls
        444  db block gets
     139536  consistent gets
          0  physical reads
     389604  redo size
        880  bytes sent via SQL*Net to client
        661  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      30569  rows processed

Why is there a massive difference in the redo generated for these statements (given what's been said above)?

Thanks in advance
 

Sorry, forgot to mention versions....

Martin, August 28, 2003 - 4:44 am UTC

SQL> SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
PL/SQL Release 9.2.0.3.0 - Production
CORE    9.2.0.3.0       Production
TNS for 32-bit Windows: Version 9.2.0.3.0 - Production
NLSRTL Version 9.2.0.3.0 - Production
 

Followup

Dušan, February 24, 2004 - 6:31 am UTC

Tom,

Just as I thought you would shed some light on the questions posted above about the amount of redo the guys observed in their tests on 9.2, the thread came to an end without any further postings from you.

Could you please explain why we can see redo much bigger than we thought in these test cases (and real apps too)?

Indeed, it seems that more redo gets generated with global temp tables.

Consider what I just saw on 9.2.0.3:

SQL> create table x1 as select * from all_objects where 0=1;

Table created.

SQL> create global temporary table x2 on commit delete rows as select * from all_objects where 0=1;

Table created.

SQL> insert into x1 select * from all_objects;

26798 rows created.

Statistics
----------------------------------------------------------
        364  recursive calls
       2337  db block gets
     108386  consistent gets
          2  physical reads
    3045692  redo size
        378  bytes sent via SQL*Net to client
        330  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      26798  rows processed

SQL> insert into x2 select * from all_objects;

26798 rows created.

Statistics
----------------------------------------------------------
         18  recursive calls
      28631  db block gets
     107874  consistent gets
          0  physical reads
    3441496  redo size
        378  bytes sent via SQL*Net to client
        329  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      26798  rows processed

Redo is bigger with global temporary table on commit delete?

Many thanks, and all the best! 

Global Temporary Table

Vamsi Dhar, February 25, 2004 - 1:24 am UTC

I am using a global temporary table and trying to access it via a synonym from a different schema. It's giving me an error saying "table does not exist".

Please show me an alternate way of accessing a global temporary table from a different schema.

Tom Kyte
February 25, 2004 - 9:07 am UTC

show me the example from start to finish where it does not work.  Perhaps you are missing a grant.

Here is my working example:


ops$tkyte@ORA920PC> drop user a cascade;
 
User dropped.
 
ops$tkyte@ORA920PC> drop user b cascade;
drop user b cascade
          *
ERROR at line 1:
ORA-01918: user 'B' does not exist
 
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> grant create table, create session to a identified by a;
 
Grant succeeded.
 
ops$tkyte@ORA920PC> grant create synonym, create session to b identified by b;
 
Grant succeeded.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> @connect a/a

a@ORA920PC> create global temporary table gtt ( x int );
 
Table created.
 
a@ORA920PC> grant all on gtt to b;
 
Grant succeeded.
 
a@ORA920PC>
a@ORA920PC> @connect b/b

b@ORA920PC> create synonym gtt for a.gtt;
 
Synonym created.
 
b@ORA920PC>
b@ORA920PC> select * from gtt;
 
no rows selected
 
b@ORA920PC> insert into gtt values ( 1 );
 
1 row created.
 
b@ORA920PC> select * from gtt;
 
         X
----------
         1
 
b@ORA920PC>
 

Follow up

Dušan, February 25, 2004 - 1:15 pm UTC

I might actually be hitting the bug 2874489.



Excessive redo on temporary tables

gorr, March 22, 2004 - 5:04 pm UTC

Though I always appreciate Tom's useful responses, this seems to be a thread that has been neglected. I am having the same problem with temporary tables in version 9.2.0.4, where a large volume of redo is generated for an insert into a temporary table. I have been watching this thread for some response but there doesn't appear to be any; I'm hoping for some follow-up. Even if there is a patch related to this issue, that would be great, as I can't seem to find any on MetaLink related to it.

Thanks again Tom,
Love the books and the advice

Tom Kyte
March 22, 2004 - 8:48 pm UTC

look up one review?

Sorry, I must be blind

feeling a little stupid..., March 23, 2004 - 3:37 pm UTC

Sorry about that. I don't know why I didn't see that.

temporary tables and redo

lizhuohua, August 01, 2004 - 9:26 am UTC

Hi Tom,
  I did some tests like this:
SQL>drop table tt;

Table dropped.

Elapsed: 00:00:00.00
SQL>drop table temp2;

Table dropped.

Elapsed: 00:00:00.00
SQL>create table tt(a char(1));

Table created.

Elapsed: 00:00:00.00

SQL>create global temporary table temp2(a char(1));

Table created.

Elapsed: 00:00:00.00
SQL>BEGIN
  2  do_sql('insert into tt select ''x'' from all_objects');
  3  do_sql('insert into temp2 select ''x'' from all_objects');
  4  END;
  5  /
303,852 bytes of redo generated for "insert into tt select 'x'"...
2,925,432 bytes of redo generated for "insert into temp2 select "...

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.03
SQL>alter table tt modify a char(2000);

Table altered.

Elapsed: 00:00:09.01
SQL>alter table temp2 modify a char(2000);

Table altered.

Elapsed: 00:00:00.00
SQL>BEGIN
  2  do_sql('insert into tt select ''x'' from all_objects');
  3  do_sql('insert into temp2 select ''x'' from all_objects');
  4  END;
  5  /
51,760,160 bytes of redo generated for "insert into tt select 'x'"...
2,862,656 bytes of redo generated for "insert into temp2 select "...

PL/SQL procedure successfully completed.

Elapsed: 00:00:15.03

Can you please explain what generates redo when I do DML on a temporary table?
 

 

Tom Kyte
August 01, 2004 - 11:11 am UTC

there is an issue where redo is erroneously generated for temp tables and the redo was a function of the number of rows -- it has been corrected in later (10g) releases.

Thanks

lizhuohua, August 01, 2004 - 8:02 pm UTC

Thanks, Tom, for your review.

Global Temporary Table and Autonomous Transaction

A reader, September 07, 2004 - 4:39 pm UTC

Hi,

I have a transaction level global temporary table that is populated within a procedure. In the procedure, after the global temp table is populated, there is a call to a procedure that is defined as an autonomous transaction.

My question is:
Would a commit in the autonomous transaction cause the rows in my global temp table to be deleted?

Thanks


Tom Kyte
September 07, 2004 - 4:44 pm UTC

nope - but the autonomous transaction will not be able to see the rows in there (since they are not committed)


ops$tkyte@ORA9IR2> create global temporary table t ( x int ) on commit delete rows;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 1 );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          for x in ( select count(*) cnt from t )
  5          loop
  6                  dbms_output.put_line( 'cnt = '||x.cnt );
  7          end loop;
  8          commit;
  9  end;
 10  /
declare
*
ERROR at line 1:
ORA-14450: attempt to access a transactional temp table already in use
ORA-06512: at line 4
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
         1
 

Hats Off!!!

A reader, September 07, 2004 - 4:48 pm UTC

Thanks a lot for your quick answer, complete with examples. It is amazing how you manage to do things so fast and easily...

Selects on GTT

PK, November 21, 2004 - 2:43 am UTC

Suppose I am reading a GTT in a procedure for some processing. Is selecting data from a GTT always from disk, or will the data be cached similar to a normal table so that subsequent reads would be faster?

Tom Kyte
November 21, 2004 - 9:17 am UTC

gtt's generate undo and everything, they do the same sort of consistent read stuff "real" tables do.

they are cacheable.

close but no cigar - 10g still broken with respect to excessive redo generation on global temp tables

Nik Malenovic, December 23, 2004 - 12:15 am UTC

Tom,

I am seeing the same bug - generating loads of redo in global temporary tables under 10.1.0.2.0 (Wintel). Did this bug creep back into the codebase after the initial release of 10g?

thanks,

Nik

C:\>sqlplus /nolog

SQL*Plus: Release 10.1.0.2.0 - Production on Wed Dec 22 23:04:22 2004

Copyright (c) 1982, 2004, Oracle. All rights reserved.

@ > @connect "mmesys/*********@m4u"
MMESYS@m4u > set autotrace on
MMESYS@m4u > create global temporary table sec_principal_rnum
2 as
3 select rownum rnum, sp.* from sec_principal sp where rownum < 1;

Table created.

Elapsed: 00:00:00.06
MMESYS@m4u >
MMESYS@m4u > insert into sec_principal_rnum
2 select rownum rnum, sp.* from sec_principal sp;

21428 rows created.

Elapsed: 00:00:00.21

Execution Plan
----------------------------------------------------------
   0      INSERT STATEMENT Optimizer=ALL_ROWS (Cost=121 Card=21428 Bytes=3557048)
   1    0   COUNT
   2    1     TABLE ACCESS (FULL) OF 'SEC_PRINCIPAL' (TABLE) (Cost=121 Card=21428 Bytes=3557048)





Statistics
----------------------------------------------------------
        150  recursive calls
      23762  db block gets
        969  consistent gets
          0  physical reads
    2920880  redo size            <-------------- excessive redo generation
        800  bytes sent via SQL*Net to client
        859  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      21428  rows processed


Tom Kyte
December 23, 2004 - 11:11 am UTC

no, you have to compare redo generated by a temp table vs redo generated by "non-temp table".

temp tables never said "we don't do redo", they are about less redo and automagic clean up and no concurrency issues.

They generate UNDO, (have to, you can rollback, rollback to savepoint, consistent read them etc).  UNDO generates REDO.


ops$tkyte@ORA10G> create table t as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA10G> create global temporary table gtt as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> set autotrace traceonly statistics;
ops$tkyte@ORA10G> insert into t select * from all_objects;
 
47800 rows created.
 
 
Statistics
----------------------------------------------------------
       3352  recursive calls
      54528  db block gets
      68457  consistent gets
          2  physical reads
   17799756  redo size
        809  bytes sent via SQL*Net to client
        821  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
      47800  rows processed
 
ops$tkyte@ORA10G> insert into gtt select * from all_objects;
 
47799 rows created.
 
 
Statistics
----------------------------------------------------------
       1854  recursive calls
      51056  db block gets
      67308  consistent gets
          0  physical reads
    6716312  redo size
        809  bytes sent via SQL*Net to client
        823  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      47799  rows processed
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select 6716312/17799756*100 from dual;
 
6716312/17799756*100
--------------------
          37.7326071


About 1/3 the redo. 


Use of GTTs to process "secure" data

Narendra, November 12, 2008 - 7:33 am UTC

Tom,

Scenario:
There exist 2 databases, say A & B, where A is the main OLTP database and refers to B as and when required. The data in B is more "secured" and every access to B's data needs to be approved and subsequently monitored. Now, a new player comes into the picture (say BPEL) that needs to send data to be consumed by A & B. It has been agreed that BPEL will pass the data to A, which in turn will be responsible for storing data in B.
While debating the best way to pass data from BPEL to database A in a single synchronous call, I suggested that we could use a set of GTTs (specifically ON COMMIT DELETE ROWS) in A to store the data (obviously temporarily), and procedures in A would refer to the GTTs and populate the relevant data in B. When the call is over, the GTTs will automatically be cleared and there will not be any issue of "accidentally" leaving any traces of "secure" data in A.
However, one of the consultants stated that one cannot guarantee that data in GTTs will be cleared and hence it is not a "secure" option. In his support, he referred to this question and in particular a follow-up from you, as below:

Suppose I am reading a GTT in a procedure for some processing. Is selecting data from GTT always
from disk or will the data be cached as similar to normal table so that subsequent reads would be
faster?


Followup November 21, 2004 - 9am US/Eastern:

gtt's generate undo and everything, they do the same sort of consistent read stuff "real" tables
do.

they are cacheable.


Now, I firmly believe that your comment was being read out of context and that it is not true that GTTs (specifically ON COMMIT DELETE ROWS) can contain data after the transaction is over, or that the data in GTTs can be "seen" by other sessions.

Can you please throw some light over this?

Oracle Database Version : 8.1.7.4
Tom Kyte
November 12, 2008 - 10:42 am UTC

If you allow secure data to be passed to A, it might be found in


a) in the memory (OS memory) of A
b) possibly in swap on A
c) in the SGA of the instance on A
d) in the redo stream of A
e) in the undo stream of A
f) in the tempfiles of A
g) in the PGA of processes on A


Can you "select" the data on A after the transaction commits?
No.

Could someone determined to see it find it?
Probably, sure. It was on A, the OS dealt with it, the network handled it, it is potentially all over the place on A.

global temporary tables are NOT distributed transaction friendly either, beware of that.

GTTs

Narendra, November 13, 2008 - 5:28 am UTC

Tom,

Agreed. But all the ways mentioned for accessing data in GTTs are not "normal" ways of accessing data. What I mean is, for this problem the other (supposedly safer) alternatives suggested were:
1. Pass data as a CLOB from BPEL to A (an IN parameter of type CLOB for a procedure)
2. Pass data as one or more record types from BPEL to A (again, one or more IN parameters for a procedure, of SQL types)

Now, will it be correct to say that the data will be more "secure" when passed using either of the above options than when using a GTT? Won't it still be possible for a determined user to access this data using some or all of the options mentioned by you?
Tom Kyte
November 14, 2008 - 4:30 pm UTC

data is either

a) secure
b) or not

it is not analog, it is purely digital - zero or one.

Would some things be harder to exploit than others? Sure.


But the clob (which we can - well - put into temp and might well do so) suffers from the same attacks as the GTT.

Once you put that sensitive data on that machine, it is all over that machine. They can all be compromised in very similar fashion (because memory isn't assured to be in real memory).

You quite simply have to run this machine at "system high" security wise, period.

I could see it coming...

Narendra, November 19, 2008 - 1:25 pm UTC

data is either

a) secure
b) or not

it is not analog, it is purely digital - zero or one.

Would some things be harder to exploit than others? Sure.


But the clob (which we can - well - put into temp and might well do so) suffers from the same attacks as the GTT.

Once you put that sensitive data on that machine, it is all over that machine. They can all be compromised in very similar fashion (because memory isn't assured to be in real memory).

You quite simply have to run this machine at "system high" security wise, period.


I could almost sense a similar kind of reply coming from you.
While I agree with your view, I think we are allowed to assume that our database server machine runs at "system high" security wise. I guess I was only trying to advocate the GTT approach because I thought it would be a very "database-friendly" approach, i.e. I can easily query/sort/find records in GTTs rather than trying to parse the CLOB data. However, somehow, the CLOB-approach promoter came up with this security reason and pointed to your view in this thread as evidence. While I did ask him for a complete test case to show me how data in a GTT can be accessed from any other session either during the processing or after the session ends, I have not heard anything yet.
The only reason for the debate was that I was trying to find an approach based on evidence rather than the FUD (fear/uncertainty/doubt) factor (but I lost... :)
Anyway, thanks a lot for your input.
Tom Kyte
November 24, 2008 - 10:51 am UTC

... I think we are allowed to assume that our database server machine runs at "system high" security wise. ...

sweet, then there is nothing to worry about how you handle the data *at all* - you just said so. You are system high. You are done.


.... how data in a GTT can be accessed from any other session either during the processing or after the session ends, I have not heard anything yet. ...

they would have to wait for the GTT to spill to disk and then start dumping datafiles manually - that'll be the only way.

BUT - if they do that, then everything I said above holds true, I'll just wait for you to page out or something and dump that from the OS - or just scan memory.

There will be no "sql" way for this data to leak out to another user, but someone intent upon getting it WILL GET IT - regardless of what you do here.



Stats on GTT Vs Dynamic Sampling

A reader, February 11, 2010 - 12:41 pm UTC

I have a query which joins 6 tables. 5 tables are permanent and 1 is a GTT. The only use of the GTT is for this specific query. The GTT is always populated with 400k records. Stats on the 5 permanent tables are up to date, and these tables do not change after being analyzed. In such a scenario, I think it would be better to have stats on the GTT instead of a dynamic sampling hint in the query. Is that true? If not, why?

Thanks
Koshal

Tom Kyte
February 16, 2010 - 11:01 am UTC

... In such a scenario i think it would be better to have STATS on GTT
instead of dynamic sampling hint in the query. ...

first, in 10g you would not need a hint, dynamic sampling would be level 2 by default, hence we would sample if it was not analyzed.

second, in 9i, if there were no indexes on the GTT (global temporary table), we would not need a hint, dynamic sampling level 1 is in effect

Third - why do you think that? What's the logic? Not saying you are wrong, just asking "why, what was the thought process".



If the GTT is always 400k *and* the skew of the data is always the same/similar (high/low values, distribution of data, etc) then yes, perhaps you populate it with representative data and gather/lock statistics on it once. It would reduce the hard parse time and make the statistics more predictable.


If the GTT isn't like that, then you would either have to refresh the statistics as things change over time, or rely on dynamic sampling to refresh it over time for you instead.
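
A sketch of the "gather once and lock" approach (the table name GTT_STAGE is hypothetical; this assumes the GTT has been loaded with a representative 400k rows in the current session):

begin
   dbms_stats.gather_table_stats( user, 'GTT_STAGE' );
   dbms_stats.lock_table_stats( user, 'GTT_STAGE' );
end;
/

-- note: dbms_stats commits, so for an ON COMMIT DELETE ROWS table you would
-- typically use dbms_stats.set_table_stats to set representative values instead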

Temporary Tables and explain plans

Alex Soriano, February 17, 2010 - 2:33 pm UTC

I have a temporary table where I store data provided by an external system; it is a kind of Excel spreadsheet (in a database, believe it or not) and I am trying to normalize the information.

In order to normalize it I do a GROUP BY (in fact it is a DISTINCT, but it is easier to see my point with a GROUP BY), and I have an index composed of all the fields in the GROUP BY. But when I look at the explain plan it always says full table scan, so I decided to drop the index; after dropping the index the query took around 10 times longer than with the index, so I recreated the index and the query went back down to the original time.

In conclusion I assume the query is going via the index even when the explain plan says it is doing a full table scan.
Am I right?
Is there any issue with explain plans, indexes and global temporary tables?
Tom Kyte
February 18, 2010 - 8:20 am UTC

http://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html

explain plan isn't what you want to use

dbms_xplan.display_cursor would be (with sql_id as input)

tkprof with row source operation would be.


I doubt it is using the index, something else was afoot. Using the index in general would be the wrongest way to approach an aggregation of every row in a table (unless all of the necessary columns were in the index, of course)
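
For example (a sketch; the query and the table name my_gtt are placeholders), you can see the plan that actually ran, with real row counts, like this:

select /*+ gather_plan_statistics */ col1, col2, count(*)
  from my_gtt
 group by col1, col2;

-- show the actual plan and row counts for the last statement run in this session
select * from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );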

temporary table and materliazed view

Syed Khurram, December 22, 2010 - 12:04 am UTC

Hi Tom

You are doing great work, man. My question is: I have created a session-based temporary table and created a materialized view on it. But when the temporary table is populated and I execute dbms_snapshot.refresh('mview_temp_table','C'), the data immediately vanishes from the temporary table. Why is this happening, and how can I achieve what I want? I want to keep the data in the snapshot even if the temp table becomes empty. Thanks.


Tom Kyte
December 22, 2010 - 2:10 pm UTC

ops$tkyte%ORA11GR2> create global temporary table t ( x int primary key, y int ) on commit preserve rows;

Table created.

ops$tkyte%ORA11GR2> create materialized view mv as select * from t;

Materialized view created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t select rownum, rownum from all_users where rownum <= 5;

5 rows created.

ops$tkyte%ORA11GR2> exec dbms_mview.refresh( 'MV', 'C' )

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from t;

         X          Y
---------- ----------
         1          1
         2          2
         3          3
         4          4
         5          5

ops$tkyte%ORA11GR2> select * from mv;

         X          Y
---------- ----------
         1          1
         2          2
         3          3
         4          4
         5          5




where is your example? You have mine....

Temporary table

Tony, October 26, 2011 - 12:33 pm UTC

Hi Tom,

I have the below doubts on temporary tables

1) Suppose I have the same data in a temporary table and in a normal table. Will querying the temporary table be faster compared to querying the normal table? (Both tables have the same set of columns, the same indexes, and contain 1 lakh records each.)

2) Will an index on a temporary table help to speed up the query in the same way as an index on a normal table? If the temporary table contains 1 lakh records and the query returns 3 or 4 records, is indexing the temporary table a good idea?

3) Is the memory for a temporary table allocated in the PGA just like PL/SQL arrays, or is it organized in the file system (tablespaces and data files) like normal tables?

Thank you.

Tom Kyte
October 26, 2011 - 1:31 pm UTC

1) not really.

2) sure

3) it is SGA/disk based. They in general wouldn't fit in your PGA


You should remember that "lakh" is a term not used frequently in the general population. I always have to look up what it means. Just saying...

change of behaviour??

dxl, November 03, 2011 - 9:06 am UTC

Hi

Earlier in this thread you gave an example of the behaviour of GTTs with autonomous transactions on a 9iR2 database.

I tried your example on an 11gR2 database and got different results:



SQL*Plus: Release 11.2.0.1.0 Production on Fri Nov 4 00:59:16 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> create global temporary table t ( x int ) on commit delete rows;

Table created.

SQL> insert into t values ( 1);

1 row created.

SQL> declare
  2              pragma autonomous_transaction;
  3      begin
  4              for x in ( select count(*) cnt from t )
  5              loop
  6                      dbms_output.put_line( 'cnt = '||x.cnt );
  7              end loop;
  8              commit;
  9      end;
 10  /

PL/SQL procedure successfully completed.

SQL>



In your example earlier this would have failed with an ORA-14450 error.
Is this change of behaviour expected? Documented ??

thanks

Tom Kyte
November 03, 2011 - 10:13 am UTC

ops$tkyte%ORA11GR2> create global temporary table t ( x int ) on commit delete rows;

Table created.

ops$tkyte%ORA11GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA11GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          for x in ( select count(*) cnt from t )
  5          loop
  6                  dbms_output.put_line( 'cnt = '||x.cnt );
  7          end loop;
  8          commit;
  9  end;
 10  /
cnt = 0

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into t values ( 2 );
  5          commit;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-14450: attempt to access a transactional temp table already in use
ORA-06512: at line 4


ops$tkyte%ORA11GR2> select * from t;

         X
----------
         1

ops$tkyte%ORA11GR2> 




looks like you can "access it" (read it), but that is rather useless. It will always be empty in the autonomous transaction. You cannot see the outstanding work of the parent transaction, nor can you populate it with your own data to use.

I believe this to be a bug. It raised the 14450 in 10.2.0.5, but that changed in 11g. I shall file one.
