Question and Answer

Tom Kyte

Thanks for the question, moorthy.

Asked: February 21, 2004 - 4:02 pm UTC

Answered by: Tom Kyte - Last updated: September 20, 2012 - 5:40 am UTC

Category: Developer - Version: 8.1.7


You Asked

Can you give me a situation where you need to use a global temp table? What are the pros and cons?

and we said...

do you have my book "Expert One-on-One Oracle"? (i cover this)

I'm of the mindset that they are almost never needed, they make sql server programmers feel 'more at home' since they are used so frequently there -- but in Oracle -- you don't:

put data into temp table 1;
put data into temp table 2;
put data into temp table 3;
return join of t1, t2, t3;

you just return
select *
from ( query1 ) t1, (query2) t2, (query3) t3
where ....;

The optimizer deals with optimizing -- not you.

I've used them (temp tables) in real applications. One application was a palm sync conduit -- the conduit running on the pc would dump the contents of the palm database into Oracle, run a stored procedure to reconcile the changes and return the updates. The temp table was used as an interface table.

So, it is rare to need them and you'll "sort of know you do" when you do as there won't be any other effective way to perform what you need.
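The "interface table" use described above can be sketched like this; the table and procedure names here are hypothetical, not from the original conduit:

```sql
-- Hedged sketch of a GTT used as an interface table (hypothetical names).
-- The definition is created once, at install time -- not per session.
create global temporary table sync_staging (
    record_id  number,
    payload    varchar2(4000)
) on commit delete rows;  -- staged rows vanish when the transaction ends

-- The client then, in one transaction:
--   1) bulk-inserts the device's records into sync_staging
--   2) calls a procedure that reconciles the whole set against the real tables
--   3) commits, which empties the staging table automatically
```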


and you rated our response (95 ratings)

Reviews

difference between "temp table" and "global temp table"

February 21, 2004 - 9:42 pm UTC

Reviewer: Dev from India

Hi Tom,
Have to disagree with you on that one. Some reporting procedures are so complex that they simply cannot be expressed in plain SQL (or even analytic functions, for that matter).

I have a question, though: what is the difference between "temp table" and "global temp table"?

Keep up the good work

Tom Kyte

Followup  

February 22, 2004 - 9:33 am UTC

show me. give me an example.

if you can:

a) populate table1
b) populate table2
c) populate table3
and so on

and return some convoluted join thereof, I can do that in a single query.


so.... show me.


temp table and global temp table are synonymous in Oracle.

No difference

February 22, 2004 - 4:58 am UTC

Reviewer: Venkatesh from Chennai, India

Dev,

To my knowledge temp table = global temp table.

You just cannot create a temp table without the keyword "global".

Tom, your comments are solicited.

Tom Kyte

Followup  

February 22, 2004 - 9:38 am UTC

in Oracle that is correct.
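For instance (a minimal sketch; omitting the GLOBAL keyword is simply a syntax error in Oracle):

```sql
-- works: the GLOBAL keyword is mandatory
create global temporary table gtt_demo ( x number ) on commit delete rows;

-- fails with a syntax error: there is no "local" temporary table in Oracle
-- create temporary table t ( x number );
```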

February 22, 2004 - 8:47 am UTC

Reviewer: Riaz Shahid from PRAL, Lahore, Pakistan

To my knowledge:
a global temp table is accessible to everyone, while a temp table is not.

Tom Kyte

Followup  

February 22, 2004 - 9:40 am UTC

In ANSI there are two types of temp tables

a) global, you create this and it is registered in the data dictionary, it lives "forever". the "global" pertains to the schema definition

b) local, it is created "on the fly" and disappears after its use. you never see it in the data dictionary.

Oracle does global
SQLserver does local

for example.
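A sketch of the two scoping options Oracle's (global) temporary tables do offer -- the definition is permanent either way; only the rows are private to each session or transaction:

```sql
-- rows vanish at COMMIT (transaction-scoped)
create global temporary table gtt_tx  ( x number ) on commit delete rows;

-- rows survive until the session ends (session-scoped)
create global temporary table gtt_ses ( x number ) on commit preserve rows;
```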

Global Temp Table

February 23, 2004 - 12:38 am UTC

Reviewer: Dhimant Antani from Gujarat - India

First of all let me say that this is the best site when it comes to Oracle Database. I would like to have more info on Global Temp Tables. Please provide some examples if possible.


Thanks and Regards,

Dhimant





Tom Kyte

Followup  

February 23, 2004 - 7:41 am UTC

simply search this site for

global temporary table


you'll see plenty

February 23, 2004 - 1:25 am UTC

Reviewer: Riaz Shahid from PRAL, Lahore, Pakistan

Dhimant !

Just search this site for "Temporary tables" or "Global Temporary Tables" and you will find lots about that.



Forced to use global temporary table, how to avoid it ?

February 23, 2004 - 8:40 am UTC

Reviewer: Ashiq Shamsudeen A from Trichy, Tamil Nadu, India

Hi Tom,

I've been forced to use a global temporary table to hold data. Is there any way to avoid it? Look at the package below:

package body test_pkg as

procedure proc_b(
    i_tracking_no in varchar2,
    my_ref_cur    out test_pkg.my_cv
) as
begin
    for i in ( select cl_number, hts_no, part_id, product_desc, source_country
                 from classification_product_master
                where cl_number = i_tracking_no )
    loop
        insert into class_test (
            htsno, us_hsno, cl_number, status, rec_status,
            hts_desc, part_id, product_desc, source_country
        )
        select i.hts_no, hts_chapter_no, i.cl_number, 'Y', rec_status,
               hts_desc, i.part_id, i.product_desc, i.source_country
          from lcs_hts_mult_desc
         where (hts_chapter_no, effective_date) =
               ( select hts_chapter_no, max(effective_date)
                   from lcs_hts_mult_desc
                  where country_id = '02'
                    and hts_chapter_no = i.hts_no
                    and language_cd = 'EN'
                  group by hts_chapter_no );
    end loop;

    open my_ref_cur for select * from class_test;

end proc_b;

end test_pkg;

In the classification_product_master table I have duplicate values for hts_no, say:

select hts_no from classification_product_master
where ....

hts_no
3924104000
3924104000
3924104000
3924104000
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
8214909000
8214909000
8214909000
8214909000
8214909000
8309900000
8309900000

I'll be passing these values to the query:

select hts_chapter_no, max(effective_date)
from lcs_hts_mult_desc
where country_id = '02'
and hts_chapter_no = i.hts_no <===
and language_cd='EN'
group by hts_chapter_no


Is there any other way to avoid using this temp table?



Tom Kyte

Followup  

February 23, 2004 - 9:04 am UTC

that logic is not any different than this query:


select i.hts_no, j.hts_chapter_no, i.cl_number, 'Y',
j.rec_status, j.hts_desc, i.part_id,
i.product_desc, i.source_country
from classification_product_master i,
lcs_hts_mult_desc j
where i.cl_number = I_TRACKING_NO
and (j.hts_chapter_no,j.effective_date) =
(select hts_chapter_no, max(effective_date)
from lcs_hts_mult_desc
where country_id = '02'
and hts_chapter_no = i.hts_no
and language_cd = 'EN'
group by hts_chapter_no );


Your existing logic does not "de-dup" any rows from classification_product_master; it takes every row in that table and joins it to the "most recent" row in the lcs_hts_mult_desc table (so if there were 15 rows in classification_product_master with hts_no "12345", you would get 15 (or more) rows back).

It might be more efficient to simply query:



select *
  from (
        select i.hts_no, j.hts_chapter_no, i.cl_number, 'Y',
               j.rec_status, j.hts_desc, i.part_id,
               i.product_desc, i.source_country,
               j.effective_date,
               max( case
                      when country_id = '02'
                       and language_cd = 'EN'
                      then j.effective_date
                    end ) over ( partition by j.hts_chapter_no ) max_effective_date
          from classification_product_master i,
               lcs_hts_mult_desc j
         where i.cl_number = I_TRACKING_NO
           and i.hts_no = j.hts_chapter_no
       )
 where effective_date = max_effective_date
/


that joins I to J by hts_no (which is what you do) and keeps only the record with the effective_date = max( effective_date ) for rows with that hts_no and country_id = '02' and language_cd = 'EN'.


You were never forced to use a gtt, never.

ON COMMIT DELETE ROWS

February 23, 2004 - 9:36 am UTC

Reviewer: Dušan from London, England

I sometimes use them as a "multi-record" application interface to the underlying database.
In such a scenario, an application inserts a number of records into a "global temporary, on commit delete rows" table and calls a stored proc to do something with the whole passed record set -- usually some inserts, updates and/or merge statements on the whole bulk of records, never one by one in a loop. Upon commit or rollback, the contents of the temp table just disappear.
From the perspective of the application, the interface could not be simpler - start transaction, do a number of inserts and call a stored proc when the whole set is ready to be processed.
How useful do you find this way of using global temporary tables in your practice, Tom?

Tom Kyte

Followup  

February 23, 2004 - 10:02 am UTC

You just described the same thing I did in my answer :)


as an interface table they are excellent.

Also...

February 23, 2004 - 11:48 am UTC

Reviewer: Kashif from Reston, VA

Hi Tom,

I've also used GTTs when I've run into a query that takes a while to execute and that I need to execute over and over in my procedure. I simply dump the contents of the query once into the GTT at the beginning, and then hit the GTT as many times as I need throughout the procedure. Some might argue that I might not be pulling in the latest set of the data -- e.g. if some user added data to the underlying tables after I stored the resultset in the GTT -- to which I have to say that this happens in the middle of the night, with no users on the database, so there is no chance of that. Your thoughts?

Kashif

Tom Kyte

Followup  

February 23, 2004 - 4:41 pm UTC

I'd have to understand why I needed the result over and over and over again. If so, so be it -- but many times, I find I can take the procedural logic that needed that set time and again and reduce it to a much smaller set of non-procedural code.

It would be a valid use case, yes.
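A minimal sketch of the pattern Kashif describes -- materialize an expensive result once per run, then reuse it. All of the names below are hypothetical:

```sql
-- structure only, no rows yet; ON COMMIT PRESERVE ROWS so the cached
-- result survives any intermediate commits during the nightly run
create global temporary table cached_results
on commit preserve rows
as
select * from some_expensive_view where 1 = 0;

-- in the procedure: run the slow query exactly once...
--   insert into cached_results select * from some_expensive_view;
-- ...then hit cached_results as many times as needed.
```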

temp table useful - did I say that?

February 27, 2004 - 4:53 pm UTC

Reviewer: dev from fla

I'm probably asking for trouble, since there are so many ways to approach this problem, but here is a case where a two step approach involving an intermediate table works more quickly than a single query.

Start with a query that performs well, add a predicate ( the one below that includes smk_u ), watch the perfectly good execution plan change slightly, performance go out the window. The original plan used hash joins, new plan uses hash joins then nested loops, performance is unacceptable. I know, work on tuning the query. To me it is quicker to divide and conquer - I am considering using a temp table as a workaround as demonstrated below.

Original query - if you remove the DECODE/NVL predicate it runs quickly; with that predicate it is just too slow.

SELECT
tran_pd, cal_date, sku0, store, SUM(sal_u) sal_u, SUM(smk_u) smk_u, SUM(sal_r) sal_r, temp_price, perm_price
FROM
(SELECT
tran_pd, cal_date,sku0, store, sal_u,smk_u , sal_r,
pricing_bp.get_temp_price(store, cal_date,sku0) temp_price,
pricing_bp.get_perm_price(store,cal_date,sku0) perm_price
FROM
skudaily skud, calendar c
WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd AND store = :store
AND NVL(sal_u,0) - DECODE(:p_sales, 1,0,NVL(smk_u,0)) <> 0
AND c.cal_pd = tran_pd
AND sku0 in
(SELECT tmsk_sku0 FROM rpt_temp_sku0s
WHERE tmsk_request_id = :p_req_id )
)
GROUP BY tran_pd, sku0, store , cal_date
/

where skudaily is a well-indexed, mature union view with millions of rows. I know, use partitioned tables.

The fairly easy, fairly fast workaround..

variable p_req_id number;
variable p_start_pd varchar2(8);
variable p_end_pd varchar2(8);
variable p_sales number;
variable store number;

begin
:p_req_id := &&1;
:p_start_pd := '20030131';
:p_end_pd := '20040131';
:p_sales := 1;
:store := &&2;
end;
/
alter session set optimizer_goal = CHOOSE;

insert into junk_pjk_gtemp
SELECT
tran_pd, cal_date,sku0, store, sal_u,smk_u , sal_r
FROM
skudaily skud, calendar c
WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd
AND c.cal_pd = tran_pd
AND sku0 in
(SELECT tmsk_sku0 FROM rpt_temp_sku0s
WHERE tmsk_request_id = :p_req_id )
AND store = :store
/
commit;

SELECT
tran_pd, cal_date, sku0, store, count(*) num_rows, SUM(sal_u) sal_u, SUM(smk_u) smk_u, SUM(sal_r) sal_r,
pricing_bp.get_temp_price(store, cal_date,sku0) temp_price,
pricing_bp.get_perm_price(store,cal_date,sku0) perm_price
FROM junk_pjk_gtemp
WHERE NVL(sal_u,0) - DECODE(:p_sales, 1,0,NVL(smk_u,0)) <> 0
group by tran_pd, cal_date, sku0 , store
/

By moving the problem predicate to the second query, I can execute the first query using the efficient execution plan.
It works, it's faster, the annoying predicate is used on the small data set in the temp table. I didn't have to try to out think the optimizer, I kept the nice plan it gave me and instead of struggling with tuning the query I used a temp table. Fire away...

Tom Kyte

Followup  

February 27, 2004 - 5:05 pm UTC

replace junk_pjk_gtemp with:

SELECT
tran_pd, cal_date,sku0, store, sal_u,smk_u , sal_r
FROM
skudaily skud, calendar c
WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd
AND c.cal_pd = tran_pd
AND sku0 in
(SELECT tmsk_sku0 FROM rpt_temp_sku0s
WHERE tmsk_request_id = :p_req_id )
AND store = :store
and rownum >= 0


and leave it as one query -- but personally I'd be really surprised that a predicate on nvl(column) would do that. It is not as if that predicate would help us toward an index.
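The rownum >= 0 trick works because that predicate forces Oracle to materialize the inline view before the outer predicate is applied, rather than merging the view into the main query. A hedged sketch with hypothetical names:

```sql
-- "rownum >= 0" blocks view merging / predicate pushing, so the
-- inner query keeps its efficient plan and the troublesome predicate
-- runs against the (small) materialized result afterwards
select *
  from ( select t.*                -- the fast, well-planned inner query
           from some_big_view t
          where t.store = :store
            and rownum >= 0 )
 where nvl(sal_u, 0) - decode(:p_sales, 1, 0, nvl(smk_u, 0)) <> 0;
```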

Global temporary tables as a big 'in list'

February 28, 2004 - 4:17 am UTC

Reviewer: Dale Ogilvie from New Zealand

I think the discussion here about using GTT's as an 'interface' table matches our usage. We have a master table containing a few thousand rows, and a detail table containing tens of millions of rows with a foreign key to the master table, and a GTT with only one column designed to receive inserts of master table primary key values. In addition to this we have a stored procedure that performs a query (for reporting purposes) on the detail table using a query like (but much more complex than) this:

SELECT
detail.*
FROM
detail,
gtt
WHERE detail.master_id = gtt.master_id;

At the moment I'm not sure that our schema and the query above is as good as it can be. Specifically:

1. gtt.master_id is not declared unique, whereas master.master_id is unique, should it be?
2. gtt.master_id does not have a foreign key relationship defined to master. Can (should) a fk to master be defined for the column on the gtt?
3. My third concern is to do with the variability in the number of rows that can be inserted into gtt, anything from 1 to 5000. We currently have inserted stats on the gtt to tell the CBO that it contains 0 rows, so the CBO will choose a plan based on this information, and thereafter all queries will use this plan. My concern is that a plan that is optimal for 1 row turns out to be not so flash for 1000. In practice we can expect either 10 or 1000 rows in gtt. What is the best approach to handle this variability? I have looked at the DYNAMIC_SAMPLING and DYNAMIC_SAMPLING_EST_CDN hints, which look promising.

SELECT /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */
detail.*
FROM
detail,
gtt
WHERE detail.master_id = gtt.master_id;

Does using these hints mean that we are effectively doing a hard parse each time the stored proc is called?


Tom Kyte

Followup  

February 28, 2004 - 11:58 am UTC

OK, here big_table is a 5,000,000 row table.

GTT is like yours, I assume....


@connect big_table/big_table
drop table gtt;

create global temporary table gtt ( id int ) on commit delete rows;

create or replace procedure foo( p_rand in number default dbms_random.value( 0, 5000 ))
as
l_cnt number := 0;
begin

insert into gtt
select rownum
from big_table
where rownum <= p_rand;

dbms_output.put_line( sql%rowcount || ' rows in gtt' );

for x in ( select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt
where big_table.id = gtt.id )
loop
l_cnt := l_cnt+1;
end loop;
dbms_output.put_line( l_cnt || ' rows fetched' );
commit;
end;
/

@trace
exec foo(0)
exec foo
exec foo
exec foo(5000)

the result of that TKPROF was:

select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt
where big_table.id = gtt.id

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 2 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 11716 1.26 1.12 0 58579 0 11712
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 11721 1.26 1.12 0 58581 0 11712

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
11712 NESTED LOOPS (cr=58579 r=0 w=0 time=901210 us)
11712 TABLE ACCESS FULL GTT (cr=11731 r=0 w=0 time=228424 us)
11712 TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=46848 r=0 w=0 time=483374 us)
11712 INDEX UNIQUE SCAN BIG_TABLE_PK (cr=35136 r=0 w=0 time=260253 us)(object id 35091)


Now, that is showing us 1 hard parse (misses in library) and no subsequent reparses. This is what we expect -- plsql parses once per session for us, and the first one was hard since we had just created GTT for the very first time... Note the query plan -- nested loops.

Now, we want to see what happens in other sessions:




disconnect
!tk
@connect big_table/big_table
@trace
exec foo(50000)
disconnect
!tk

That tkprof shows:


select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt
where big_table.id = gtt.id

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 50001 5.41 5.15 639 250006 0 50000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 50003 5.41 5.15 639 250006 0 50000

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
50000 NESTED LOOPS (cr=250006 r=639 w=0 time=4143709 us)
50000 TABLE ACCESS FULL GTT (cr=50006 r=0 w=0 time=1020992 us)
50000 TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=200000 r=639 w=0 time=2339062 us)
50000 INDEX UNIQUE SCAN BIG_TABLE_PK (cr=150000 r=0 w=0 time=1195898 us)(object id 35091)


Misses in library cache = 0, so it too was soft parsed -- if it had been hard parsed (with 50,000 rows), what would have happened?


@connect big_table/big_table
alter system flush shared_pool;
@trace
exec foo(50000)

select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt
where big_table.id = gtt.id

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 2 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 50001 35.13 74.99 78818 96819 0 50000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 50003 35.15 75.01 78818 96821 0 50000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
50000 HASH JOIN (cr=96819 r=78818 w=6720 time=74166346 us)
50000 TABLE ACCESS FULL GTT (cr=82 r=0 w=0 time=68518 us)
5000000 TABLE ACCESS FULL BIG_TABLE (cr=96737 r=72098 w=0 time=37932714 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
50000 NESTED LOOPS
50000 TABLE ACCESS (FULL) OF 'GTT'
5000000 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'BIG_TABLE'
0 INDEX (UNIQUE SCAN) OF 'BIG_TABLE_PK' (UNIQUE)


A very different plan with a hard parse (note how explain plan lies -- the first plan there is from the trc file's STAT records -- the other was "explain=u/p". EP cannot see certain things like bind variable types, bind variable values, results of dynamic sampling and so on)



So, after all of that. here are my responses

1) probably not, it would just tend to slow down the insertions and dynamic sampling would figure it out

2) nah, that would be useful with a MV and query rewrite and such, but not so here.

3) here we have the crux of the problem. Once parsed -- it be parsed. One approach would/could be for you to use a REF CURSOR and open a query based on thresholds (ref cursors are soft parsed or hard parsed upon each open, unlike static SQL but this is probably OK for this query, you are not going to run it a thousand times a second).

That said, maybe you decide "you know, every 1000 rows we would like the opportunity to revisit this particular plan". You could use an approach like this:

create or replace procedure foo( p_rand in number default dbms_random.value( 0, 5000 ))
as
l_cnt number := 0;

cursor just_a_template is select big_table.* from big_table, gtt;
type array is table of just_a_template%rowtype;
l_data array;
/* that just_a_template really is just a template, so we can easily
define a record to fetch into -- an array of records really (i know
you must be on 9ir2, which can do this, since you are using dynamic sampling)
*/

l_cursor sys_refcursor;

begin

insert into gtt
select rownum
from big_table
where rownum <= p_rand;

dbms_output.put_line( sql%rowcount || ' rows in gtt' );


/* here is the trick -- spoon feed the size into the query, this will
not only make the sql text different every 1000 rows -- but it'll
give the optimizer "about how many rows". why did I pick 1000? no
reason, it was an easy number to work with */
open l_cursor
for 'select /*+ cardinality( gtt ' || trunc(sql%rowcount/1000)*1000 || ') */
big_table.*
from big_table, gtt
where big_table.id = gtt.id';


loop
fetch l_cursor bulk collect into l_data limit 100;
l_cnt := l_cnt+l_data.count;
exit when l_cursor%notfound;
end loop;
close l_cursor;
dbms_output.put_line( l_cnt || ' rows fetched' );
commit;
end;
/


If you tkprof that, you'll see it does nested loops for small numbers, hash joins for larger numbers.

Handling variability in GTT contents

February 28, 2004 - 10:15 pm UTC

Reviewer: Dale Ogilvie from New Zealand

Thanks. I expect your cardinality hint approach is the way to go. One question though, given these two tkprof results from your reply:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 50001 5.41 5.15 639 250006 0 50000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 50003 5.41 5.15 639 250006 0 50000

and

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 2 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 50001 35.13 74.99 78818 96819 0 50000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 50003 35.15 75.01 78818 96821 0 50000

Why is it that the second one that did the hard parse is so much more expensive than the first which only did a soft parse? Where is all the extra CPU and elapsed time use coming from? The dynamic_sampling on a fatter GTT? The hash join?


Tom Kyte

Followup  

February 29, 2004 - 9:01 am UTC

it was the effect of having the default optimizer_index_* settings, causing the index range scans to be aborted "too soon" in the mix. For that simple a query -- getting 50,000 well clustered rows from 5,000,000 -- it probably should be an index range scan.

big_table@ORA9IR2> alter session set optimizer_index_caching =0;
Session altered.

big_table@ORA9IR2> alter session set optimizer_index_cost_adj =100;
Session altered.

big_table@ORA9IR2> select /*+ cardinality(gtt 50000) */ big_table.*
2 from big_table, gtt
3 where big_table.id = gtt.id
4 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20705 Card=50000 Bytes=5650000)
1 0 HASH JOIN (Cost=20705 Card=50000 Bytes=5650000)
2 1 TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=50000 Bytes=650000)
3 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=6943 Card=5000000 Bytes=500000000)



big_table@ORA9IR2> alter session set optimizer_index_caching =90;
Session altered.

big_table@ORA9IR2> alter session set optimizer_index_cost_adj =10;
Session altered.

big_table@ORA9IR2> select /*+ cardinality(gtt 50000) */ big_table.*
2 from big_table, gtt
3 where big_table.id = gtt.id
4 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5012 Card=50000 Bytes=5650000)
1 0 NESTED LOOPS (Cost=5012 Card=50000 Bytes=5650000)
2 1 TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=50000 Bytes=650000)
3 1 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=2 Card=1 Bytes=100)
4 3 INDEX (UNIQUE SCAN) OF 'BIG_TABLE_PK' (UNIQUE)


big_table@ORA9IR2>
big_table@ORA9IR2> set autotrace off



dynamic sampling

March 01, 2004 - 6:03 pm UTC

Reviewer: Dave from Downers Grove, IL USA

Tom, you've listed examples using sql_trace/tkprof to grab the dynamically altered plan when dynamic_sampling was involved. Is there any way through V$ views (or other) to see this change?

Tom Kyte

Followup  

March 01, 2004 - 6:50 pm UTC

v$sql_plan

An example

April 09, 2004 - 1:40 pm UTC

Reviewer: Joe Simunac from Durham, NH USA

You asked for an example where a GTT was needed. Here's one that we have to use. We have two accounting tables, one with live data and one with archive data. The tables are identical in structure. We want a report with data from each table combined. The temp table looks as follows:

ID
Transaction Number
Detail Code
Detail Desc
...

I have to pull the matching fields from each table and load them into the GTT (two cursors in the procedure), then query the GTT sorted by Transaction Number, to put everything in order. Is there a better way to do this, or is using a temporary table the only way?

Tom Kyte

Followup  

April 09, 2004 - 4:15 pm UTC

UNION ALL achieves your goal very nicely without a GTT; just union all the two queries together and sort the results:

select *
from ( select .... query1
UNION ALL
select .... query2
)
order by whatever you want.

One more question

April 14, 2004 - 8:09 am UTC

Reviewer: Joe Simunac from Durham, NH USA

Duh! I shouldn't ask questions on a Friday afternoon after my brain has gone home. That was too obvious. I apologize.

Now for the real question: I'm trying to return a result set based on the UNION, but one of the fields needs to be calculated. It's a running balance - the sum of the previous running balance and the current amount. My predecessor used a temporary table to store the info and then read through it again to do the calculation. I can't think of a way to do this with just one SELECT statement.

Should I create an object with my data structure and then a table based on that object? I'll admit that using objects is not my strong point.

What I'd really like is to call a function (passing the user id) that would return a result set with all the information, including any calculated fields - just one for this one, but it could be more in other situations.

Thanks for any help you can give.

Tom Kyte

Followup  

April 14, 2004 - 9:01 am UTC

you mean like this:


ops$tkyte@ORA9IR2> select ename, deptno, sal, sum(sal) over (order by ename)
  2    from (select ename, deptno, sal from t1
  3          union all
  4          select ename, deptno, sal from t2)
  5   order by ename
  6  /
 
ENAME          DEPTNO        SAL SUM(SAL)OVER(ORDERBYENAME)
---------- ---------- ---------- --------------------------
ADAMS              20       1100                       1100
CLARK              10       2450                       3550
FORD               20       3000                       6550
JONES              20       2975                       9525
KING               10       5000                      14525
MILLER             10       1300                      15825
SCOTT              20       3000                      18825
SMITH              20        800                      19625
 
8 rows selected.
 
<b>Analytics Rock
Analytics Roll
Analytics Rule</b>

 

April 14, 2004 - 9:31 am UTC

Reviewer: A reader

is there a way to use alias in analaytic function

SQL> ed
Wrote file afiedt.buf

  1  select ename name, deptno, sal, sum(sal) over (order by name)
  2        from (select ename, deptno, sal from emp
  3              union all
  4              select ename name, deptno, sal from emp)
  5* order by name
SQL> /
select ename name, deptno, sal, sum(sal) over (order by name)
                                                        *
ERROR at line 1:
ORA-00904: "NAME": invalid identifier
 

Tom Kyte

Followup  

April 14, 2004 - 10:50 am UTC

alias it one layer down -- select ename NAME, deptno, sal from emp UNION ALL ...

Perfect

April 14, 2004 - 10:10 am UTC

Reviewer: Joe Simunac from Durham, NH USA

Thanks, Tom! That was exactly what I needed. I'm obviously going to have to learn more about analytics, since they are so useful. Thanks again. Your responses are always helpful!

-Joe

One last question

April 14, 2004 - 11:19 am UTC

Reviewer: Joe Simunac from Durham, NH USA

Thanks for your previous responses on UNION and the analytical functions. I have one last question.

I've set up a record type and a ref cursor based on that record. When I try to fetch into the cursor I'm getting a PLS-00386 type mismatch error. The record is defined using %type for all fields except two (they are concatenated in the query, so I have them defined as VARCHAR2(200), which should easily cover the largest that could be returned. The original fields are VARCHAR2, anyway). The %type is based on the table field the query is selecting from.

Does the UNION change data types in any way? Or am I missing something really obvious? Version 9.2.0.4 of Oracle. Here is the code:

type rbrec is record
(pidm spriden.spriden_pidm%type,
tran tbraccd.tbraccd_tran_number%type,
term tbraccd.tbraccd_term_code%type,
taxt tbbdetc.tbbdetc_taxt_code%type,
detail tbraccd.tbraccd_detail_code%type,
ddesc tbraccd.tbraccd_desc%type,
typei tbbdetc.tbbdetc_type_ind%type,
signed tbraccd.tbraccd_amount%type,
amt tbraccd.tbraccd_amount%type,
debit tbraccd.tbraccd_amount%type,
credit tbraccd.tbraccd_amount%type,
eff tbraccd.tbraccd_effective_date%type,
scd tbraccd.tbraccd_srce_code%type,
sid varchar2(200),
sname varchar2(200),
runbal tbraccd.tbraccd_amount%type);

type rbcur_type is ref cursor return rbrec;

function crerbal (pidmin in number) return rbcur_type is

rbcur rbcur_type;

cursor c_getrb is
select pidm,
tran,
term,
taxt,
detail,
ddesc,
typei,
signed,
amt,
debit,
credit,
eff,
scd,
sid,
sname,
nvl(sum(signed) over (order by pidm, tran),0) runbal
from (select tboaccd_pidm pidm,
tboaccd_tran_number tran,
tboaccd_term_code term,
tbbdetc_taxt_code taxt,
tboaccd_detail_code detail,
tboaccd_desc ddesc,
tbbdetc_type_ind typei,
decode(tbbdetc_type_ind,'C',1,-1)*tboaccd_amount signed,
tboaccd_amount amt,
decode(tbbdetc_type_ind,'C',1,0)*tboaccd_amount debit,
decode(tbbdetc_type_ind,'P',1,0)*tboaccd_amount credit,
tboaccd_effective_date eff,
null scd,
substr(spriden_id,1,3)||'-'||substr(spriden_id,4,2)||'-'||substr(spriden_id,6,4) SID,
spriden_last_name||', '||spriden_first_name||' '||spriden_mi sname
from spriden, tbbdetc, tboaccd
where spriden_pidm = pidmin
and spriden_change_ind is null
and tboaccd_pidm=spriden_pidm
and tbbdetc_detail_code=tboaccd_detail_code
union
select tbraccd_pidm pidm,
tbraccd_tran_number tran,
tbraccd_term_code term,
tbbdetc_taxt_code taxt,
tbraccd_detail_code detail,
tbraccd_desc ddesc,
tbbdetc_type_ind typei,
decode(tbbdetc_type_ind,'C',1,-1)*nvl(tbraccd_amount,0) signed,
tbraccd_amount amt,
decode(tbbdetc_type_ind,'C',1,0)*nvl(tbraccd_amount,0) debit,
decode(tbbdetc_type_ind,'P',1,0)*nvl(tbraccd_amount,0) credit,
tbraccd_effective_date eff,
tbraccd_srce_code scd,
substr(spriden_id,1,3)||'-'||substr(spriden_id,4,2)||'-'||substr(spriden_id,6,4) SID,
spriden_last_name||', '||spriden_first_name||' '||spriden_mi sname
from spriden, tbbdetc, tbraccd
where spriden_pidm = pidmin
and spriden_change_ind is null
and tbraccd_pidm=spriden_pidm
and tbbdetc_detail_code=tbraccd_detail_code
and tbraccd_detail_code != 'PURG')
order by pidm, tran;

begin

open c_getrb;
fetch c_getrb into rbcur;
close c_getrb;

return rbcur;

end crerbal;

Tom Kyte

Followup  

April 14, 2004 - 11:37 am UTC

can you put the example on a diet -- slim it down, provide create table scripts and have a complete reproducible example... as few columns as possible, as little code as you can.


begin
select ... into l_rec from (select * from (query) where rownum = 1);
end;

would be my approach -- just the query, no procedural code really needed.
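On the ref-cursor mechanics themselves, a minimal sketch of the pattern implied here (a ref cursor is opened with OPEN ... FOR; it cannot be FETCHed out of a separate explicit cursor as the posted code tries to do). SYS_REFCURSOR assumes 9i or later; on 8.1.7 you would declare the REF CURSOR type in a package spec instead:

```sql
create or replace function crerbal (pidmin in number)
  return sys_refcursor
is
  rbcur sys_refcursor;
begin
  -- open the ref cursor directly on the (slimmed down) query;
  -- no explicit cursor, no intermediate fetch, no temp table
  open rbcur for
    select t.*,
           nvl(sum(signed) over (order by pidm, tran), 0) runbal
      from ( /* the union of the two detail queries, filtered by pidmin */ ) t
     order by pidm, tran;
  return rbcur;
end;
/
```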



April 14, 2004 - 9:27 pm UTC

Reviewer: A reader

Tom, sorry for not getting it. Can you please clarify?

"alias it one layer down -- select ename NAME, deptno, sal from emp UNION ALL"


Thanks

Tom Kyte

Followup  

April 15, 2004 - 8:21 am UTC

scott@ORA10G> select name, deptno, sal, sum(sal) over (order by name)
2 from (select ename NAME, deptno, sal from emp
3 union all
4 select ename name, deptno, sal from emp)
5 order by name
6 /

NAME DEPTNO SAL
------------------------------ ---------- ----------
SUM(SAL)OVER(ORDERBYNAME)
-------------------------
ADAMS 20 1100

......

can i cache global temporary table?

August 20, 2004 - 12:14 pm UTC

Reviewer: A reader

I am trying to cache a global temporary table, so I did the following:


alter table my_global_temp_table storage( buffer_pool keep);
doesn't work

tried

alter temporary table my_global_temp_table storage( buffer_pool keep);
doesn't work

tried
alter global temporary table my_global_temp_table storage( buffer_pool keep);
doesn't work


Is there any way I can cache that global temp table?

Tom Kyte

Followup  

August 21, 2004 - 11:00 am UTC

the amount cached is a function of your temporary space -- this is "temp stuff"

have you identified a serious performance related issue with regards to gtt's and their doing direct IO? if so, lets see it (the metrics you used to identify this) and we'll work from there.

gtt's are special -- they consume no space until you start using them, they are not traditional segments.

reader

February 24, 2005 - 2:14 pm UTC

Reviewer: A reader

For a global temporary table, which is created
in the temporary tablespace, is there a
v$ view or dba_ view to monitor the amount of
space (blocks) used by the gtt?

Tom Kyte

Followup  

February 24, 2005 - 5:37 pm UTC

v$sort*

(two of them -- V$SORT_USAGE and V$SORT_SEGMENT) show the temp space used by a session.
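A sketch of the kind of query meant here (column names as documented in the 9i/10g database reference; verify against your release):

```sql
-- temp space in use (including global temporary table blocks), per session
select s.sid, s.username,
       u.tablespace, u.contents, u.segtype,
       u.blocks, u.blocks * t.block_size bytes
  from v$session      s,
       v$sort_usage   u,
       dba_tablespaces t
 where u.session_addr    = s.saddr
   and t.tablespace_name = u.tablespace;
```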

Avoiding Temp Tables

March 15, 2005 - 5:32 pm UTC

Reviewer: Vidya from New York, NY

Hi Tom,
My process is something like this.

insert into gtt
select col1,col2....
from bigTable1, bigTable2
Where .....

After this I work on the small set of data in the temp table. The operations I do are

open cursor for
select col1, col2...
from gtt
group by col1, col2

update gtt set col3=val1 where col1=val2.

end loop.

Of all the examples you have given, none avoided a GROUP BY clause.
What can I do to avoid using the gtt? Is record-by-record processing the only way?




Tom Kyte

Followup  

March 15, 2005 - 9:20 pm UTC

insufficient data. No clue where val1 or val2 come from.

is a temp table appropriate here?

April 07, 2005 - 1:27 am UTC

Reviewer: Jeremy from Lansing, MI

you're pretty adamant that a temp table is almost always unnecessary in Oracle.

i'm writing a PL/SQL procedure that processes a large table in increments, and a global temp table seems very appropriate to me... the temp table stores some aggregate info on the big table, which is then used by several subsequent statements. seems much more efficient than having each statement run the same subquery (which does a full table scan to get the same aggregate info). one possible alternative might be using a materialized view and QUERY_REWRITE... (??) but the temp table seems much simpler to me...

what do you think?


begin
--STATEMENT USED TO CREATE TEMP TABLE:
--create global temporary table CTLLDR$TEMP_BUILD_NOTES (
-- ID VARCHAR2(10),
-- CNT NUMBER,
-- LEN NUMBER,
-- FIRST_PIECE VARCHAR2(16),
-- LAST_PIECE VARCHAR2(16)
--) on commit preserve rows;
--create index CTLLDR$TEMP_NOTES_I1 on CTLLDR$TEMP_BUILD_NOTES(cnt,id);

execute immediate 'insert into CTLLDR$TEMP_BUILD_NOTES
select "'||varfield||'" ID, count(*) cnt, sum("LENGTH") len,
min(to_char(nvl(SEQ,99999),''00000'')||ID) first_piece,
max(to_char(nvl(SEQ,99999),''00000'')||ID) last_piece
from "'||vartabledetail||'"
group by "'||varfield||'" ';


-- BIG UPDATE #1
EXECUTE IMMEDIATE 'update "'||vartable||'" t
set "NOTE"=(
select REPLACE(RPAD(min(n."DATA"),min(n."LENGTH")),CHR(141)||CHR(10)) from "'||tabnnote||'" n
where n."'||varfield||'"=t."ID"
having count(*)=1 group by n."'||fldnote||'"
) where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=1)';

-- BIG UPDATE #2
EXECUTE IMMEDIATE 'update "'||vartable||'" t
set "NOTE"=(
...
) where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=2)';

-- UPDATE #3
OPEN refnnote FOR 'select "'||varfield||'","LENGTH","DATA"
from "'||vartabledetail||'"
where "'||varfield||'" in (
select ID from CTLLDR$TEMP_BUILD_NOTES where cnt>2 and len<4000
)
order by "'||varfield||'",to_char(nvl(SEQ,99999),''00000'')||ID';
FETCH refnnote INTO noteid, notelen, notedata;

WHILE refnnote%FOUND
LOOP

... etc (update #4 is the last one; gets records where LEN>4000)

Tom Kyte

Followup  

April 07, 2005 - 9:12 am UTC

correct -- temp tables would be the last option, not the first.

why two updates, why not one?

what version are you using here.



Re: is a temp table appropriate here?

April 07, 2005 - 1:25 pm UTC

Reviewer: Jeremy from Lansing, MI

> why two updates, why not one?

actually i tried it as one update, and had it working...

wouldn't have had a problem doing it either way, but having two statements just seemed to have slightly better readability and didn't really run significantly slower (i think it took maybe less than 5% longer, which to me was worth it for code that's more clear about what it's doing and easier for the next guy to figure out)

basically, this procedure is run just after loading a bunch of data from external tables. the memo fields are stored in pieces in the external tables and this procedure reassembles the pieces into CLOB's. the reason for 4 updates is that over 50% of the notes only have 1 or 2 pieces, and can be built very quickly with the simple update statement(s). The other two updates are for (3) notes with less than 4000 characters - stored inline - and (4) notes with more than 4000 characters, stored in the LOB segment (less than 1% of the notes are more than 4000 chars).

the stored procedure actually gets run more than once, on different tables (which each have the same structure)... hence variables for "table", "tabledetail" and "field"


-- BIG UPDATE #2 (notes w/ 2 pieces)
EXECUTE IMMEDIATE 'update "'||vartable||'" t
set "NOTE"=(
select REPLACE(RPAD(d1."DATA",d1."LENGTH"),CHR(141)||CHR(10)) ||
REPLACE(RPAD(d2."DATA",d2."LENGTH"),CHR(141)||CHR(10))
from "'||vartabledetail||'" d1, "'||vartabledetail||'" d2
where t."ID"=d1."'||varfield||'"
and (d1."'||varfield||'",to_char(nvl(d1.SEQ,99999),''00000'')||d1.ID) in (
select ID, first_piece from CTLLDR$TEMP_BUILD_NOTES where cnt=2
)
and t."ID"=d2."'||varfielde||'"
and (d2."'||varfield||'",to_char(nvl(d2.SEQ,99999),''00000'')||d2.ID) in (
select ID, last_piece from CTLLDR$TEMP_BUILD_NOTES where cnt=2
)
) where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=2)';


Tom Kyte

Followup  

April 07, 2005 - 1:35 pm UTC

comment?

I can give you some nasty looking procedural code; it'll be as clear as it can be, but without a comment for the next guy, it is just nasty procedural code.

I can give you what I believe to be the MOST READABLE code on the planet, and six months later I'll have no clue what it does till I reverse engineer it.

comments fix lots of things!

so, I'd be back to "we should be doing this as a single sql statement, so no, global temporary tables are not needed here"

Re: is a temp table appropriate here?

April 07, 2005 - 1:28 pm UTC

Reviewer: Jeremy from Lansing, MI

using 10g [10.1.0.3.0]

(is there a better way to do this in 10g?)


Tom Kyte

Followup  

April 07, 2005 - 1:37 pm UTC

merge might be very useful

Re: is a temp table appropriate here?

April 07, 2005 - 3:47 pm UTC

Reviewer: Jeremy from Lansing, MI

> so, I'd be back to "we should be doing this as a single sql statement, so no, global temporary tables are not needed here"

the temp table is still used by updates 3 and 4, which are procedural (and i don't think there's an easy way to make them bulk). they use OPEN/FETCH/CLOSE to iterate through several records at a time and build the CLOBs. in the case of update 4, it builds chunks in memory and then WRITEAPPEND's each chunk to the clob. it uses the global temp table to figure out which records have >4000 chars and thus need to be stored out-of-line.


> merge might be very useful

would merge still apply if i'm only doing updates? (the row *always* already exists in the target table, I'm just populating the CLOB column.)

========

I've got another question though. I've been looking closer and I'm not sure the temp table is really saving me any processing after all... and I may just wind up going back to not using it...

seems that every time I use the temp table in the WHERE clause to get the records I want, the DB wants to make sure that each record still exists in the source table and so it does a HASH SEMI back to the base from the temp.

additionally, it's seeing that there'll be high selectivity after the join and so it's doing a full table scan on the base table.

seems to me that if it's going to FTS the base table every time then I might as well just do...

WHERE id IN (SELECT id FROM basetable GROUP BY id HAVING COUNT(*)=2)

...because this will do the same FTS (with an additional sort/group by operation but without the hash join)


sure wish there was some way I could do that FTS/GROUP BY only once for the 3 or 4 update/select statements I need to do on that base table...


hmmm... also, considering this new info, i'm going to go back and see if i might have been wrong about the speed difference between 2 update statements vs 1... i bet you're right and 1 statement will go faster


Tom Kyte

Followup  

April 07, 2005 - 4:40 pm UTC

merge can apply in 9i even if only doing updates (if the table you merge "from" -- the USING table -- only has rows that are in the target table)

and especially so in 10g where you can skip the insert part.
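A sketch of the update-only merge being described (table and column names hypothetical). In 9i the syntax requires both branches, but the insert branch simply never fires when every row in the USING table already exists in the target; in 10g the WHEN NOT MATCHED branch can be omitted outright:

```sql
-- 10g: update-only merge, no insert branch at all
merge into big_table t
using assembled_notes s           -- hypothetical staging source
   on (t.id = s.id)
 when matched then
      update set t.note = s.note;
```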


It is not that the single statement will go faster, I just think less code is better code. comments are good :)

Re: is a temp table appropriate here?

April 08, 2005 - 2:41 pm UTC

Reviewer: Jeremy from Lansing, MI

for anyone who might read this thread...

FYI, I did manage to find a much better way to do this. :)

Got rid of the update statements, moved everything into one single OPEN/FOR-FETCH loop and used BULK COLLECT/LIMIT to process several hundred rows at a time...

Also, used analytic functions to get the aggregate info I needed in the same Full Table Scan that's actually getting the data.

I wish the CBO was smart enough to realize that the WINDOW SORT is putting the rows in the same order as the ORDER BY (and thus avoid the extra [massive] sort which does nothing in this case)...

But anyway, it runs in half the time it did before, and the code is much easier to read (especially with the comments <g>...)

And no, there are no "global temp tables". :)


Tom Kyte

Followup  

April 08, 2005 - 2:46 pm UTC

It does, if the sort is unnecessary -- no sort will be done....


scott@ORA9IR2> select empno, sal, rank() over (partition by job order by sal) rnk
2 from emp
3 order by job, sal;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 WINDOW (SORT)
2 1 TABLE ACCESS (FULL) OF 'EMP'


Re: is a temp table appropriate here?

April 08, 2005 - 6:10 pm UTC

Reviewer: Jeremy from Lansing, MI

>It does, if the sort is unneccesary -- no sort will be done....

yes, you're right... thanks


global temp table

October 23, 2005 - 4:28 am UTC

Reviewer: senthil from Bangalore,India

Hello Tom,

will global temp tables generate more redo?
i have made some temp tables into global temp tables, and all of a sudden more redo has been generated. are GTTs related to the redo logs?

Senthil.

Tom Kyte

Followup  

October 23, 2005 - 4:52 am UTC

global temporary tables generate undo.

undo is protected by redo.

hence, global temporary tables will generate some amount of redo, yes.

global temp table to store session user

October 23, 2005 - 4:08 pm UTC

Reviewer: Wayne from Phoenix, AZ USA

Tom,
Using connection pools in our Java application, we cannot identify who is who directly; therefore, we cannot use that identity directly in the create_by or last_update_by fields (handled by a trigger). I think we can use a global temp table (with the on commit delete rows setting, and other careful controls) to store the LDAP user id (when users sign on to the application).
There are some things our application does control; for example, before it releases a connection back to the connection pool, the transactions are either committed or rolled back.
In our tests it worked quite well -- do you think this is a feasible solution?

Thanks,


Tom Kyte

Followup  

October 24, 2005 - 12:27 am UTC

why not just use an application context? or dbms_session.set_client_id (that will be audited even, that identifier becomes part of the audit trail) instead of a global temporary table.


Just set the context right after you grab a connection, much like you must be inserting into a global temporary table now.
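A sketch of the client-identifier approach (the procedure is DBMS_SESSION.SET_IDENTIFIER; the table, trigger, and identifier value below are hypothetical):

```sql
-- right after grabbing a connection from the pool,
-- tag the session with the real end user:
begin
  dbms_session.set_identifier('ldap_uid_of_end_user');  -- value hypothetical
end;
/

-- the auditing trigger then reads it back:
create or replace trigger some_table_audit    -- names hypothetical
before insert or update on some_table
for each row
begin
  if inserting then
    :new.create_by := sys_context('userenv', 'client_identifier');
  end if;
  :new.last_update_by := sys_context('userenv', 'client_identifier');
end;
/
```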

DBMS_SESSION

October 24, 2005 - 7:57 pm UTC

Reviewer: Wayne from Phoenix, AZ USA

Thanks for your great suggestion, it is very clean. We will look into using that.



Global Temp Tables

October 27, 2005 - 7:55 am UTC

Reviewer: Senthil from Banaglore, India

Tom,

is there any way to prevent global temp tables from logging? the alter statement is not supported here.


Tom Kyte

Followup  

October 27, 2005 - 12:31 pm UTC

conventional path inserts generate UNDO. they have to, you need to be able to rollback, you need to be able to support multi-versioning.

UNDO is always protected by redo.


If you direct path the global temporary table ( insert /*+ APPEND */) you can bypass undo ON THE TABLE - but not on the indexes. Hence you can reduce (marginally typically as it is usually indexes that generate the most undo) the amount of redo, but you cannot eliminate it.

Temporary Table allocation

November 23, 2005 - 3:21 am UTC

Reviewer: brahma prakash from Minneapolis, MN USA

Hi Tom. First, I would like to say that I am a great admirer of this site.

I am unsure about how temporary table allocation is done. I know that they are allocated in the user's default temporary tablespace and that segments are created when data is first inserted.

What I would like to know is:
1) Does this count towards the user's quota on the temp tablespace?
2) Can a user create a temporary table without being given the create table privilege?

If such a quota exists, can a user be given the create table privilege with quota=0 on his default tablespace, so that he can only create temporary tables?
Is this possible.

Please give your comments on this. Thank you.



Tom Kyte

Followup  

November 23, 2005 - 9:47 am UTC

1) there is no such concept as a user quota in temp.  the temp segment is owned by the system, by Oracle -- NOT by the user using it.  They only borrow it for a short period of time.

2) No, they need create table:

ops$tkyte@ORA10GR2> create user a identified by a;

User created.

ops$tkyte@ORA10GR2> grant create session to a;

Grant succeeded.

ops$tkyte@ORA10GR2> connect a/a
Connected.
a@ORA10GR2> create global temporary table gtt ( x int );
create global temporary table gtt ( x int )
*
ERROR at line 1:
ORA-01031: insufficient privileges
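Presumably the fix follows directly (a sketch; note that the GTT definition itself allocates no permanent segment, so no tablespace quota comes into play):

```sql
-- as a privileged user:
grant create table to a;

-- reconnected as A, the same statement now succeeds,
-- even with quota 0 on every permanent tablespace:
create global temporary table gtt ( x int );
```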



 

what if single user fills up the temp space

November 24, 2005 - 8:13 pm UTC

Reviewer: Brahma

So, what if a single user fills up the temporary space?
How can the others use it?

Tom Kyte

Followup  

November 25, 2005 - 9:32 am UTC

by definition: they cannot use it if all the space you have permitted to be used is in USE.

GTT and redo log

December 28, 2005 - 2:13 pm UTC

Reviewer: Charlie Zhu from Victoria, BC Canada

Why the INSERT into GTT generated so many redo logs
without the APPEND hints?

CREATE GLOBAL TEMPORARY TABLE SCOTT.VC_GT
(
LISTINGSID NUMBER(20) NOT NULL,
VENDCATSID NUMBER
)
ON COMMIT PRESERVE ROWS;

exec runStats_pkg.rs_start;
insert /* append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID from scott.lstvendcat group by listingsid ;

exec runStats_pkg.rs_middle;
insert /*+ append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID from scott.lstvendcat group by listingsid ;

exec runStats_pkg.rs_stop;

scott@lab> exec runStats_pkg.rs_stop;
Run1 ran in 15 hsecs
Run2 ran in 68 hsecs
run 1 ran in 22.06% of the time

Name Run1 Run2 Diff
...
STAT...redo entries 253 19 -234
STAT...consistent gets 418 162 -256
STAT...consistent gets from ca 418 162 -256
STAT...db block gets 568 170 -398
STAT...consistent changes 525 35 -490
STAT...db block gets from cach 568 39 -529
STAT...session logical reads 986 332 -654
STAT...db block changes 800 53 -747
LATCH.cache buffers chains 3,183 481 -2,702
STAT...undo change vector size 79,196 2,900 -76,296
STAT...redo size 97,816 4,280 -93,536

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
3,694 834 -2,860 442.93%

PL/SQL procedure successfully completed.


Tom Kyte

Followup  

December 28, 2005 - 5:59 pm UTC

(I would hardly term 92K of redo "so many redo logs", but...)


the conventional path insert generates UNDO, UNDO is always protected by REDO.

Here is an excerpt from Expert Oracle: Database Architecture regarding this:

<quote>

Temporary Tables and Redo/Undo

Temporary tables are still considered a relatively new feature of Oracle, having been introduced only in Oracle8i version 8.1.5. As such, there is some confusion surrounding them, in particular in the area of logging. In Chapter 10, we will cover how and why you might use temporary tables. In this section, we will explore only the question “How do temporary tables work with respect to logging of changes?”

Temporary tables generate no redo for their blocks. Therefore, an operation on a temporary table is not “recoverable.” When you modify a block in a temporary table, no record of this change will be made in the redo log files. However, temporary tables do generate undo, and the undo is logged. Hence, temporary tables will generate some redo. At first glance, it doesn’t seem to make total sense: why would they need to generate undo? This is because you can roll back to a SAVEPOINT within a transaction. You might erase the last 50 INSERTs into a temporary table, leaving the first 50. Temporary tables can have constraints and everything else a normal table can have. They might fail a statement on the five-hundredth row of a 500-row INSERT, necessitating a rollback of that statement. Since temporary tables behave in general just like “normal” tables, temporary tables must generate undo. Since undo data must be logged, temporary tables will generate some redo log for the undo they generate.

This is not nearly as ominous as it seems. The primary SQL statements used against temporary tables are INSERTs and SELECTs. Fortunately, INSERTs generate very little undo (you need to restore the block to “nothing,” and it doesn’t take very much room to store “nothing”), and SELECTs generate no undo. Hence, if you use temporary tables for INSERTs and SELECTs exclusively, this section means nothing to you. It is only if you UPDATE or DELETE that you might be concerned about this.

I set up a small test to demonstrate the amount of redo generated while working with temporary tables, an indication therefore of the amount of undo generated for temporary tables, since only the undo is logged for them. To demonstrate this, I will take identically configured “permanent” and “temporary” tables, and then perform the same operations on each, measuring the amount of redo generated each time. The tables I used were simply as follows:

ops$tkyte@ORA10G> create table perm
  2  ( x char(2000) ,
  3    y char(2000) ,
  4    z char(2000)  )
  5  /
Table created.
 
ops$tkyte@ORA10G> create global temporary table temp
  2  ( x char(2000) ,
  3    y char(2000) ,
  4    z char(2000)  )
  5  on commit preserve rows
  6  /
Table created.

I set up a small stored procedure to allow me to perform arbitrary SQL and report the amount of redo generated by that SQL. I will use this routine to perform INSERTs, UPDATEs, and DELETEs against both the temporary and permanent tables:

ops$tkyte@ORA10G> create or replace procedure do_sql( p_sql in varchar2 )
  2  as
  3      l_start_redo    number;
  4      l_redo            number;
  5  begin
  6      select v$mystat.value
  7        into l_start_redo
  8        from v$mystat, v$statname
  9       where v$mystat.statistic# = v$statname.statistic#
 10         and v$statname.name = 'redo size';
 11
 12      execute immediate p_sql;
 13      commit;
 14
 15      select v$mystat.value-l_start_redo
 16        into l_redo
 17        from v$mystat, v$statname
 18       where v$mystat.statistic# = v$statname.statistic#
 19         and v$statname.name = 'redo size';
 20
 21      dbms_output.put_line
 22      ( to_char(l_redo,'9,999,999') ||' bytes of redo generated for "' ||
 23        substr( replace( p_sql, chr(10), ' '), 1, 25 ) || '"...' );
 24  end;
 25  /
Procedure created.

Then, I ran equivalent INSERTs, UPDATEs, and DELETEs against the PERM and TEMP tables:

ops$tkyte@ORA10G> set serveroutput on format wrapped
ops$tkyte@ORA10G> begin
  2      do_sql( 'insert into perm
  3               select 1,1,1
  4                 from all_objects
  5                where rownum <= 500' );
  6
  7      do_sql( 'insert into temp
  8               select 1,1,1
  9                 from all_objects
 10                where rownum <= 500' );
 11          dbms_output.new_line;
 12
 13      do_sql( 'update perm set x = 2' );
 14      do_sql( 'update temp set x = 2' );
 15          dbms_output.new_line;
 16
 17      do_sql( 'delete from perm' );
 18      do_sql( 'delete from temp' );
 19  end;
 20  /
 3,297,752 bytes of redo generated for "insert into perm         "...
    66,488 bytes of redo generated for "insert into temp         "...
 
 2,182,200 bytes of redo generated for "update perm set x = 2"...
 1,100,252 bytes of redo generated for "update temp set x = 2"...
 
 3,218,804 bytes of redo generated for "delete from perm"...
 3,212,084 bytes of redo generated for "delete from temp"...
 
PL/SQL procedure successfully completed.
As you can see,
    *    The INSERT into the “real” table generated a lot of redo. Almost no redo was generated for the temporary table. This makes sense—there is very little undo data generated for INSERTs and only undo data is logged for temporary tables.
    *    The UPDATE of the real table generated about twice the amount of redo as the temporary table. Again, this makes sense. About half of that UPDATE, the “before image,” had to be saved. The “after image” (redo) for the temporary table did not have to be saved.
    *    The DELETEs took about the same amount of redo space. This makes sense, as the undo for a DELETE is big, but the redo for the modified blocks is very small. Hence, a DELETE against a temporary table takes place very much in the same fashion as a DELETE against a permanent table.
Note    If you see the temporary table generating more redo than the permanent table with the INSERT statement, you are observing a product issue in the database that is fixed in at least Oracle 9.2.0.6 and 10.1.0.4 patch releases (the current shipping releases as of this writing).
Therefore, the following generalizations may be made regarding DML activity on temporary tables:
    *    An INSERT will generate little to no undo/redo activity.
    *    A DELETE will generate the same amount of redo as a normal table.
    *    An UPDATE of a temporary table will generate about half the redo of an UPDATE of a normal table.

There are notable exceptions to the last statement. For example, if I UPDATE a column that is entirely NULL with 2,000 bytes of data, there will be very little undo data generated. This UPDATE will behave like the INSERT. On the other hand, if I UPDATE a column with 2,000 bytes of data to be NULL, it will behave like the DELETE as far as redo generation is concerned. On average, you can expect an UPDATE against a temporary table to produce about 50 percent of the undo/redo you would experience with a real table.

In general, common sense prevails on the amount of redo created. If the operation you perform causes undo data to be created, then determine how easy or hard it will be to reverse (undo) the effect of your operation. If you INSERT 2,000 bytes, the reverse of this is easy. You simply go back to no bytes. If you DELETE 2,000 bytes, the reverse is INSERTing 2,000 bytes. In this case, the redo is substantial.

Armed with this knowledge, you will avoid deleting from temporary tables. You can use TRUNCATE, bearing in mind, of course, that TRUNCATE is DDL that will commit your transaction, and in Oracle9i and before invalidate your cursors, or just let the temporary tables empty themselves automatically after a COMMIT or when your session terminated. All of these methods generate no undo and, therefore, no redo. You will try to avoid updating a temporary table unless you really have to for some reason. You will use temporary tables mostly as something to be INSERTed into and SELECTed from. In this fashion, you’ll make optimum use of their unique ability to not generate redo.
</quote> 

GTT, heap table and redo log

December 30, 2005 - 2:43 pm UTC

Reviewer: Charlie Zhu from Victoria, BC Canada

Hi Tom,

I read this part twice; that's why I wondered about the big redo generated for the temp table.

As you did in the trigger-and-redo-log part, I tested all the cases yesterday and now know the details.

Here is the number:

GTT vs Heap table,


1) normal INSERT: the temp table wins, 92K vs. 693K of redo.

STAT...redo entries 255 920 665
STAT...redo size 92,820 693,268 600,448

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
4,807 8,882 4,075 54.12%



2) Force logging, DIRECT PATH INSERT (load as select)
* We use a physical standby db, so we have to enable force logging
on the primary db.
+APPEND and force logging

STAT...redo entries 18 149 131
STAT...redo size 3,872 545,216 541,344

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
693 1,076 383 64.41%

3) No force logging, direct path INSERT
4K vs. 3K, about the same redo

exec runStats_pkg.rs_start;
insert /*+ append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID from scott.lstvendcat group by
listingsid ;

exec runStats_pkg.rs_middle;
insert /*+ append */ into scott.vc_stg nologging
select listingsid, Min(VENDCATSID) VENDCATSID from scott.lstvendcat group by
listingsid ;

exec runStats_pkg.rs_stop;

STAT...redo entries 17 162 145
STAT...redo size 3,820 27,328 23,508
STAT...session uga memory max 199,968 0 -199,968
STAT...session pga memory max 786,432 0 -786,432

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
965 4,277 3,312 22.56%

PL/SQL procedure successfully completed.


Typo, paste the redo again

December 30, 2005 - 3:52 pm UTC

Reviewer: Charlie Zhu from Victoria, BC Canada

For (3), No force logging, +APPEND direct insert

GTT vs. heap table:

27426 rows inserted.

scott@lab> exec runStats_pkg.rs_stop;
Run1 ran in 14 hsecs
Run2 ran in 67 hsecs
run 1 ran in 20.9% of the time

Name Run1 Run2 Diff
STAT...redo entries 17 27 10
STAT...undo change vector size 2,468 2,496 28
STAT...redo size 3,748 4,272 524

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
10,658 10,674 16 99.85%


temp tables for (inefficient) shaped recordset

February 23, 2006 - 2:04 pm UTC

Reviewer: Geoff Keenan from Milford, CT

great site.

i generate a shaped recordset (about 10 child levels deep) from a 10 million record table. (The much more efficient 'connect by' SQL does not give me drilldown functionality when displaying the data in a hierarchical grid control.) it's real slow.

i wondered if you think a temp table, populated by an insert query that uses the 'connect by' syntax to reduce the number of records from 10 million down to only the records needed for the shaped recordset (~1 million or less), is an appropriate use of temp tables. thanks.

Tom Kyte

Followup  

February 23, 2006 - 7:12 pm UTC

why doesn't connect by give you "drill down"? I missed that part, you just start with the right bit?

Where is it stored?

July 20, 2006 - 5:05 am UTC

Reviewer: A reader from england

Where is the session data for global temporary table stored? Temp tablespace?

Thanks


Tom Kyte

Followup  

July 22, 2006 - 4:38 pm UTC

that is where it can be "swapped" to. It is buffered as well in the buffer cache.

Where is it stored?

July 21, 2006 - 6:51 am UTC

Reviewer: A reader from england

Could you tell us where is the data for global temporary table stored?

Thanks


Tom Kyte

Followup  

July 22, 2006 - 7:23 pm UTC

in temp and/or buffer cache

Global Temporary Table

August 01, 2006 - 11:30 am UTC

Reviewer: CT VELU from Romeo MI

Hi Tom
I am in the process of tuning an application. I have a reporting table of 80 columns with 55 million records. There are two indexes on the table, and one before insert-or-update trigger; the trigger calls a PL/SQL function, no SQL involved.
Every day the process goes like this inside the application:
1. Delete 2 million records (15 minutes).
2. Insert 2 million records (15 to twenty minutes).
3. Procedures 1 to 20 fetch these 2 million records in different queries and process them; sometimes we do updates.
4. There are no intermediate commits involved.
I tuned some of the SQL statements as best I could and was able to bring it down from 1 hour 45 minutes to 1 hour 15 minutes. Looking at the SQL statements and the tkprof trace, I found a lot of time is spent reading these two million records every time the table is hit (of course it uses the index every time).

MY Aim is to speed up this process so that I can be well within 1 hour.

I was thinking:
1. Create a global temp table GTT.
2. Insert the 2 million records into the global temp table GTT.
3. Process these two million records in the procedures (here I am expecting the time reduction).
4. Delete the 2 million records from the permanent report table.
5. Insert the 2 million records from GTT into the permanent table. (I can't do a merge because of a business practice in our shop.)

I have read in your book and in the documentation that a temp table lives on disk, not in memory, so it has to do physical reads. In one of your books you said that a physical read from temp space is not going to be included in logical I/O (what does that mean? won't I have the block in the buffer for further processing?).

I have seen your comments saying that global temporary tables are almost never needed, but I did see in this post that you say we can use one as an interface table.
Please do clarify whether I can use one here to speed up the process.

Thank you very much for your time
CT


P.S. With a bit of knowledge, I think it is going to add the overhead of inserting 2 million records into the temp table, but that should not be a killer.

Tom Kyte

Followup  

August 01, 2006 - 7:05 pm UTC

temp tables are cached in the buffer cache.

it can have physical reads, but truth be told - I'd rather full scan a 2million row table than do 2 million indexed reads!

this might be one time where I'd say "global temporary table, makes sense..."

This is a lot like my "interface table" example. Very similar indeed to my "palm sync" example.


don't know why a BUSINESS PRACTICE would prevent a TECHNICAL SOLUTION that arrives at the same answer from being used?????? This I am very curious about - can you explain?

Business Rule

August 01, 2006 - 10:02 pm UTC

Reviewer: CT Velu from MI USA

Hi Tom
Thank you very much for your response. I thought you were busy. I did test a few procedures and was able to bring down the time; I have yet to implement it at full scale and will post the results here.
It goes like this:
1. We delete 1400 header records (header numbers come from a sequence). This deletes the corresponding 2 million records in the report table in question (which has a detail number as a primary key, also from a sequence).
2. Insert 1400 or more header records depending on the sales detail (I guess we insert only for customers who have sales, not for everyone), so today we may have 100 records and tomorrow we may have 110.
3. Now insert around 2 million records into the detail table (the report table in question) from the sales table and process.
My belief is that a row in the detail table which qualifies under the condition today may not qualify tomorrow for that period (in that case we can't do a merge with update and insert, because the unwanted row would still be in the table and would yield wrong results down the line). So every day we want these 2 million rows fresh.
Actually, my original thought was to implement a materialized view with analytic functions, as 70 columns pertain to monthly sales, quarterly sales, yearly sales, etc. The reason I thought of this is that we run this procedure 3 times for 3 different quarters (3 x 1 hour 45 minutes), but I dropped the idea because we needed fresh rows.

If you have any different thoughts on how I can do this, I am willing to listen.

Thanks again for your response.

CT


Tom Kyte

Followup  

August 02, 2006 - 11:18 am UTC

ok, so records are deleted as well as updated and inserted.

10g can do that (the merge can

a) insert
b) update
c) upon updating - look at the record and say "now delete it")
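For illustration (with made-up table and column names), the 10g MERGE form Tom describes looks like this; the optional DELETE clause of WHEN MATCHED evaluates its condition against the row as updated:

```sql
merge into report_detail d
using (select detail_no, amount, qualifies
         from sales_source) s
   on (d.detail_no = s.detail_no)
 when matched then
   update set d.amount = s.amount
   delete where (s.qualifies = 'N')   -- c) drop rows that no longer qualify
 when not matched then
   insert (detail_no, amount)
   values (s.detail_no, s.amount);
```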



temp table and join

August 09, 2006 - 5:14 pm UTC

Reviewer: Anil from Newyork

Hi tom

The following two select statements return the same number of rows, but the query using the temp table is much faster than the simple query (please look at the COST).
Could you please throw some light on this?
The temp table contains only 1 row:

insert into SPE_INTR_SEARCH_RESULT_GLBTMP values(1,98,'x');

SQL>  SELECT
  2     1
  3    ,spe.SPR_SEG_PROFILE_SYS_ID
  4    ,'X'
  5   FROM
  6     segregation_profile spe
  7    ,SPE_INTR_SEARCH_RESULT_GLBTMP g
  8   WHERE
  9    1=1
 10    AND spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID
 11    AND g.GLBTMP_DATASET_NO = 1;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=15 Card=1 Bytes=36)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'SEGREGATION_PROFILE' (Co
          st=13 Card=1110 Bytes=11100)

   2    1     NESTED LOOPS (Cost=15 Card=1 Bytes=36)
   3    2       TABLE ACCESS (FULL) OF 'SPE_INTR_SEARCH_RESULT_GLBTMP'
           (Cost=2 Card=1 Bytes=26)

   4    2       INDEX (RANGE SCAN) OF 'XIF1SEGREGATION_PROFILE' (NON-U
          NIQUE) (Cost=2 Card=1110)




SQL>  select spr_seg_profile_sys_id from segregation_profile where brwr_cus_sys_id = 98;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=1110 Bytes=1
          1100)

   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'SEGREGATION_PROFILE' (Co
          st=24 Card=1110 Bytes=11100)

   2    1     INDEX (RANGE SCAN) OF 'XIF1SEGREGATION_PROFILE' (NON-UNI
          QUE) (Cost=4 Card=1110)

regards,
ANIL

 

Tom Kyte

Followup  

August 09, 2006 - 7:46 pm UTC

look at the estimated cardinality - that'll explain the cost

but in any case - one would want a tkprof report to say anything about relative performance.
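A common cause of a bad cardinality estimate here is that a GTT typically carries no optimizer statistics, so the optimizer falls back on defaults. Two standard remedies, as a sketch (the sampling level and the numbers fed to DBMS_STATS are illustrative, not taken from this system):

```sql
-- 1) Let the optimizer sample the GTT at hard-parse time:
select /*+ dynamic_sampling(g 2) */ spe.spr_seg_profile_sys_id
  from segregation_profile spe, spe_intr_search_result_glbtmp g
 where spe.brwr_cus_sys_id = g.spr_seg_profile_sys_id
   and g.glbtmp_dataset_no = 1;

-- 2) Or set representative statistics on the GTT manually:
begin
  dbms_stats.set_table_stats(
    ownname => user,
    tabname => 'SPE_INTR_SEARCH_RESULT_GLBTMP',
    numrows => 1,      -- typical row count for this table at runtime
    numblks => 1 );
end;
/
```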

August 11, 2006 - 2:21 pm UTC

Reviewer: Anil from Newyork

Hi tom,
Here is the tkprof report for the same


TKPROF: Release 9.2.0.1.0 - Production on Thu Aug 10 11:08:20 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: bdsd2_ora_1323152_seeker.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

alter session set events '10046 trace name context forever,
level 12'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.01 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 48.88 48.88
********************************************************************************

insert into SPE_INTR_SEARCH_RESULT_GLBTMP
values
(1,98,'x')


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.09 1.35 21 49 0 0
Execute 1 0.00 0.08 3 1 2 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.09 1.44 24 50 2 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.01 0.02
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.01 0.01
********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.18 0 0 0 0
Execute 1 0.03 0.23 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.03 0.41 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 18.27 18.27
********************************************************************************

select spr_seg_profile_sys_id
from
segregation_profile where brwr_cus_sys_id = 98


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 281 0.02 0.14 60 638 0 4189
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 283 0.02 0.15 60 638 0 4189

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
4189 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
4189 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 281 0.00 0.00
db file sequential read 60 0.01 0.07
SQL*Net message from client 281 73.75 87.02
********************************************************************************

SELECT
1
,spe.SPR_SEG_PROFILE_SYS_ID
,'X'
FROM
segregation_profile spe
,SPE_INTR_SEARCH_RESULT_GLBTMP g
WHERE
1=1
AND spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID
AND g.GLBTMP_DATASET_NO = 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 281 0.22 0.26 82 7470 0 4189
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 283 0.23 0.27 82 7470 0 4189

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
4189 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
11004 NESTED LOOPS
6814 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
4189 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 281 0.00 0.00
db file sequential read 62 0.01 0.01
db file scattered read 3 0.02 0.02
SQL*Net message from client 281 112.56 124.16
********************************************************************************

SELECT DECODE('A','A','1','2')
FROM
DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.11 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 2 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.11 2 3 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 2 0.00 0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.11 1.66 21 49 0 0
Execute 6 0.03 0.33 3 1 2 2
Fetch 563 0.24 0.41 144 8111 0 8379
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 574 0.38 2.41 168 8161 2 8381

Misses in library cache during parse: 5
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 566 0.00 0.00
SQL*Net message from client 565 112.56 278.36
db file sequential read 127 0.01 0.11
db file scattered read 3 0.02 0.02


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 11 0.01 0.53 0 0 0 0
Execute 17 0.06 0.66 0 0 0 0
Fetch 40 0.02 0.12 24 52 0 29
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 68 0.09 1.31 24 52 0 29

Misses in library cache during parse: 11

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 24 0.01 0.10

6 user SQL statements in session.
11 internal SQL statements in session.
17 SQL statements in session.
********************************************************************************
Trace file: bdsd2_ora_1323152_seeker.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
6 user SQL statements in trace file.
11 internal SQL statements in trace file.
17 SQL statements in trace file.
17 unique SQL statements in trace file.
2140 lines in trace file.



regards,
ANIL



Tom Kyte

Followup  

August 11, 2006 - 2:55 pm UTC

6814 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP

how'd we get that many rows from a 1 row table?

August 14, 2006 - 2:09 pm UTC

Reviewer: A reader

Hi tom,

here's the recent tkprof report. I don't know how it was showing that many records before.


TKPROF: Release 9.2.0.1.0 - Production on Mon Aug 14 13:59:17 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: bdsd2_ora_1028258_seeker14aug.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

alter session set events '10046 trace name context forever,
level 12'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 3.18 3.18
********************************************************************************

alter session set tracefile_identifier = 'seeker14aug'


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.06 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.07 0 0 0 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 35.72 43.95
********************************************************************************

rollback


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.06 0 0 0 0
Execute 1 0.00 0.07 3 2 52 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.13 3 2 52 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.02 0.02
log file sync 1 0.02 0.02
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 2.98 2.98
********************************************************************************

select count(*)
from
SPE_INTR_SEARCH_RESULT_GLBTMP


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 0 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
0 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 7.12 7.13
********************************************************************************

alter session set events '10046 trace name context forever,
level 12'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 3.44 3.44
********************************************************************************

insert into SPE_INTR_SEARCH_RESULT_GLBTMP
values
(1,98,'x')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 8 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 8 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 22.54 22.54
********************************************************************************

SELECT
1
,spe.SPR_SEG_PROFILE_SYS_ID
,'X'
FROM
segregation_profile spe
,SPE_INTR_SEARCH_RESULT_GLBTMP g
WHERE
1=1
AND spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID
AND g.GLBTMP_DATASET_NO = 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.05 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 281 0.06 0.11 76 631 0 4190
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 283 0.06 0.17 76 631 0 4190

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
4190 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
4192 NESTED LOOPS
1 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
4190 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 281 0.00 0.00
db file sequential read 76 0.00 0.01
SQL*Net message from client 281 23.66 36.22
********************************************************************************

select spr_seg_profile_sys_id
from
segregation_profile where brwr_cus_sys_id = 98

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 281 0.06 0.05 0 628 0 4190
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 283 0.08 0.14 0 628 0 4190

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
4190 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
4190 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 281 0.00 0.00
SQL*Net message from client 281 0.61 16.57
********************************************************************************

SELECT DECODE('A','A','1','2')
FROM
DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 3 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 41

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL OBJ#(222)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 1143.24 1143.24



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 9 0.02 0.27 0 0 0 0
Execute 10 0.00 0.07 3 3 60 1
Fetch 565 0.12 0.17 76 1262 0 8382
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 584 0.14 0.52 79 1265 60 8383

Misses in library cache during parse: 7
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 571 0.00 0.00
SQL*Net message from client 571 1143.24 1279.30
db file sequential read 79 0.02 0.04
log file sync 2 0.02 0.02


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.00 0 5 2 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.01 0.00 0 5 2 1

Misses in library cache during parse: 0

10 user SQL statements in session.
1 internal SQL statements in session.
11 SQL statements in session.
********************************************************************************
Trace file: bdsd2_ora_1028258_seeker14aug.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
10 user SQL statements in trace file.
1 internal SQL statements in trace file.
11 SQL statements in trace file.
10 unique SQL statements in trace file.
1955 lines in trace file.


regards,
ANIL


Tom Kyte

Followup  

August 14, 2006 - 3:03 pm UTC

so, tkprof shows no difference really now.

before, you had made a mistake - that table did not have one row. Now, it does.

August 14, 2006 - 4:00 pm UTC

Reviewer: Anil from Newyork

hi tom,

Please let us know why there is no difference (may be bcz there is not a considerable number of rows in the temp table? or ...). So which approach is better?
Though we are seeing the difference in COST.
So does it mean we can use either approach, i.e., the single-query approach or the global temporary table (which will hold an intermediate data set)?
Please explain with an example.

Thanks a lot for the quick turnaround.

regards,
Anil


Tom Kyte

Followup  

August 14, 2006 - 4:12 pm UTC

bcz?


there is no difference because the plans are very similar and the amount of work to get that single row was negligible.


right now, today you have:

Rows Row Source Operation
------- ---------------------------------------------------
4190 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
4192 NESTED LOOPS
1 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
4190 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

which differs from the "hard coded" plan in that we are full scanning that 1 row table.

Before you had:

Rows Row Source Operation
------- ---------------------------------------------------
4189 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
11004 NESTED LOOPS
6814 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
4189 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

A RATHER LARGE 6,814 rows - i.e., not 1 as stated.

global temporary table usage

August 30, 2006 - 4:23 pm UTC

Reviewer: Qing Yu from Herndon, VA, USA

Tom, what is the best way to capture global temporary table usage for a session? We are considering polling V$TEMPSEG_USAGE. Is there a better way?
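For reference, if polling turns out to be acceptable, a query against V$TEMPSEG_USAGE might look like this (a sketch; GTT segments report SEGTYPE = 'DATA', as opposed to SORT or HASH work areas):

```sql
select s.sid, s.username, u.tablespace, u.segtype, u.blocks
  from v$tempseg_usage u, v$session s
 where u.session_addr = s.saddr
   and u.segtype = 'DATA';   -- temporary-table (not sort) segments
```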

Tom Kyte

Followup  

August 30, 2006 - 5:57 pm UTC

define "capture global temporary table usage" for me.

Updating a GTT

August 31, 2006 - 3:51 am UTC

Reviewer: Paul McKerley from Bristow, VA USA

Hi Tom:

I have a question about updates to GTTs. Expert Oracle Database Architecture has a passage (quoted on the page above) saying that an update of a null column in a GTT will produce very little redo. But this does not match our experience. We've found that updating null columns in GTTs is far more expensive than inserting into a new GTT.

Here's a test package that compares updating and merging into a GTT vs. selecting into a new one. You need two GTTs:

create global temporary table test_gtt_upd_1
( owner         varchar2(30)  not null,
  table_name    varchar2(128) not null,
  last_analyzed date          null
) on commit delete rows;

create global temporary table test_gtt_upd_2
( owner      varchar2(30)  not null,
  table_name varchar2(128) not null
) on commit delete rows;

-- The first test will create a GTT based on dba_tables.
-- It will have null in the last_analyzed
-- column that will be updated from dba_tables.
--
-- The second test will accomplish the same thing using
-- a second GTT with a copy of part of dba_tables. This
-- will be joined against dba_tables to populate the
-- original table. Statistics will be gathered and
-- reported.
--
-- The third test tries the update with a merge statement.

create or replace package pkg_test_gtt_update as
  procedure test_update;
  procedure test_insert;
  procedure test_merge;
  procedure report_metrics;
  procedure run_test;
  procedure start_stats;
  procedure stop_stats(idx in varchar2);
end;
/
show errors;

create or replace package body pkg_test_gtt_update as
  type metric_at_type is table of int index by varchar2(10);
  redo_at     metric_at_type;
  gets_at     metric_at_type;
  before_redo int;
  before_gets int;

  procedure run_test as
  begin
    test_update;
    test_insert;
    test_merge;
    report_metrics;
  end;

  procedure start_stats as
  begin
    select value
      into before_gets
      from v$mystat join v$statname using (statistic#)
     where name = 'consistent gets';

    select value
      into before_redo
      from v$mystat join v$statname using (statistic#)
     where name = 'redo size';
  end;

  procedure stop_stats( idx in varchar2 ) as
  begin
    select value - before_gets
      into gets_at(idx)
      from v$mystat join v$statname using (statistic#)
     where name = 'consistent gets';

    select value - before_redo
      into redo_at(idx)
      from v$mystat join v$statname using (statistic#)
     where name = 'redo size';
  end;

  procedure test_update as
  begin
    start_stats;

    insert into test_gtt_upd_1
    select owner, table_name, null from dba_tables;

    update test_gtt_upd_1 tt
       set last_analyzed = (select last_analyzed
                              from dba_tables dt
                             where dt.owner = tt.owner
                               and dt.table_name = tt.table_name);

    stop_stats('update');
    commit;
  end;

  procedure test_insert as
  begin
    start_stats;

    insert into test_gtt_upd_2
    select owner, table_name
      from dba_tables;

    insert into test_gtt_upd_1 tt1
    select tt2.owner, tt2.table_name, dt.last_analyzed
      from test_gtt_upd_2 tt2 join dba_tables dt
        on tt2.owner = dt.owner
       and tt2.table_name = dt.table_name;

    stop_stats('insert');
    commit;
  end;

  procedure test_merge as
  begin
    start_stats;

    insert into test_gtt_upd_1
    select owner, table_name, null
      from dba_tables;

    merge into test_gtt_upd_1 gtt1
    using ( select owner,
                   table_name,
                   last_analyzed
              from dba_tables ) dt
       on ( gtt1.owner = dt.owner
        and gtt1.table_name = dt.table_name )
     when matched then update
      set gtt1.last_analyzed = dt.last_analyzed;

    stop_stats('merge');
    commit;
  end;

  procedure report_metrics as
  begin
    dbms_output.put_line('Test              redo         gets' );
    dbms_output.put_line('-----------------------------------------------------' );
    dbms_output.put( 'Updating  ' );
    dbms_output.put(to_char(redo_at('update'),'99999999999'));
    dbms_output.put(' ');
    dbms_output.put_line(to_char(gets_at('update'),'99999999999'));
    dbms_output.put( 'Inserting ' );
    dbms_output.put(to_char(redo_at('insert'),'99999999999'));
    dbms_output.put(' ');
    dbms_output.put_line(to_char(gets_at('insert'),'99999999999'));
    dbms_output.put( 'Merging   ' );
    dbms_output.put(to_char(redo_at('merge'),'99999999999'));
    dbms_output.put(' ');
    dbms_output.put_line(to_char(gets_at('merge'),'99999999999'));
  end;

end;
/
show errors;


Here are the results that I get consistently on my
10gR2 system:

paul@SURPRISE> exec pkg_test_gtt_update.run_test;
Test              redo         gets
-----------------------------------------------------
Updating        311144        29648
Inserting        12044        24136
Merging         311040        22913


Any idea what's going on?

Thanks,

Paul


Tom Kyte

Followup  

August 31, 2006 - 9:22 am UTC

I see I was ambiguous - I was comparing global temporary tables to real tables there and it wasn't really clear.


ops$tkyte%ORA9IR2> create global temporary table gtt ( x varchar2(30) ) on commit delete rows;

Table created.

ops$tkyte%ORA9IR2> create table t ( x varchar2(30) );

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> insert into gtt select null from all_objects;

30722 rows created.

ops$tkyte%ORA9IR2> insert into t select null from all_objects;

30722 rows created.

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                  442452

ops$tkyte%ORA9IR2> update gtt set x = rpad('x',30,'x');

30722 rows updated.

ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                 8981464        8,539,012

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                 8981464

ops$tkyte%ORA9IR2> update t set x = rpad('x',30,'x');

30722 rows updated.

ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                24478372       15,496,908

I'll try to "disambiguate that" in subsequent releases. 
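The @mystat and @mystat2 scripts used above are not shown in this thread; a plausible reconstruction (an assumption - they simply snapshot a v$mystat value and then show the difference) is:

```sql
-- mystat.sql: snapshot a session statistic; remembers the name in &S
-- and the current value in the substitution variable &V
set verify off
define S = &1
column value new_val V
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = '&S';

-- mystat2.sql: re-query the same statistic and show the change
select a.name, b.value, to_char(b.value - &V,'999,999,999,999') diff
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = '&S';
```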

Re: Updating a GTT

August 31, 2006 - 11:30 am UTC

Reviewer: Paul McKerley from Bristow, VA USA

Thanks for the feedback, Tom. That was very helpful.

Re: Updating a GTT

September 09, 2006 - 9:43 pm UTC

Reviewer: Paul McKerley from Bristow, VA, USA

Tom:

I went back and looked at this and realized that your answer didn't really address my example. I think my example showed that, when working with GTTs, copying all your data to a new GTT in order to populate a new column is far more efficient than updating a null column in the original GTT. Your example demonstrates that updating a null column in a GTT is far more efficient than updating a null column in a permanent table. That's true, but not really the point. Basically, I'm trying to show that updating null values in GTTs is worse than inserting into a new GTT. That's counter-intuitive to many people.

Regards,

Paul McKerley


Performance Issue

January 05, 2007 - 3:10 am UTC

Reviewer: Khurram from Pakistan

SQL> DISCONNECT
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> CONN test2/test2@takaful
Connected.

SQL> SELECT COUNT(*) FROM vw_ins_outstanding
  2  /

  COUNT(*)
----------
     93612


vw_ins_outstanding is a complex view which in turn calls another view, which in turn calls yet another view, and so on -- there are 4 levels of views:

vw_A-->vw_B-->vw_C-->vw_D-->TABLES

Each view is composed of database tables with join conditions, as well as UNION ALLs combined with other predicates (OR, AND etc.).

Tom, all business logic depends on these views; they are used everywhere in the whole system, so I haven't had the courage to replace them.

One major problem: one of these views, say vw_A (vw_ins_outstanding), is the lowest-level view, based on the other views I mentioned above:

vw_A-->vw_B-->vw_C-->vw_D-->TABLES

It is used in report queries like:

SELECT  1 pol_type,IA.CLIENT_CODE,PARTTAKER_NAME PNAME,
DECODE(NVL(INSURED_NAME,'N'),'N',P.PARTTAKER_NAME,INSURED_NAME) PARTTAKERNAME ,IA.GENERATE_AGAINST,
DECODE(INSURED_ADDRESS,NULL,ltrim(P.ADDRESS),INSURED_ADDRESS) ADDRESS,
P.BUSINESS_CODE,IB.BUSINESS_NAME,IA.POLICY_TYPE_CODE,IA.CERTIFICATE_CODE,
SUBSTR(IA.ASSORTED_STRING,7,17) POL_NO,NULL END_NO,IA.POLICY_DATE,IA.PERIOD_FROM,IA.PERIOD_TO_DATE,
IA.AGENCY_CODE,IA.SUB_AGENCY_CODE,IA.TAKAFUL_TYPE,
IA.DOCUMENT_CODE,IA.ASSORTED_CODE assorted_code ,sumcovered_func(ia.policy_type_code,ia.assorted_code,ia.document_code) 

sum_covered,
SUBSTR(ASSORTED_STRING,-18,10) ASS_STRING,SUBSTR(ia.ASSORTED_CODE,1,2) ORD, SUBSTR(ASSORTED_STRING,-18,10) MASS_STRING       
From   INS_ASSORTED IA,INS_BUSINESS IB, INS_PARTTAKER P ,
(SELECT POLICY_CODE FROM VW_INS_OUTSTANDING<---------------------------------------------
  WHERE REC_TYPE in ('Due','Rcv')
    AND MODE_CODE IN ('24','31','02')
  GROUP BY POLICY_CODE
 HAVING NVL(SUM(DUE_AMOUNT),0)<>0) v1,
(SELECT ASSORTED_CODE FROM VW_INS_OUTSTANDING<-------------------------------------------
  WHERE MODE_CODE IN (DECODE(PRINCIPAL_COMPANY,'D','24'),'31','02')
    AND INSURANCE_TYPE_CODE IN 
        (DECODE(PRINCIPAL_COMPANY,'F','F',DECODE(PRINCIPAL_COMPANY,'L','L','D')),'F')
  GROUP BY ASSORTED_CODE
 HAVING SUM(NVL(DUE_AMOUNT,0)) <> 0) v2
WHERE   IA.CLIENT_CODE=P.PARTTAKER_CODE
AND    P.BUSINESS_CODE=IB.BUSINESS_CODE(+)
AND    ia.assorted_code=v1.policy_code
AND    ia.assorted_code=v2.assorted_code
AND          ia_zone_code=decode(:zone_code,'00',ia_zone_code,:zone_code)
AND       ia_branch_code= decode(:branch_code,'00',ia_branch_code,:branch_code)
&POL_COND
AND P.PARTTAKER_CODE = DECODE(:CLIENTCODE,'00',P.PARTTAKER_CODE,:CLIENTCODE)
AND IA.AGENCY_CODE = DECODE(:AGENCYCODE,'00',IA.AGENCY_CODE,:AGENCYCODE)
AND IA.POLICY_DATE BETWEEN TO_CHAR(:PERIODFROM,'DD/MON/YY') AND TO_CHAR(:PERIODTO,'DD/MON/YY')
&COND1
AND IA.POLICY_TYPE_CODE in (SELECT POLICY_TYPE_CODE from ins_policy_types &COND)  
&COND2
UNION



This view is taking so much time because at runtime the whole chain of view queries gets parsed and executed. To avoid this chained
parsing, I created a GLOBAL TEMPORARY TABLE (tmp_vw_ins_outstanding) with the same structure as vw_ins_outstanding:

CREATE GLOBAL TEMPORARY TABLE tmp_vw_ins_outstanding
(
REC_TYPE                                           VARCHAR2(7),
ASSORTED_CODE                                      VARCHAR2(12),
CLAIM_CODE                                         VARCHAR2(10),
CLAIM_INFO_CODE                                    VARCHAR2(10),
CLASS_CODE                                         VARCHAR2(2),
PRINCIPAL_COMPANY                                  VARCHAR2(1),
INSURANCE_TYPE_CODE                                VARCHAR2(1),
DUE_AMOUNT                                         NUMBER,
DUE_DATE                                           DATE,
MODE_CODE                                          VARCHAR2(3),
SEQ_NO                                             NUMBER(28),
COINSURANCE_CODE                                   VARCHAR2(8),
POLICY_CODE                                        VARCHAR2(12),
CHEQUE_NO                                          VARCHAR2(20),
CHEQUE_AMOUNT                                      NUMBER,
CHEQUE_DATE                                        DATE,
VOUCHER_NO                                         VARCHAR2(15),
VOUCHER_DATE                                       DATE,
SUP_BY                                             VARCHAR2(20),
PAID_TO_CODE                                       VARCHAR2(8),
CANCEL_DATE                                        DATE,
SUP_DATE                                           DATE,
POLICY_DATE                                        DATE,
ISSUE_DATE                                         DATE,
DOCUMENT_CODE                                      VARCHAR2(2))
ON COMMIT PRESERVE ROWS
/
Then at runtime I populate this table using a bulk-bind collection, which takes only 1 minute:

DECLARE
  TYPE test1_tab IS TABLE OF vw_ins_outstanding%ROWTYPE;
  t_tab  test1_tab := test1_tab();
BEGIN
  SELECT *
  BULK COLLECT INTO t_tab
  FROM vw_ins_outstanding;
  FORALL i IN t_tab.first .. t_tab.last
    INSERT INTO tmp_vw_ins_outstanding VALUES t_tab(i);
  COMMIT;
END;
/
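(As an aside: for 93,612 rows, staging everything through a PL/SQL collection makes a second in-memory copy of the data. Assuming the table and view names above, the same load can be sketched as a single set-based statement:)

```sql
-- Sketch: set-based load; no collection, no FORALL, no second copy in PGA.
INSERT INTO tmp_vw_ins_outstanding
SELECT * FROM vw_ins_outstanding;

COMMIT;
```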

After that I used tmp_vw_ins_outstanding instead of vw_ins_outstanding in my report query, after creating an index on the GTT --
still no luck with performance.


SQL> SELECT COUNT(*) FROM tmp_vw_ins_outstanding
  2  /

  COUNT(*)
----------
     93612

SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT POLICY_CODE FROM TMP_VW_INS_OUTSTANDING
  2   WHERE REC_TYPE in ('Due','Rcv')
  3     AND MODE_CODE IN ('24','31','02')
  4   GROUP BY POLICY_CODE
  5  HAVING NVL(SUM(DUE_AMOUNT),0)<>0
  6  /

1157 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=318 Card=100 Bytes
          =2200)

   1    0   FILTER
   2    1     SORT (GROUP BY) (Cost=318 Card=100 Bytes=2200)
   3    2       TABLE ACCESS (FULL) OF 'TMP_VW_INS_OUTSTANDING' (TABLE
           (TEMP)) (Cost=317 Card=2463 Bytes=54186)





Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1421  consistent gets
       1417  physical reads
          0  redo size
      33284  bytes sent via SQL*Net to client
      10591  bytes received via SQL*Net from client
         79  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
       1157  rows processed


I hope you can point me in the right direction to solve my problem.

Khurram

size of temporary table

February 27, 2007 - 2:38 pm UTC

Reviewer: A reader

Is there any way a DBA or anyone else (other than the person who ran the query :)) can get an idea of the number of rows in a global temporary table?

Currently we have a SQL query in our prod database which has been running for almost 4 hours now:

INSERT INTO tabX
SELECT

UPPER( 'BCZ' ),

sxe.cpp_primary_id,

..... smd.user_group_char15 AS SEC_CURRENCY,

e.base_currency AS PORT_CURRENCY1 FROM

tabA,tabB,....

WHERE cond1 and cond2 and ... and
e.entity_id in (select entity_id from fmc.entity_list_temp) )

fmc.entity_list_temp is a global temp table used in the above query (created with ON COMMIT PRESERVE ROWS).

Is there any way we can get an idea of the size of this global temp table fmc.entity_list_temp, so that we can know how much time the query should normally take?

Is there any view similar to DBA_EXTENTS for global temp tables?

thanks
Apraim
Tom Kyte

Followup  

February 27, 2007 - 2:44 pm UTC

ops$tkyte%ORA10GR2> create global temporary table gtt as select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select blocks from v$sort_usage where username = USER;

no rows selected

ops$tkyte%ORA10GR2> insert into gtt select * from all_objects;

50071 rows created.

ops$tkyte%ORA10GR2> select blocks from v$sort_usage where username = USER;

    BLOCKS
----------
       768

ops$tkyte%ORA10GR2> insert into gtt select * from all_objects;

50071 rows created.

ops$tkyte%ORA10GR2> select blocks from v$sort_usage where username = USER;

    BLOCKS
----------
      1408


size of gtt

February 27, 2007 - 2:51 pm UTC

Reviewer: A reader

Does v$sort_usage show the information only for the current user, or for all users?

If it is only for the current user, then it won't give any idea about the size of a GTT used by someone else.

Also, I assume v$sort_usage includes blocks used for sorting/merging as well, in addition to GTT usage?

thanks
Apraim
Tom Kyte

Followup  

February 27, 2007 - 3:09 pm UTC

i used 'where username=user', sort of implies..... it is for all users, you have to filter.

yes, v$sort_usage shows other stuff, but - look at the table, there are many columns in there to help you out (like sql_id in 10g for example)
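(A sketch of such a query, joining v$sort_usage to v$session so a DBA can see every session's temp segment usage -- from 9i onward the same information is also exposed as v$tempseg_usage:)

```sql
-- Temp segment usage for all sessions; run as a privileged user.
-- SEGTYPE distinguishes SORT, HASH, DATA (GTT), INDEX, LOB usage.
select s.sid, s.username, u.tablespace, u.segtype, u.blocks
  from v$sort_usage u, v$session s
 where u.session_addr = s.saddr
 order by u.blocks desc;
```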

size of gtt

February 27, 2007 - 3:16 pm UTC

Reviewer: A reader

But in 8i, when I queried v$sort_usage, it was showing only my own sessions. That is the reason I asked whether it shows info for all users.

Maybe 10g has more info

thanks
Apraim

Can GTT be a solution for sub-queries using views?

July 27, 2007 - 12:05 am UTC

Reviewer: Reader from USA

Tom -- I have a query that uses sub-queries which are populated using function calls. This query takes 2 mins to run. If I comment out the sub-query part of the main query, it returns in 5.6 secs. I found this post while searching on Google, which suggests using a GTT as the solution.

http://dba.ipbhost.com/index.php?showtopic=4388

Most of the time what this author says and what he writes do not match, so I am asking for your help in solving this, please.

SELECT 
P.DUR_HRS_DAY AS P_event_hrs_2day, 

(SELECT sum(p1.dur_hrs_day) from c_parent_DAY p1   ----------This column comes from a view where it is populated using a function
WHERE p.event_id = p1.event_id and trunc(p1.daytime,'MM')=trunc(p.daytime,'MM') 
) AS P_event_hrs_mth, 

( SELECT sum(p1.prod_loss_day) FROM r_parent_DAY p1 ---------Similar columns as mentioned above
WHERE p.event_id = p1.event_id and trunc(p1.daytime,'MM')=trunc(p.daytime,'MM') 
) AS parent_loss_mth, 

(SELECT sum(p1.lpg_val_day) FROM r_parent_DAY p1 
WHERE p.event_id = p1.event_id and trunc(p1.daytime,'MM')=trunc(p.daytime,'MM') 
) AS parent_lpg_loss_mth, 

(SELECT sum(c1.dur_hrs_day) from c_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS C_event_hrs_mth, 

(select sum(c1.prod_loss_day) FROM r_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS child_loss_mth, 

(select sum(c1.lpg_val_day) from r_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS child_lpg_loss_mth 

FROM 
R_PARENT_DAY P, R_CHILD_DAY C 
where 
P.daytime = (SELECT MAX(P1.daytime) FROM R_PARENT_DAY P1 
                               WHERE 
                                P.event_id= P1.event_id and p1.DUR_HRS_DAY <>0 
                                and TRUNC(P1.daytime,'MM') = (TO_DATE( '01-jan-2007','dd-Mon-yyyy'))) 
                                AND  P.DAYTIME =C.DAYTIME(+) 
                                AND P.EVENT_ID = C.PARENT_EVENT_ID(+) 
AND (select sum(p1.prod_loss_day) from r_parent_DAY p1  where p.event_id = p1.event_id 
     and trunc(p1.daytime,'MM')=trunc(p.daytime,'MM'))>=TO_NUMBER('20') 
ORDER BY parent_loss_mth  DESC 


Tom Kyte

Followup  

July 27, 2007 - 2:03 pm UTC

I'd be looking to remove the scalar subqueries altogether.

why did you use them - why not JUST JOIN???


July 27, 2007 - 5:12 pm UTC

Reviewer: A reader


Could you please help me understand with an example..How do I join and re-write this query?


Tom Kyte

Followup  

July 27, 2007 - 8:29 pm UTC



for example, you have:
(SELECT sum(c1.dur_hrs_day) from c_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS C_event_hrs_mth, 

(select sum(c1.prod_loss_day) FROM r_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS child_loss_mth, 

(select sum(c1.lpg_val_day) from r_child_DAY c1 
WHERE c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM') 
) AS child_lpg_loss_mth 


just join to:
(select sum(dur_hrs_day) x, sum(prod_loss_day) y, sum(lgp_val_day) z, 
        event_id, trunc(daytime,'mm') daytime
   from c_child_day
  group by event_id, trunc(daytime,'mm')
) 

by event_id and trunc(daytime,'mm')
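(Putting that together, a sketch of the overall rewrite -- the remaining predicates and the outer-join handling from the original query still need to be carried over, so this shows the shape only:)

```sql
-- Each scalar subquery becomes a column of a pre-aggregated inline view,
-- joined once by event_id and month instead of being re-run per row.
select p.dur_hrs_day as p_event_hrs_2day,
       cd.c_event_hrs_mth,
       rcd.child_loss_mth,
       rcd.child_lpg_loss_mth
  from r_parent_day p,
       r_child_day  c,
       (select event_id, trunc(daytime,'mm') mth,
               sum(dur_hrs_day) c_event_hrs_mth
          from c_child_day
         group by event_id, trunc(daytime,'mm')) cd,
       (select event_id, trunc(daytime,'mm') mth,
               sum(prod_loss_day) child_loss_mth,
               sum(lpg_val_day)   child_lpg_loss_mth
          from r_child_day
         group by event_id, trunc(daytime,'mm')) rcd
 where p.daytime  = c.daytime(+)
   and p.event_id = c.parent_event_id(+)
   and c.event_id = cd.event_id
   and trunc(c.daytime,'mm') = cd.mth
   and c.event_id = rcd.event_id
   and trunc(c.daytime,'mm') = rcd.mth;
```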


July 28, 2007 - 9:10 am UTC

Reviewer: A reader

Thank you Tom.
Hope this speeds up the execution
Will try and update you.

Regards

July 30, 2007 - 10:44 am UTC

Reviewer: A reader

Tom -- the query returns 500 rows, but I should get 1 row
with 5 columns. Why is "group by" not grouping?

(select sum(dur_hrs_day) C_event_hrs_mth, sum(oil_prod_loss_day_bbls) child_oil_loss_mth, sum(lpg_val_day_bbls) child_lpg_loss_mth,
event_id, trunc(daytime,'mm') daytime
from rv_ct_PROD_DEFER_child_DAY
group by event_id, trunc(daytime,'mm')
)

Thanks


Tom Kyte

Followup  

July 30, 2007 - 5:41 pm UTC

if you want one row, there would be NO group by.

you are getting the sums by event_id and trunc(daytime,'mm') - you must have 500 combinations of that.

you are getting precisely what you asked for - sums by EVENT_ID/trunc(DAYTIME,'mm')


and you want to, umm, JOIN to that set by event_id and trunc(daytime,'mm')

look you have a query like:
select ...., (select sum(x) from t where t.y = a.y and t.z = a.z)
  from A, ....


I'm suggesting:
select ...., sum_x
  from A, ...., (select sum(x), y, z from t group by y, z ) T
where a.y = t.y and a.z = t.z


furthermore, if you can "push" any predicates into the inline view, DO SO, eg, if you have:


select ..., (select sum(x) ..... )
from a, ...
where a.y between (1 and 50)


put the predicate on Y into the inline view as well


select ...., sum_x
  from A, ...., (select sum(x), y, z from t where y between 1 and 50 
                 group by y, z ) T
where a.y = t.y and a.z = t.z


August 01, 2007 - 2:53 pm UTC

Reviewer: A reader

Tom -- I apologize for taking this on again, but this is the crucial part of our reporting queries and it is causing a lot of slowdown. Having taken responsibility for fixing it, I am seeking your help.
Following your advice, I changed the query to look like this:

SELECT 
P.op_sub_pu_code AS BLOCK,
C.proact_react AS proactive_reactive,

iv.dur_hrs_day
iv.prod_loss_day,
iv.lpg_val_day

FROM 

R_C_PARENT_DAY P,
R_C_CHILD_DAY C,

(select sum(c1.dur_hrs_day), sum(c1.prod_loss_day), sum(c1.lpg_val_day)  
 from r_c_child_DAY rc, r_c_child_DAY rc1
where c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM')
) iv

WHERE
P.daytime = (select max(P1.daytime) from  r_c_parent_day P1
                               WHERE 
              P.event_id= P1.event_id and P1.DUR_HRS_DAY <>0 
              and TRUNC(P1.daytime,'MM') = (TO_DATE( '01-jul-2007','dd-Mon-yyyy')))

AND  P.DAYTIME =C.DAYTIME(+) 
AND  P.EVENT_ID = C.PARENT_EVENT_ID(+) 
AND  (select sum(rc1.prod_loss_day) 
     from r_c_child_DAY rc1 
     where c.event_id = rp1.event_id and trunc(rp1.daytime,'MM')=trunc(c.daytime,'MM'))>=TO_NUMBER('100') 

ORDER BY prod_loss_day DESC


But I get an error ORA-00904 INVALID IDENTIFIER at the place where I referenced the in-line view columns(iv. <column_name>).

Also could you please show me how to push the last predicate above the ORDER BY clause into the in-line view.
All it does is to check if the value is greater than 100.
I tried doing it but got variety of errors.

Appreciate your time and thoughts..

Tom Kyte

Followup  

August 05, 2007 - 10:08 am UTC

well, I don't have your tables, so I get even worse errors.

eg: one needs an example to get syntax right.

August 01, 2007 - 4:39 pm UTC

Reviewer: A reader

oops -- I resolved half the issue myself. Now the problem is the predicates and the total cost. After using the in-line views, the cost has gone to negative numbers. Is the CBO going crazy on the query?

Can you help me get the cost and the execution time down pls?

SELECT 
p.op_sub_pu_code AS BLOCK,
C.proact_react AS proactive_reactive,

IV_p.a,
IV_p.b,
IV_p.c,
IV_c.x,
IV_c.y,
IV_c.z

FROM 
R_C_PARENT_DAY P,
R_C_CHILD_DAY C,

---parent in-line view---------
(select sum(p1.dur_hrs_day) a, sum(p1.oil_prod_loss_day_bbls) b,sum(p1.lpg_val_day_bbls) c 
from r_c_parent_DAY p1, r_c_parent_DAY p
where p.event_id = p1.event_id and and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM')
) IV_p,

---child in-line view--------
(select sum(c1.dur_hrs_day) x, sum(c1.oil_prod_loss_day_bbls)y, 
sum(c1.lpg_val_day_bbls) z 
from r_c_child_DAY c1, r_c_child_DAY c
where c.event_id = c1.event_id and trunc(c1.daytime,'MM')=trunc(c.daytime,'MM')
) IV_c

--main where clause-------
WHERE
P.daytime = (select max(P1.daytime) from  r_c_parent_day P1
                               WHERE 
              P.event_id= P1.event_id and P1.DUR_HRS_DAY <>0 
              and TRUNC(P1.daytime,'MM') = (TO_DATE( '01-jul-2007','dd-Mon-yyyy')))
AND P.DAYTIME =C.DAYTIME(+) 
AND P.EVENT_ID = C.PARENT_EVENT_ID(+) 
AND EC_OBJECTS_DEFERMENT_EVENT.event_type(p.event_id)IN ('SHUTDOWN')
AND  (select sum(rc1.prod_loss_day) 
     from r_c_child_DAY rc1 
     where c.event_id = rp1.event_id and trunc(rp1.daytime,'MM')=trunc(c.daytime,'MM'))>=TO_NUMBER('100') 
 
ORDER BY b DESC


I was shocked to see the negative cost because I never expected it to be so.

GTT issue

October 05, 2007 - 7:36 am UTC

Reviewer: Deepak from India

Hi Tom,

Need your help in one issue pertaining to Global Temporary Tables(GTT).

Have created a GTT with "on commit preserve rows" option.

The purpose of creating GTT here is to insert distinct values into the GTT from a table. The source table has an index on the columns for which we need to compute the distinct values.

In my case multiple sessions keep doing the same operation simultaneously on the GTT.

My insert statement is as follows...

insert into my_gtt
select distinct col1,col2 from t1;

The above insert operation is waiting on "db file sequential read" wait events forever.

When I executed the query part of the above insert statement (select distinct col1,col2 from t1;), the result came back in less than a second.

I am unable to understand what is making the insert operation slow. When I examined the v$lock view I found that all the sessions had acquired TO & TM locks on the GTT and no one was blocking anyone else.

One thing I have observed is that when only one session is inserting into the GTT, the insert completes quite fast.

Please help me on this.

Memory usage

January 21, 2008 - 4:22 pm UTC

Reviewer: Andrew Markiewicz from Madison, WI

Tom.
Since the memory comes out of the buffer cache, is there any concern about the use of a gtt using too many resources?

For instance a sort operation would temporarily use memory and/or some temp tablespace, but it should be released after the sort. But if an app loads a gtt and leaves it for the session, and if many sessions do the same, there could be memory tied up but not released until the sessions end (if the gtts are session based and not transaction based).

Also, I ran the "select blocks from v$sort_usage" after inserting 1 row into a gtt with 1 column. The gtt uses 128 blocks, block size 8192, so 1M of memory for the 1 row.
sql->create global temporary table agtt1(fld1 integer);

Table created.

Elapsed: 00:00:00.06
sql->insert into agtt1 values(1);

1 row created.

Elapsed: 00:00:00.02
sql->select blocks from v$sort_usage;

     BLOCKS
-----------
        128



The 128 blocks is the initial extent size of the TEMP tablespace. I was thinking of setting up a separate temp tablespace with a small initial extent so the gtts would not require as much memory per use (especially if they do not release the memory for a while).

Should I even bother with this? It's not causing an issue now with our gtt usage, but we may have some potential designs that might use them more extensively. I just don't want to create the gtt, install the app and watch the available buffer cache shrink to 0 while the db is constantly swapping the buffer cache.
Unless I'm wrong on my understanding of how the gtts work.

Thanks
Andrew
Tom Kyte

Followup  

January 21, 2008 - 10:21 pm UTC

... Since the memory comes out of the buffer cache, is there any concern about the use of a gtt using too many resources? ...

the buffer cache is quite capable of taking care of itself, it won't let too much of one thing in there. It'll be cached like anything else is cached. It is flushable - releasable - discardable, it is not "special"

GTT insert - multiple sessions - high CPU

March 03, 2008 - 2:14 am UTC

Reviewer: Reene

Hi Tom

we have a GTT, and during the month end multiple users insert data into it (using an Oracle standard program). We have been observing that this causes a spike in CPU.

I have given the SQL statement below for your reference. It inserts at most 10-15 rows at a time.

My questions are:

1. Is there a known issue with this? If yes, what are the possible resolutions, or what should we test?
2. As another reviewer asked in this thread -- I have also observed that if more than 2 users try to insert into the GTT, only one is able to insert. Is this correct and expected?
3. Would setting statistics on the GTT help? Or would using a cardinality hint help?
4. What statistics should we monitor in order to get meaningful data to help us resolve the issues with this concurrent insert? I looked at the following during the run of the insert statement:

SQL> select * from
(
select STATISTIC_NAME,sum(value)  from v$segstat where obj#=5373155
group by STATISTIC_NAME order by 2 desc)
where rownum < 6 ;  2    3    4    5

STATISTIC_NAME                                                   SUM(VALUE)
---------------------------------------------------------------- ----------
logical reads                                                        137664
db block changes                                                      65216
physical reads                                                         6021
physical writes                                                        5588
ITL waits                                                                 0

Below is our insert statement:

INSERT INTO 
       CST_INV_QTY_TEMP
       ( QTY_SOURCE, ORGANIZATION_ID, INVENTORY_ITEM_ID, CATEGORY_ID, COST_TYPE_ID, RCV_TRANSACTION_ID, ROLLBACK_QTY )
SELECT 9, MS.TO_ORGANIZATION_ID, CILT.INVENTORY_ITEM_ID, CILT.CATEGORY_ID, CILT.COST_TYPE_ID, MS.RCV_TRANSACTION_ID, SUM(MS.TO_ORG_PRIMARY_QUANTITY)
FROM CST_ITEM_LIST_TEMP CILT,
       CST_CG_LIST_TEMP CCLT,
       MTL_SUPPLY MS,
       RCV_TRANSACTIONS RT,
       MTL_PARAMETERS MP,
       PO_LINES_ALL POL,
       PJM_PROJECT_PARAMETERS PPP
WHERE CILT.INVENTORY_ITEM_ID = MS.ITEM_ID AND
       MP.ORGANIZATION_ID = MS.TO_ORGANIZATION_ID AND
       MS.TO_ORGANIZATION_ID = :B1 AND
       NVL( MS.COST_GROUP_ID, NVL(PPP.COSTING_GROUP_ID,MP.DEFAULT_COST_GROUP_ID) ) = CCLT.COST_GROUP_ID AND
       MS.SUPPLY_TYPE_CODE = 'RECEIVING' AND
       RT.TRANSACTION_ID = MS.RCV_TRANSACTION_ID AND
       RT.SOURCE_DOCUMENT_CODE NOT IN ('INVENTORY','REQ') AND
       POL.PO_LINE_ID = RT.PO_LINE_ID AND
       PPP.PROJECT_ID (+) = POL.PROJECT_ID
GROUP BY MS.TO_ORGANIZATION_ID, CILT.INVENTORY_ITEM_ID, CILT.CATEGORY_ID, CILT.COST_TYPE_ID, MS.RCV_TRANSACTION_ID

Thanks

Tom Kyte

Followup  

March 03, 2008 - 7:06 am UTC

... mulitple users insert the data
(using an oracle standard program) and we have been observing that this is
causing a spike in CPU .
...

we have multiple users doing work

THAT is causing the cpu to be used.


1) multiple users doing work - please - I would hope to see a "spike" in cpu, wouldn't you? I would be disappointed if not.

2) that is absolutely FALSE. global temporary tables are designed to be highly concurrent, in fact, each user gets their OWN SET OF EXTENTS, it is contention free.

Something else is afoot here if you see what you say you are seeing - suggestion: whittle away at this until you find the bit that makes it serial (eg: there is a bit of application logic you coded somewhere doing that - if it is happening)


3) if you are just inserting into it and doing a simple select - no, not really. Statistics are used to optimize data retrieval, you are not saying anything about retrieving data from the global temporary table being problematic.


4) tell you what, open up two sessions in sqlplus.

enable tracing including waits.

alter session set events '10046 trace name context forever, level 12';


get the insert into statement ready to go in both sessions.

as simultaneously as possible, run them, exit sqlplus and post the relevant bit of the TKPROF (for the insert statement).


What you'll discover is both run at the same time.
and we'll see how long they take, the work they perform, and what, if anything, they wait on.

Usage of temp tables with mutating tables in a trigger

March 14, 2008 - 9:20 am UTC

Reviewer: j.sieben from Germany

Hi Tom,
hopefully not too far off the topic, I'd like to ask whether you feel that using temporary tables in the context of mutating table problems might be efficient.
I recall that you advise using a PL/SQL table to collect information on rows to be changed within a "for each row" trigger, to compute the changes on the underlying table afterwards.
Is there any reason against using a temporary table to collect this information and make it available for the after update trigger?
I wonder whether the database might benefit from working with SQL directly against this temporary table, as opposed to looping over the PL/SQL table?
Tom Kyte

Followup  

March 15, 2008 - 9:42 am UTC

...
I wonder whether the database might benefit from working with SQL directly
against this temporary table as opposite to looping over the PL/SQL table?
.....


totally depends on your needs, totally. If you need to perform SQL against the set - sure, a global temporary table might make sense (don't forget to clean it up just like the plsql table)

GTT-how fetch order works ?

April 23, 2009 - 4:18 am UTC

Reviewer: Atul Gupta from Finland

Hello Tom,
I have Forms applications in which I have to pass data from one form to another; I use a global temporary table for it.
Say the table is named COMMENTS, with columns comm_short and comm_long, and from the called (child) form I insert data like:
comm_short comm_long
1 11
2 22
3 33
4 44

and fetch this data in the calling (main) form using (for i in (select * from COMMENTS)) inside the KEY-COMMIT trigger, then commit. The data goes into the database in the same order I entered it in the called (child) form, and this is what I want...

but is it 100% certain that in the above scenario the fetch order will be the same as the insert order??

Your confirmation will be the deciding factor in whether we should add a sequence number for row identification.

Many Thanks.

br
atul
Tom Kyte

Followup  

April 27, 2009 - 10:41 am UTC

... but is it 100% sure that in above scenario the fetching order will be the same order as that of inserting ?? ...


of course not, rows are never ever assured to come out in the order of insertion, the ONLY WAY - repeat ONLY WAY you can get rows in some order is to use ORDER BY - period

http://asktom.oracle.com/Misc/order-in-court.html

You need another column to order by, add a third column "seq" and put the numbers 1, 2, 3, 4 in there (unless comm_short is already "sortable")
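(A sketch of that, using a hypothetical COMMENTS_GTT -- the seq column carries the entry order, and the ORDER BY is what actually guarantees it on the way out:)

```sql
create global temporary table comments_gtt
( seq        number,
  comm_short varchar2(30),
  comm_long  varchar2(200)
) on commit delete rows;

insert into comments_gtt values (1, '1', '11');
insert into comments_gtt values (2, '2', '22');
insert into comments_gtt values (3, '3', '33');

-- the ORDER BY, not the insert order, is the only guarantee
select comm_short, comm_long
  from comments_gtt
 order by seq;
```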

Use temp tables with J2EE / session pooling

July 01, 2009 - 12:11 pm UTC

Reviewer: Gregor from Germany

Hi Tom,

I read various articles on temporary tables on AskTom and also the Oracle Guide on creating temp tables ( http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tables003.htm#i1006400 )

My Use Case is the same is in the documentation:
"Temporary tables are useful in applications where a result set is to be buffered (temporarily persisted)..."

My problem:
a) the rows in a temp table are visible only for the current session.
b) We use a J2EE application in a clustered environment and hence have no control over the database session handling.

My question:
1) How do I use temporary tables (for the hit-buffer use case) in conjunction with J2EE?
2) We just need temporary tables whose content is visible to all database sessions. Is that not possible?

Tom Kyte

Followup  

July 06, 2009 - 7:07 pm UTC

forget j2ee - it is not relevant at all. It is just a fla (four letter acronym)

Your concern is using global temporary tables with a connection pool

IF the developers code like this:

a) grab connection
b) populate temp table
c) use temp table
d) ungrab connection

and never care about that temp table again - you are fine. If they use the temp table during the generation of a single web page - they are fine (as long as they grab the connection ONCE)

if they need the global temporary table to persist from page to page - well, they are out of luck with a connection pool. It will not work, they will have to use a persistent table and manage the data (clean it up when sessions expire, etc) themselves.


.... ) We just need temporary tables whose content is visible to all database
session. Is that not possible?
....


think about what was just asked for there.... hmmmm. That would be a permanent table and since ONLY YOU know when the data is no longer necessary - only YOU can manage that table (remove data when it isn't needed anymore)
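(If the grab/populate/use/ungrab pattern is followed, a transaction-scoped temporary table even cleans up after itself -- a sketch, with a hypothetical PAGE_SCRATCH table:)

```sql
-- ON COMMIT DELETE ROWS: the rows vanish when the transaction ends,
-- so a pooled connection goes back to the pool "clean" automatically.
create global temporary table page_scratch
( id   number,
  data varchar2(4000)
) on commit delete rows;

-- all within one borrowed connection:
insert into page_scratch select object_id, object_name from user_objects;
-- ... use it to build the page ...
commit;   -- the table is now empty for the next borrower
```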

"Non-durable" tables

July 07, 2009 - 9:57 am UTC

Reviewer: Gregor from Germany

Hi Tom,

thanks for your comments! I have one remark and one clarification of my question, because my primary concern is avoiding redo generation for the hit-buffer use case.

1. I distinguish between the visibility of the elements in a temp table and the lifetime of the elements; when the database session is killed, the elements in that session's temp table are gone -- fine! But it's a pity that the visibility is also restricted. OK, that's life...

2. In our application, of course we know how long we need the data in the "temp" table (which in fact is a hit buffer), and we also delete the data when the application's session is killed or when our server starts up. Currently we do in fact use a permanent table for that.

My concern is about redo generation, which is not needed in this case. When you think about thousands of users performing queries in our application, with thousands of hits that need to be buffered, we end up with a huge amount of unneeded redo.

Technically, I need a table which does no logging (except undo and its redo), but which is not "durable" in terms of ACID durability. But the NOLOGGING option has no effect for insert/update/delete statements, and even a direct path insert via /*+ append */ is not possible here due to the exclusive table lock it takes (as far as I know).

Would need a "force nologging" for the table!

Do I miss anything?

It seems I have no chance of realizing a hit buffer in conjunction with connection pooling in 2009 (though there are nice options like FCF in Oracle RAC optimized for connection pools).

Tom Kyte

Followup  

July 07, 2009 - 6:39 pm UTC

... because my primary concern is avoiding redo logs for the hit buffer use case. ...

you cannot. Even for global temporary tables, you cannot - they generate undo, which in turns generates redo. It happens.

there is quite simply no way to ever avoid it.



index updates will always generate gobs of undo and redo (temporary and permanent).

conventional path inserts will always as well.

a direct path insert can bypass undo, and if you wish redo generation for the TABLE data - but never for the index data (temporary or not)




People constantly and continuously worry about hypothetical situations. Did you know that your session state in an APEX application is stored in a persistent table - millions of deletes and inserts are performed on this single table every day on my small site. Billions of times per day on something like metalink. And it is a 'real table'.



Global Temporary Tables (GTT)

July 23, 2009 - 10:42 am UTC

Reviewer: reader from UK

Hi Tom,

I have a general question for you regarding GTTs. We have a feed that loads a number of temp tables, and a query that selects from them. We have noticed that with exactly the same feed and exactly the same data, different execution plans are generated - either nested loops or MERGE JOIN CARTESIAN. When we get the latter, the job hangs. Our dynamic sampling is set to the default of two - would it be an idea to set dynamic sampling higher? Does the execution plan generated depend on the load on the box? i.e. if there is a high load, does the optimizer look at fewer plans before deciding which is best? Thanks
Tom Kyte

Followup  

July 26, 2009 - 6:58 am UTC

do you always hard parse - or are you using the same query over and over.

If you are doing a soft parse, it'll use the last plan, if the last guy to load put 1 row into a global temporary table and you put 100,000 - that could be a problem.

see
http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html

ctl-f for

Query Plans with Temporary Tables

Temporary Tables

July 26, 2009 - 12:20 pm UTC

Reviewer: reader from UK

Hi Tom,

Thanks for your reply. I don't think it is being hard parsed as the parameters to the sql are PL/SQL variables (so they are bind variables).

Here is the query. It is inside a procedure in a package body.

INSERT INTO TMP_RISK_WEIGHT_MAP(
class_id
,ctry_inc_cd
,derivative_ind
,maturity_band)
SELECT /*+ dynamic_sampling(5) */ DISTINCT
CCDB.class_id
,CCDB.ctry_inc_cd
,'N'
,fn_fac_maturity_band(FAC.last_review_dt,
FAC.renewal_dt,
FAC.maturity_dt,
FAC.approval_dt)
FROM SRC_FAC_DET FAC
,TMP_CCDB_MAP CCDB
WHERE FAC.cob_dt = pi_cob_dt
AND FAC.run_id = pi_run_id
AND EXISTS (SELECT row_id
FROM TMP_MAP_ROWIDS
WHERE row_id = fac.ROWID
AND cob_dt = fac.cob_dt
AND run_id = fac.run_id
AND gcds_type_cd = 'F')
AND NVL(FAC.cpty_type_cd, g_default_string_key) =CCDB.id_type_cd
AND NVL(FAC.cpty_cd, g_default_string_key)
=CCDB.id_value_cd
AND NVL(FAC.cpty_ccdb_id, g_default_number_key)
=CCDB.pushed_ccdb_id
AND CCDB.ccdb_id IS NOT NULL;

So, pi_run_id and pi_cob_dt are parameters from the procedure. I have added the dynamic sampling hint (at level 5). The gtt's are: TMP_CCDB_MAP and TMP_MAP_ROWIDS.

If it is soft parsed, and the last guy loaded 1 record into one of the GTTs and I loaded 100,000, how can I get the last plan? I thought the execution plan is "independent" of someone else's session?
Tom Kyte

Followup  

July 26, 2009 - 9:35 pm UTC

... I thought the execution plan
is "independent" of someone elses session? ...

no, that is what shared sql is all about - one hard parse generated the plan - everyone else uses the SAME plan over and over and over again.

Presuming you do not run this query frequently - you might consider using dynamic sql so that you can use a unique sql statement (put a comment in it for example that includes a date/timestamp - that'll make the query unique). Then the dynamic sampling will happen on every execution - instead of just once.
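As a sketch of that suggestion (the insert is abbreviated to one column purely for illustration): a timestamp comment makes every generated statement textually unique, so each execution is hard parsed and the dynamic_sampling hint re-samples the GTT as it is *now*:

```sql
DECLARE
   l_sql VARCHAR2(4000);
BEGIN
   -- the comment changes the SQL text on every run => hard parse each time;
   -- only appropriate for statements executed infrequently
   l_sql := 'insert /* ' || TO_CHAR( SYSTIMESTAMP, 'YYYYMMDDHH24MISSFF' ) ||
            ' */ into tmp_risk_weight_map( class_id )' ||
            ' select /*+ dynamic_sampling(ccdb 5) */ distinct class_id' ||
            '   from tmp_ccdb_map ccdb';
   EXECUTE IMMEDIATE l_sql;
END;
/
```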

Temporary Tables

July 27, 2009 - 5:26 am UTC

Reviewer: reader from UK

Hi Tom,

Thanks for your explanation! It makes more sense now - we use temp tables all over the place. Do you have an example in one of your books where you show this happening?
Tom Kyte

Followup  

July 27, 2009 - 6:04 am UTC

show what is happening?

that sql is shared? that one hard parse will generate a plan and the subsequent soft parses will reuse the sql? That I demonstrate on almost every page on this site

not sure what you mean for this one?

Temporary Tables

July 27, 2009 - 7:23 am UTC

Reviewer: reader from UK

Hi Tom,

I don't understand how putting a comment in the SQL makes the code unique in the shared pool. Should the comment be put in like this - 01/01/9999 ?

INSERT /* 01/01/9999 */ INTO ....table_name(
col1
,col2
,col3
,col4)
SELECT /*+ dynamic_sampling(5) */ DISTINCT

Surely, user1 running this and user2 running this will both generate the same SQL in the shared pool? Thanks
Tom Kyte

Followup  

July 27, 2009 - 7:35 pm UTC

with date and timestamp 01/01/2009 12:01:05 - so as to make the text UNIQUE. My assumption/presumption and PREREQUISITE for doing this (as written above) is that the sql is infrequently executed, so the date/timestamps could never be the same.

sql is found and reused in the shared pool based on a hash of the original sql statement - change just ONE BYTE of the text and they are different sql's.


select * from emp;
Select * from emp;

those are two different sql's to the shared pool.

Temporary Tables

July 27, 2009 - 7:25 am UTC

Reviewer: reader from UK

Sorry Tom - I have just read your reply more carefully. You mention dynamic SQL - so I could generate different comments. Thanks

Global Temp Tables

July 27, 2009 - 5:22 pm UTC

Reviewer: Reader from UK

What options are there if this piece of SQL is called frequently? Would dynamic sampling and a hint to use nested loops be enough, or are there other options?
Tom Kyte

Followup  

July 27, 2009 - 8:43 pm UTC

you would probably set it up so that you have "small, med and large" versions.

That is

- you know how many rows you put into the global temporary table
- you have the power (the knowledge) to categorize those row counts into "small, med and large" values
- in your code, you could:

open cursor for 'select /*+ ' || small_med_or_large_flag || ' */ ..... from ...';


You would open one of three cursors - and they would have the small plan or the medium plan or the large plan.

see
http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html
and look for
Query Plans with Temporary Tables
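A minimal sketch of that approach (the thresholds and table name are made up): count what you loaded, pick a category, and open the matching cursor - each category's text is stable, so each gets its own shared-pool entry and its own plan:

```sql
DECLARE
   l_rows PLS_INTEGER;
   l_cat  VARCHAR2(6);
   l_cur  SYS_REFCURSOR;
BEGIN
   SELECT COUNT(*) INTO l_rows FROM tmp_ccdb_map;

   l_cat := CASE
               WHEN l_rows <   100 THEN 'small'
               WHEN l_rows < 10000 THEN 'medium'
               ELSE                     'large'
            END;

   -- three distinct SQL texts => three plans, each hard parsed once
   -- for its row-count category and reused thereafter
   OPEN l_cur FOR 'select /* ' || l_cat || ' */ class_id from tmp_ccdb_map';
END;
/
```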

GTT's

July 28, 2009 - 9:59 am UTC

Reviewer: reader from UK

Hi Tom,

Thanks for the idea - another idea I had was to delete and lock the table stats for the TEMP tables. Then, when the insert statement runs, it will automatically sample the data. Would this be a viable alternative?

Could we use dbms_stats.set_table_stats for each of the temp tables (after loading records). Would the stats be available for a particular concurrent session?
Tom Kyte

Followup  

July 28, 2009 - 8:49 pm UTC

why would it automatically sample the data?

It would only do that on a hard parse. So, you would be right back to square one - one size fits all.

Unless you combine this with dynamic sql so that you have a hard parse with different sizes, this doesn't automatically do anything.


If you use set table stats, you would need to serialize operations, else my set table stats will overwrite your set table stats.
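For completeness, the set_table_stats call would look something like this (the values are hypothetical; and per the caveat above, the stats on a GTT are shared, so concurrent sessions calling this would overwrite one another unless serialized):

```sql
BEGIN
   -- tell the optimizer "assume this GTT holds about 100,000 rows"
   dbms_stats.set_table_stats(
      ownname => user,
      tabname => 'TMP_CCDB_MAP',
      numrows => 100000,
      numblks => 1000 );
END;
/
```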

gtt's

July 29, 2009 - 6:15 am UTC

Reviewer: reader from UK

Hi Tom - ok that makes sense now! Thanks!

gtt's

August 01, 2009 - 10:28 am UTC

Reviewer: reader from uk

If you have a piece of SQL that's shared, it's hard parsed the first time and soft parsed thereafter. Is it possible that a plan can be aged out of the shared pool? With regard to this problem - our users have said that it will hang (doing a merge cartesian join on the 1st attempt). Kill the process, restart the job, it hangs again (same plan); kill, restart, and then it works on the third attempt. No changes to the SQL, no change to the data. There are other similar processes running the same code on the box that, depending on the data, could be successful. Thanks for clarifying ...
Tom Kyte

Followup  

August 04, 2009 - 12:55 pm UTC

... It's hard parsed the first time and
soft parsed thereafter. ...

Maybe, it's hard parsed - stop. When I program, it is parsed once and then executed over and over and over and over and over and over again and again (eg: my parses are tiny compared to my executes)



GTT and Local App Ctx

August 20, 2009 - 2:27 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom

REM I welcome you to India ( SANGAM'09 ;)

» In a disconnected, stateless architecture, we cannot use GTTs. Am I right ... ?

» In a two-tier environment, how do a GTT and a local application context differ ... ?
Tom Kyte

Followup  

August 24, 2009 - 5:08 pm UTC

» In a disconnected, stateless architecture, we cannot use GTTs. Am I right

no, you are not right.

You can use them in a single connection pool grab - not across connection pool grabs.

eg: a single connection pool grab that:

a) inserts into gtt select .....;
b) select * from t1 where x in (select * from gtt);
c) select * from t2 where y in (select * from gtt);
d) commit; (since the only sensible gtt in a connection pool is on commit delete rows - this empties the gtt)

is perfectly fine.
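Spelled out, that single grab might look like this (table and column names are hypothetical):

```sql
-- one connection-pool grab, start to finish;
-- the GTT here is defined ON COMMIT DELETE ROWS
INSERT INTO gtt
   SELECT cust_id FROM orders WHERE order_dt = TRUNC(SYSDATE);

SELECT * FROM t1 WHERE x IN ( SELECT cust_id FROM gtt );
SELECT * FROM t2 WHERE y IN ( SELECT cust_id FROM gtt );

COMMIT;  -- empties the GTT before the connection returns to the pool
```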


I do not even know how to compare global temporary tables with application contexts?

GTT

February 08, 2010 - 6:16 pm UTC

Reviewer: sam

Tom:

Do you agree with this guy that GTTs can be used to improve performance for complex summary tables?

http://www.dba-oracle.com/t_sql_rewrite_temporary_tables.htm


Tom Kyte

Followup  

February 15, 2010 - 2:05 pm UTC

no, not really, set up a test case - and you'll see.


I did, I created 50,000 stores, 500,000 sales records. I rewrote their overly complex query:

select *
  from store
 where store_key in (select store_key
                       from sales
                      group by store_key
                     having sum(sales.quantity) > (select avg(sum(quantity)) from sales group by store_key) );


and tested it (the rewrite) versus theirs, versus the global temporary table situation.

Guess which took the most CPU to run? And did the most logical IO's? And generated the most redo?



I most wholeheartedly disagree with the sarcasm regarding the "university approach". If one knows SQL, one knows SQL. If one does not - a two table join looks complex. To me, the global temporary table approach is truly hard to understand: you have to look at a total of 4 different SQL statements - understand that they are all related - figure them out one by one, put them together as a whole in your head (to make sense of them) and then figure out what they do.

Versus a single sql statement, that has everything you need right there. What IS MISSING from the example is that in real life, the SQL statement would be preceded with:

/* 
 *   Code added by: username
 *   Created on: date
 *   modification history: ......
 *
 *   logic:
 *     We will join stores to sales and group by the store_name (which is 
 *     unique - just like store key).  This will let us compute sum(quantity) - 
 *     the total sum 'quantity' (what they refer to as sales) by store.  
 *     We then compare this sum() in the having clause to the average sales
 *     by store, which we compute by getting total sales divided by the number
 *     of stores.
 */ 
select ......



Now, all of a sudden, it ain't so hard to understand, is it?


If you do not comment this suggested code:

drop table store_sales;
drop table store_cnt;
drop table store_qty;

create global temporary table store_qty
on commit preserve rows
as select sum(quantity) all_sales from sales;

create global temporary table store_cnt
on commit preserve rows
as select count(*) nbr_stores from store;

create global temporary table store_sales
on commit preserve rows
as select store_name, sum(quantity) store_sales
from store natural join sales group by store_name;

select store_name, store_sales, all_sales / nbr_stores avg_sales
from store_qty, store_cnt, store_sales
where store_sales > (all_sales / nbr_stores);


it too would be so much gibberish, until you reversed engineered it.

Global temporary tables deleted on autocommit

March 11, 2010 - 1:29 am UTC

Reviewer: Kedar from India

My Oracle stored procedure contains a SELECT statement ported from SQL Server. When I execute it in the analyzer it returns records, but from an ASP script it returns a blank recordset. Is the GTT being deleted in this instance? If yes, is there any solution or setting for that?
Tom Kyte

Followup  

March 11, 2010 - 8:30 am UTC

autocommit - you are using AUTOCOMMIT???? stop doing that, that is the most childlike setting possible - I still cannot believe it is the microsoft default.

Disable autocommit, end your transactions when transactions should be ended, not until then.


some more help

September 14, 2010 - 1:31 pm UTC

Reviewer: V

Dear Tom,

Good day to you. Can you please throw some light on why GTTs have pctfree 10 and pctused 40, and why these can't be changed?

Also, could you please explain how blocks are used by a GTT, and the effect of pctfree 10 and pctused 40 on space usage.

Regards,
V.
Tom Kyte

Followup  

September 15, 2010 - 7:34 am UTC

space is managed differently for global temporary tables (gtt), there are no freelists (there is a single user in effect, each session gets its *own* gtt) - and pctused should not even remotely be considered, as you would/should be using automatic segment space management - whereby pctused is meaningless no matter what it is set to.

Thanks for taking out your time on this.

September 17, 2010 - 2:58 am UTC

Reviewer: A reader

Dear Tom,

thanks a lot for taking the time on this question.

1) I am sorry, but I could not follow all of "space is managed differently for global temporary tables (gtt), there are no freelists (there is a single user in effect, each session gets its *own* gtt)". Is something built into the kernel for this? If you can, please explain it in more detail.

2) Also, since we are using automatic segment space management and pctused is therefore nothing we need to think about, I was wondering if we could set pctfree for the gtt. The data we insert into the gtt is not updated to an extent that would need much free space - data is inserted using sql*loader and some flags are updated, after which the data is used to update permanent tables. So we could reduce the blocks consumed by setting pctfree to a lower value.



Thanks a lot for your help on this.

Regards,
V.
Tom Kyte

Followup  

September 17, 2010 - 7:12 am UTC

1) it is, it is internal, it is just temporary space management - the same thing we do for temp sort areas and the like. Just as each session gets its own temp segments for sorting, they get the same for temporary tables.

2) you cannot set pctfree for a gtt, it won't let you - so you need not consider it.

how can data loaded into a gtt by sqlldr (which creates a session, loads, and ends the session) be useful in any sense??? I don't think sqlldr is loading a gtt here. There would NEVER be any data in the gtt, since once sqlldr terminated its session - the data would disappear.


thanks for your help

September 17, 2010 - 8:18 am UTC

Reviewer: V

Dear Tom,

Thanks for your help and your time answering my question. I am sorry, I mixed things up - the data in the gtt is not populated by sql*loader, my mistake.

Regards,
V

HWM issue and global temp tables

September 21, 2010 - 6:55 pm UTC

Reviewer: Rohan from Dallas,TX

Tom,

We use lots of global temp tables but often encounter HWM issues. What would be the best way to reset the HWM? First of all, why would a GTT have HWM issues, as it is temporary? We use 10g Release 2.


Tom Kyte

Followup  

September 23, 2010 - 9:45 am UTC

what "HWM" issue - the HWM is a 'fact' not an issue.

who said a global temporary table has a high water mark? You are the first one to use that term on this page. What "HWM issue" do you have with a global temporary table - those tables are allocated session by session (your gtt always starts at "zero" for every session you create - every session has their OWN segment).


GTT Issue

January 27, 2011 - 5:20 am UTC

Reviewer: Muralidhar from India

Hello Tom, one user above wrote about a "GTT issue", and I am facing the same thing.
I am using execute immediate on dynamic SQL that inserts into a GTT.
The GTT does not have any index.
I created the GTT with on commit preserve rows.
It was working for 3 months, and suddenly stopped inserting into the GTT.
Tom Kyte

Followup  

February 01, 2011 - 3:11 pm UTC

no it didn't, something else is happening.

give us an example.

We don't "suddenly stop inserting" - it doesn't work that way, you are doing something to cause zero rows to be inserted or you are truncating the table or something.

Size of GTT in terms of ROWS or BYTES.

November 23, 2011 - 12:14 pm UTC

Reviewer: Rajorshi Kumar Sen from India, Hyderabad

Hi Tom
Part of my question has already been addressed - we can estimate the size of a global temporary table from v$sort_usage - but what if:
1. there is more than one GTT under a schema? It seems v$sort_usage does not have a column by which we can filter out a particular GTT.
2. a GTT is being used by multiple users - how do we find the total size of the GTT?
3. how do we know the total number of rows residing in the GTT, irrespective of which user is storing records in it?

Thanks in advance for your answers.
regards
---Raj
Tom Kyte

Followup  

November 23, 2011 - 12:32 pm UTC

1) it is not tracked at that level.

2) you'd have to sum across v$sort_usage - but please do remember, this is very very very transient information, it is pretty much in the "doesn't really tell you anything" category.

3) unless they (the sessions) tell you - you cannot, that information just isn't recorded anywhere. Of what possible use would this be? It would be so transient (a simple commit and the answer could change by millions). It wouldn't provide any useful information.


I'm very curious what you would do with this data?
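For what it is worth, point 2 - summing across v$sort_usage - could look like this; it gives blocks of temporary DATA segments per user right now, and still cannot be attributed to one particular GTT:

```sql
-- blocks of temporary DATA segments (GTT contents) per user, at this instant;
-- the answer can change completely with the next commit
SELECT username, SUM(blocks) AS blocks
  FROM v$sort_usage
 WHERE segtype = 'DATA'
 GROUP BY username;
```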

Can't be represented as straight up SQL

December 02, 2011 - 2:37 pm UTC

Reviewer: Kevin Ryan (big Fan) from Troy NY

Hi Tom,

I saw your statement ~ 'anything that uses a temp table can be done with regular SQL'.

My requirement is for a search that uses multiple keywords. The keywords are sent as a single string, comma delimited. I break them into an array.

Keyword Array - loop opened...
I select (as insert into GTT) the PK of every record whose fields contain my first search term.

I proceed through the list of search terms doing the same. Loop ended....

I select the keys (having count = # of search terms) and populate my cursor based on the list of PK values provided.

I would prefer to do this type of operation in one swoop if possible, as my use of the Global Temporary Table seems to be causing "java.sql.SQLException: ORA-01410: invalid ROWID" errors.

Thoughts?




Tom Kyte

Followup  

December 06, 2011 - 11:08 am UTC

do you mean like this

http://asktom.oracle.com/Misc/varying-in-lists.html

?

Temp Tables

February 08, 2012 - 8:48 am UTC

Reviewer: Pavan

Hi Tom,

I am facing a performance issue because of the scenario below. Please suggest whether I should be using temporary tables or not.

I have a procedure that gets values from the result set of a join of 2 views and inserts them into a table.
This procedure is suffering a performance issue because of this.
I am planning to use temporary tables. Will this solve my performance problem?

Thanks,
Pavan

Tom Kyte

Followup  

February 08, 2012 - 12:14 pm UTC

it won't do anything but make it take longer in all probability. You'll have added extra steps (populate - write - the temp tables, then read them again)


You need to find out where you are spending your time - and look to that to 'fix' this. Likely: it will be query tuning and looking at the 2 views and seeing if "we really need a 3rd view that doesn't involve the first two views"

Global Temporary Table VS Physical Table

April 10, 2012 - 7:22 am UTC

Reviewer: Pavan from India

Hi Tom,

I have a scenario wherein my procedure would return a ref cursor. I am not sure of the number of records it would return (it's based on the input parameter), so I thought of using pagination here - I would return 500 records at a time.

We are populating records from a union of multiple SQL queries. So I thought I would first get the entire result set into a physical table and then query this physical table to return 500 records at a time. I would TRUNCATE this physical table after it returns the entire result set.

So I am not really sure if using a GLOBAL TEMPORARY TABLE would be of any use in this case.

Please suggest.

Thanks,
Pavan


Tom Kyte

Followup  

April 10, 2012 - 7:51 am UTC

why not just return a ref cursor - period.

no temporary tables
no extra real tables

just return a cursor.


The client controls how many records to fetch from it. The act of opening it does *no real work* in retrieving the data, the fetching does.

July 31, 2012 - 2:14 am UTC

Reviewer: Pranay from INDIA

Dear Tom,
I have a requirement where the DBA will run my update query on the production database, which will update certain values if they are null.
He is insisting on a rollback script too. We thought of creating a GTT, storing all the records that are going to be updated in that table, and later having the rollback script update those particular records back to null.
Do you think this can be achieved without using a GTT?


Tom Kyte

Followup  

July 31, 2012 - 12:24 pm UTC

here is your rollback script:


rollback;



????? I'm confused, if they run an update and it fails - just rollback.


If they want the ability to undo this change for some period of time, tell them to implement a guaranteed undo retention setting - then you can use flashback table anytime during that period of time to restore it.

http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_backuprec.htm#HABPT5165


Otherwise - tell the dba - hey, you already have backups, you could just do a point in time restore of this tablespace in a temporary instance and retrieve the data if we need it. We don't need to write code for this.
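A sketch of the flashback alternative (hypothetical table and column names; row movement must be enabled, and sufficient undo must still be retained):

```sql
ALTER TABLE t ENABLE ROW MOVEMENT;

UPDATE t SET col = 'some value' WHERE col IS NULL;
COMMIT;

-- later, if the change must be backed out:
FLASHBACK TABLE t TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;
```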


Temp Table Redo

August 06, 2012 - 12:56 am UTC

Reviewer: Doug

Hi Tom,

I created 2 tables 1 TEMP and 1 "Permanent"

Session 1
CREATE GLOBAL TEMPORARY TABLE TEMP_TABLE_XMPL(COLUMN_X varchar2(30))
ON COMMIT DELETE ROWS;
CREATE INDEX TEMP_INDEX ON TEMP_TABLE_XMPL(COLUMN_X);


Session 2
CREATE  TABLE PERM_TABLE_XMPL(COLUMN_X varchar2(30));
CREATE INDEX PERM_INDEX ON PERM_TABLE_XMPL(COLUMN_X);



Then ran an insert with Autotrace on
SET AUTOTRACE ON
INSERT INTO TEMP_TABLE_XMPL VALUES ('A');

SQL> set autotrace on
SQL> insert into temp_table_xmpl values ('A');

1 row created.


Execution Plan
----------------------------------------------------------
---------------------------------------------------------------------------------------------
| Id  | Operation                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                 |     1 |   100 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP_TABLE_XMPL |       |       |            |          |
---------------------------------------------------------------------------------------------



Statistics
----------------------------------------------------------
          5  recursive calls
         20  db block gets
          6  consistent gets
          1  physical reads
        632  redo size
        838  bytes sent via SQL*Net to client
        793  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>




Session 2

H:\>sqlplus system

SQL*Plus: Release 11.2.0.1.0 Production on Mon Aug 6 15:35:12 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> CREATE  TABLE PERM_TABLE_XMPL(COLUMN_X varchar2(30));

Table created.

SQL>
SQL> CREATE INDEX PERM_INDEX ON PERM_TABLE_XMPL(COLUMN_X);

Index created.

SQL> set autotrace on
SQL> insert into perm_table_xmpl values ('A');

1 row created.


Execution Plan
----------------------------------------------------------
---------------------------------------------------------------------------------------------
| Id  | Operation                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |                 |     1 |   100 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | PERM_TABLE_XMPL |       |       |            |          |
---------------------------------------------------------------------------------------------



Statistics
----------------------------------------------------------
          3  recursive calls
         10  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        833  bytes sent via SQL*Net to client
        792  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>



The difference between the 2 was surprising. I am not sure why there were 20 db block gets for the temp table as opposed to 10 for the perm table, but the big surprise was the 0 redo size for the perm table versus 632 for the temp table.

I expected to see more redo for the perm table, given that I believe no redo is generated for temp tables - just the rollback segment data that a temp table generates (and its redo).
I certainly did not expect to see zero redo for session 2.

So I inserted a few more rows, and the stats were almost the same.

I would love to hear what is going on - why I had 0 redo for the 1st insert into the perm table, and why redo for inserts into the perm table is the same size as inserting into the temp table.

Regards
Doug

Tom Kyte

Followup  

August 17, 2012 - 11:59 am UTC

NEVER EVER NEVER NEVER

as in

DO NOT EVER

use sys or system, they are ours, they are not for your use. use your OWN ACCOUNTS.

if you do, you'll see something realistic:


ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> drop table temp_table_xmpl;

Table dropped.

ops$tkyte%ORA11GR2> drop table perm_table_xmpl;

Table dropped.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> CREATE GLOBAL TEMPORARY TABLE TEMP_TABLE_XMPL(COLUMN_X varchar2(30)) ON COMMIT DELETE ROWS;

Table created.

ops$tkyte%ORA11GR2> CREATE INDEX TEMP_INDEX ON TEMP_TABLE_XMPL(COLUMN_X);

Index created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> CREATE  TABLE PERM_TABLE_XMPL(COLUMN_X varchar2(30));

Table created.

ops$tkyte%ORA11GR2> CREATE INDEX PERM_INDEX ON PERM_TABLE_XMPL(COLUMN_X);

Index created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> insert into temp_table_xmpl values ( 'a' );

1 row created.


Statistics
----------------------------------------------------------
          5  recursive calls
         20  db block gets
          6  consistent gets
          0  physical reads
        616  redo size
        901  bytes sent via SQL*Net to client
       1011  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA11GR2> insert into temp_table_xmpl values ( 'a' );

1 row created.


Statistics
----------------------------------------------------------
          0  recursive calls
          3  db block gets
          1  consistent gets
          0  physical reads
        292  redo size
        907  bytes sent via SQL*Net to client
       1011  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> insert into perm_table_xmpl values ( 'a' );

1 row created.


Statistics
----------------------------------------------------------
         67  recursive calls
        100  db block gets
         59  consistent gets
          0  physical reads
      10120  redo size
        901  bytes sent via SQL*Net to client
       1011  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA11GR2> insert into perm_table_xmpl values ( 'a' );

1 row created.


Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
          1  consistent gets
          0  physical reads
        464  redo size
        909  bytes sent via SQL*Net to client
       1011  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed




sys and system are special, magical, not to be used ever.


Global Temporary table Performance

September 18, 2012 - 10:32 pm UTC

Reviewer: Srini from India

Hi Tom,

I read your "expert one on one Oracle" and stumbled at the following line:

"Just to drive home the point, the goal here is to not create tables in your stored procedures
at runtime. That is not the proper way to do this in Oracle. DDL is an expensive operation,
we want to avoid doing that at runtime. The temporary tables for an application should be
created during the application installation never at run‐time."

In our project, the OLTP application creates a global temporary table for every logged-in user, populates it, and drops it after reading the populated data. I flagged this as a NOT recommended approach and suggested they create the table once and use it multiple times. The developer, however, disagreed and said creating a table only takes milliseconds and will not create a performance issue at runtime.

I tend to disagree with him but I need to prove that this approach (creating tables per login) is not correct. How can this be proved?
Tom Kyte

Followup  

September 20, 2012 - 5:40 am UTC

ask him this:

why would you do something millions of times you only have to do once?

what happens WHEN your process crashes AFTER creating it, but BEFORE dropping it? where is your code to clean up orphan tables?

why would you do something millions of times you only have to do once?

what happens when two people create a table at the same time - or dozens? Have you tested the scalability?

why would you do something millions of times you only have to do once?

where are the stored procedures and views you have created that use these objects - oh wait, you cannot create stored procedures and views to use objects that do not exist.

why would you do something millions of times you only have to do once?

do you understand how to program efficiently in Oracle or did you stop after reading some sql server documentation?

why would you do something millions of times you only have to do once?

Can you tell which point I think is the most obvious - and the silliest thing they are doing?

tell you what, revoke create table from this schema - an application schema for security reasons shouldn't be able to create tables at runtime in production. It is not a privilege they should even have. Heck, the developer schema - where they put their application (their code, their views) should NOT have create table - they shouldn't be able to create a table in the first place.
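That is, something along these lines (schema name hypothetical) - revoke the privilege and create the GTT once, at install time:

```sql
REVOKE CREATE TABLE FROM app_schema;

-- created once, at installation - every session thereafter automatically
-- gets its own private instance of the data
CREATE GLOBAL TEMPORARY TABLE gtt ( x INT, y INT )
ON COMMIT DELETE ROWS;
```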


It sure looks like more than a few milliseconds


and it sure looks like it used a few more latches (which is a fancy name for a lock, a serialization device, a scalability inhibitor)

do they know that there are HUNDREDS (a little short of a thousand) sql statements executed for a create/drop pair?????

this is a no brainer, I don't get why a "developer" would do something over and over when.... they can do it once, have it be done, have it be documented, have it be managed....

One way seems easy; the other way seems way hard, way unreliable, way "not too performant", way over-privileged, and so on.


ops$tkyte%ORA11GR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> begin
  2          for i in 1 .. 1000
  3          loop
  4                  execute immediate 'create global temporary table gtt ( x int, y int )';
  5                  execute immediate 'drop table gtt';
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec null;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec runStats_pkg.rs_stop;
Run1 ran in 1492 cpu hsecs
Run2 ran in 0 cpu hsecs

Name                                  Run1        Run2        Diff
...
LATCH.PL/SQL warning settings       10,004           4     -10,000
LATCH.call allocation               10,010           0     -10,010
STAT...HSC Heap Segment Block       10,214          15     -10,199
LATCH.shared pool simulator         11,689           1     -11,688
STAT...cluster key scans            13,000           0     -13,000
STAT...calls to kcmgas              13,749           0     -13,749
LATCH.redo writing                  16,939           0     -16,939
STAT...cluster key scan block       17,000           0     -17,000
LATCH.messages                      17,271           0     -17,271
LATCH.redo allocation               18,925           1     -18,924
LATCH.undo global data              20,104           1     -20,103
STAT...commit cleanouts succes      22,398           0     -22,398
STAT...db block gets from cach      23,400           0     -23,400
STAT...commit cleanouts             24,402           0     -24,402
STAT...index fetch by key           25,001           0     -25,001
STAT...table scan rows gotten       25,055           0     -25,055
STAT...shared hash latch upgra      27,000           0     -27,000
LATCH.Change Notification Hash      36,621          15     -36,606
STAT...session cursor cache hi      45,790           2     -45,788
STAT...enqueue requests             46,422           3     -46,419
STAT...enqueue releases             46,421           2     -46,419
STAT...redo entries                 54,015          15     -54,000
STAT...index scans kdiixs1          66,200           0     -66,200
LATCH.DML lock allocation           70,403           1     -70,402
STAT...physical read total byt      73,728           0     -73,728
STAT...cell physical IO interc      73,728           0     -73,728
STAT...physical read bytes          73,728           0     -73,728
STAT...db block gets from cach      76,368          48     -76,320
STAT...db block gets                76,368          48     -76,320
STAT...parse count (total)          79,418           3     -79,415
STAT...no work - consistent re      81,404           0     -81,404
STAT...consistent gets from ca      83,419           9     -83,410
STAT...consistent gets - exami      90,338           4     -90,334
LATCH.enqueue hash chains           93,164           5     -93,159
STAT...db block changes             94,225          72     -94,153
STAT...calls to get snapshot s     117,406           2    -117,404
STAT...buffer is not pinned co     120,201           0    -120,201
STAT...execute count               122,208           5    -122,203
STAT...opened cursors cumulati     124,208           5    -124,203
STAT...consistent gets from ca     208,754          13    -208,741
STAT...consistent gets             208,754          13    -208,741
LATCH.shared pool                  223,452          52    -223,400
STAT...session uga memory max      393,072     123,452    -269,620
STAT...session logical reads       285,122          61    -285,061
LATCH.row cache objects            313,530          43    -313,487
STAT...IMU Redo allocation siz     387,612           0    -387,612
STAT...session uga memory          393,072           0    -393,072
STAT...recursive calls             398,266           3    -398,263
STAT...IMU undo allocation siz     403,700           0    -403,700
LATCH.cache buffers chains         649,409         236    -649,173
STAT...session pga memory max      655,360           0    -655,360
STAT...session pga memory          655,360    -196,608    -851,968
STAT...temp space allocated (b   1,048,576           0  -1,048,576
STAT...undo change vector size   4,327,112       3,424  -4,323,688
STAT...redo size                13,038,916       4,608 -13,034,308

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
1,549,667         547  -1,549,120##########%

PL/SQL procedure successfully completed.


thank you..

September 30, 2012 - 5:11 am UTC

Reviewer: Srini from India

Thanks very much for the detailed explanation Tom. Really useful.

December 06, 2012 - 6:33 am UTC

Reviewer: Karthik Perumal from Chennai,India

Really nice .... Can these tables be used in Reports?

Global Temporary Table For Pagination

April 22, 2013 - 1:19 pm UTC

Reviewer: Sweta from India

Hi Tom,
I am using a gtt for achieving pagination and filtering; the query has an ORDER BY and the data is huge. Earlier I was using a FOR loop to achieve pagination, but it killed the performance, so I had to go with the gtt concept.
Following is the query:


SELECT DISTINCT
C.Content_Sequence_ID, C.Content_ID,
ASDC_UTILS_PKG.Project_CSV( C.Content_Sequence_ID, i_User_Info , v_Bill_to_ID_Table_Seq) PROJECT_ID ,
C.Title, CTM.Content_Type, C.Revision, C.Version,ACS.Status, C.Updated_TS , CUTM.User_ID,
ASDC_UTILS_PKG.CUSTOMER_DETAIL_CSV( C.Content_Sequence_ID, i_User_Info,v_Bill_to_ID_Table_Seq) CUSTOMER_NAME,
EDM.EDM_Content_ID, NVL(C.Checkout_Status, 'N'), C.Checkedout_By, C.Checkedout_TS ,C.Description,
L.LANGUAGE_NAME, C.Mime_Type, C.File_Size, C.Publish_Date, C.EOL_Date, NVL(Mass_Publish, 0) ,
C.Created_TS, C.Publish_To_Customer, C.All_Individual_Flag, I.Image_ID
FROM
    CONTENT C,
    Cont_Type_Map CTM,
    Cont_User_Type_Map CUTM,
    Cont_Version_Map CVM,
    Language L,
    EDM_MAP EDM,
    ASDC_Content_Status ACS,
    IMG_Content_Map I
WHERE
    C.CONTENT_SEQUENCE_ID = CVM.CONTENT_SEQUENCE_ID
AND C.Curr_Status_ID = ACS.Status_ID
AND C.CONTENT_SEQUENCE_ID = CUTM.CONTENT_SEQUENCE_ID
AND C.CONTENT_SEQUENCE_ID = EDM.CONTENT_SEQUENCE_ID
AND C.Language_ID = L.Language_ID
AND C.Content_Type_Sequence_ID = CTM.Content_Type_Sequence_ID
AND CUTM.CONT_USER_TYPE_ID = 'A'
AND C.CONTENT_SEQUENCE_ID = I.CONTENT_SEQUENCE_ID(+)
AND C.Created_TS >= (SYSDATE - i_Duration)
AND C.CURR_STATUS_ID <> 10
AND
(
( EXISTS( SELECT bill_to_id FROM cont_cust_bill_to_map BWVCCBTM
where Customer_ID = v_WF_Customer_ID
and C.Content_Sequence_ID= BWVCCBTM.Content_Sequence_ID
)
AND NOT EXISTS( SELECT project_id FROM cont_cust_bill_to_map PWVCCBTM
where Project_ID IS NOT NULL
and C.Content_Sequence_ID= PWVCCBTM.Content_Sequence_ID
)
)
OR
EXISTS( SELECT project_id FROM cont_cust_bill_to_map PWVCCBTM
where Project_ID in ( SELECT DISTINCT Project_ID FROM ASDC_WV_Project_Map WHERE Workview_ID = i_Workview_ID ) AND Customer_ID = v_WF_Customer_ID
and C.Content_Sequence_ID= PWVCCBTM.Content_Sequence_ID
)
)
ORDER BY
DECODE ( i_Sort, 'UPDATED_TS_ASC', UPDATED_TS, NULL) ASC,
DECODE ( i_Sort, 'UPDATED_TS_DESC', UPDATED_TS, NULL) DESC,
DECODE ( i_Sort, 'USER_ID_ASC', LOWER(USER_ID), NULL) ASC,
DECODE ( i_Sort, 'USER_ID_DESC', LOWER(USER_ID), NULL) DESC,
DECODE ( i_Sort, 'CUSTOMER_NAME_ASC', LOWER(CUSTOMER_NAME), NULL) ASC,
DECODE ( i_Sort, 'CUSTOMER_NAME_DESC', LOWER(CUSTOMER_NAME), NULL) DESC,
DECODE ( i_Sort, 'PROJECT_ID_ASC', LOWER(PROJECT_ID), NULL) ASC,
DECODE ( i_Sort, 'PROJECT_ID_DESC', LOWER(PROJECT_ID), NULL) DESC,
DECODE ( i_Sort, 'TITLE_ASC', LOWER(TITLE), NULL) ASC,
DECODE ( i_Sort, 'TITLE_DESC', LOWER(TITLE), NULL) DESC,
DECODE ( i_Sort, 'CONTENT_TYPE_ASC', LOWER(CONTENT_TYPE), NULL) ASC,
DECODE ( i_Sort, 'CONTENT_TYPE_DESC', LOWER(CONTENT_TYPE), NULL) DESC,
DECODE ( i_Sort, 'STATUS_ASC', LOWER(STATUS), NULL) ASC,
DECODE ( i_Sort, 'STATUS_DESC', LOWER(STATUS), NULL) DESC,
DECODE ( i_Sort, 'VERSION_ASC', VERSION, NULL) ASC,
DECODE ( i_Sort, 'VERSION_DESC', VERSION, NULL) DESC;

After dumping data into the gtt, we delete the data according to the filter conditions applied (if any):
IF i_Filter(i).FilterCriteria IS NOT NULL AND i_Filter(i).FilterValue IS NOT NULL THEN

    IF UPPER(i_Filter(i).FilterCriteria) = 'TITLE' THEN
        DELETE FROM gtt_Content C
         WHERE NOT (    UPPER(C.Title)      LIKE UPPER(v_Filter_Value||'%') ESCAPE '\'
                     OR UPPER(C.Content_ID) LIKE UPPER(v_Filter_Value||'%') ESCAPE '\' );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'CONTENT_TYPE' THEN
        DELETE FROM gtt_Content C
         WHERE NOT ( UPPER(C.Content_Type) LIKE UPPER(v_Filter_Value||'%') ESCAPE '\' );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'USER_ID' THEN
        DELETE FROM gtt_Content C
         WHERE NOT ( UPPER(C.Author) LIKE UPPER(v_Filter_Value||'%') ESCAPE '\' );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'CUSTOMER_NAME' THEN
        DELETE FROM gtt_Content C
         WHERE NOT EXISTS ( SELECT customer_name
                              FROM cont_cust_bill_to_map INCCBTM
                             WHERE (    UPPER(customer_name) LIKE UPPER('%'||v_Filter_Value||'%') ESCAPE '\'
                                     OR UPPER(bill_to_id)    LIKE UPPER('%'||v_Filter_Value||'%') ESCAPE '\' )
                               AND C.Content_Sequence_ID = INCCBTM.Content_Sequence_ID );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'PROJECT_ID' THEN
        --vQuery := vQuery || ' AND UPPER(CCBTM.Project_ID) LIKE UPPER('''|| v_Filter_Value||'%'') ESCAPE ''\''';
        DELETE FROM gtt_Content C
         WHERE NOT EXISTS ( SELECT project_id
                              FROM cont_cust_bill_to_map INCCBTM
                             WHERE UPPER(project_id) LIKE UPPER(v_Filter_Value||'%') ESCAPE '\'
                               AND C.Content_Sequence_ID = INCCBTM.Content_Sequence_ID );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'STATUS' THEN
        DELETE FROM gtt_Content C
         WHERE NOT ( UPPER(C.Status) LIKE UPPER(v_Filter_Value||'%') ESCAPE '\' );
        v_Status_Filter := 'Y';

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'STATUS_DUMMY' THEN
        --v_Query := v_Query || ' AND UPPER(ACS.Status) LIKE UPPER('''|| v_Filter_Value ||'%'') ESCAPE ''\'' ' ;
        v_Status_Filter := 'Y';

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'VERSION' THEN
        DELETE FROM gtt_Content C
         WHERE NOT ( C.Revision LIKE v_Filter_Value||'%' );

    ELSIF UPPER(i_Filter(i).FilterCriteria) = 'UPDATED_TS' AND v_Filter_Value <> 0 THEN
        DELETE FROM gtt_Content C
         WHERE NOT ( C.Updated_TS BETWEEN SYSDATE - v_Filter_Value AND SYSDATE );
    END IF;

END IF;
Is there any simpler way to achieve dynamic ORDER BY and pagination?


Hoping to get your expert advice soon.

Thanks in advance
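For comparison, dynamic ORDER BY plus pagination is commonly done in a single query with the ROW_NUMBER() analytic function, avoiding the temporary table and the delete passes entirely. A sketch only - the bind names (:i_sort, :first_row, :last_row) are illustrative and the inner query stands in for the big join above, with the filter criteria moved into its WHERE clause instead of being applied as DELETEs:

```sql
select *
  from ( select q.*,
                row_number() over
                ( order by decode( :i_sort, 'TITLE_ASC',  lower(title) ) asc,
                           decode( :i_sort, 'TITLE_DESC', lower(title) ) desc,
                           content_sequence_id ) rn   -- tie-breaker for a stable order
           from ( /* the big join query goes here, with the filter
                     predicates in its WHERE clause */
                  select c.content_sequence_id, c.title
                    from content c ) q )
 where rn between :first_row and :last_row
 order by rn;
```

With a top-N predicate on ROW_NUMBER() the optimizer can stop after the last requested row, which is usually far cheaper than materializing the whole result set first.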

How can I combine a padded 4-digit branch location like 0230 with a generated sequence?

July 01, 2014 - 9:32 am UTC

Reviewer: Waheed Ahmed from Pakistan

How is it possible, in a form, when a record is created, to pad the branch location to 4 digits (like 0230 - it comes from the emp location ID, a 4-character field) and combine it with a generated sequence so the result looks like 023000001? Kindly resolve this issue.
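If the goal is the 4-character location followed by a zero-padded sequence number, that is string concatenation with LPAD (padding on the left), not RPAD. A sketch - the sequence name, table, and column names are assumptions, not from your schema:

```sql
-- defined once:
create sequence branch_seq;

-- build values like 023000001:
-- 4-char zero-padded location || 5-digit zero-padded sequence number
select lpad( e.location_id, 4, '0' ) || lpad( branch_seq.nextval, 5, '0' ) as record_id
  from emp e
 where e.empno = :empno;
```

Note that the result is a string; if you need to sort or range-scan on it numerically, store the two parts in separate columns instead.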