
Question and Answer

Tom Kyte

Thanks for the question, H.

Asked: December 13, 2002 - 4:21 pm UTC

Last updated: September 18, 2012 - 2:04 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Tom,
I am trying to interpret this output and it is not making much sense. The TRACE table has only 220 rows. What is your take on this?

Thanks.
********************************************************************************

select TRACE01.TYPE ,TRACE01.STATUS ,TRACE01.SEQUENCE into :b0,:b1,:b2
from
TRACE TRACE01 where TRACE01.TYPE=NVL(RTRIM(:b3,' '),' ')


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.01 0.00 0 0 0 0
Execute 19012 3.10 3.56 0 0 0 0
Fetch 19012 1056.38 1517.15 0 43282981 76048 19012
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 38028 1059.49 1520.71 0 43282981 76048 19012

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 33 (OPS$MAXBATCH)

Rows Row Source Operation
------- ---------------------------------------------------
4905 TABLE ACCESS FULL TRACE


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
4905 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'TRACE'

********************************************************************************

and Tom said...

The high water mark (the biggest the table ever was) is very high (about 2275 blocks). There were a lot more than 220 rows at one point.

A table must be full scanned up to its HWM; consider:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t pctfree 90 pctused 10
2 as
3 select * from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> delete from t where object_id <> 55;

23663 rows deleted.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on
ops$tkyte@ORA817DEV.US.ORACLE.COM> select object_name from t;

OBJECT_NAME
------------------------------
I_UGROUP1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
0 recursive calls
27 db block gets
5395 consistent gets
0 physical reads
130980 redo size
377 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off


See, 5,395 consistent gets -- because the table used to be that big. To lower the HWM you would either drop the table and repopulate it, or truncate it and repopulate it. In this case, however, an index on TYPE would "fix" the problem as well.


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table tmp as select * from t;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> truncate table t;

Table truncated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t select * from tmp;

1 row created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table tmp;

Table dropped.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on
ops$tkyte@ORA817DEV.US.ORACLE.COM> select object_name from t;

OBJECT_NAME
------------------------------
I_UGROUP1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
1 consistent gets
0 physical reads
0 redo size
377 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off

back down to 1 consistent get...



Comments

How to reset HWM

Arun Gupta, December 14, 2002 - 2:52 pm UTC

Tom
I have a table which grows from, say, 200 rows in the morning to about 11,000 rows in the evening, when a batch process deletes rows from it. Since this table stores session data, it is used very heavily. What approach should I adopt to make sure that statistics are up to date during the day and the HWM is reset every night, so the CBO is not thrown off track as you demonstrated in your example? Is there a way to reset the HWM without losing availability?

In your example, if you analyzed t after delete, what would be the effect?
Thanks

Tom Kyte
December 14, 2002 - 2:55 pm UTC

I showed you the two ways to reduce the HWM for a simple table like this. truncate and drop.


Use the scratch table+truncate+insert if you wish to lower the HWM.

Looks like you always full scan this table, no indexes - so stats are perhaps not relevant here. But if they are -- analyze during the day from time to time. 11,000 rows will take seconds.

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:5554725037775
for the last question



Alter table move

Marc, December 15, 2002 - 6:49 am UTC

Hi Tom

Instead of the scratch table + truncate + insert to lower the HWM, I prefer "alter table t move". It is just one statement, with guaranteed no conflicts with any constraints, indexes, triggers, grants, etc.



Tom Kyte
December 15, 2002 - 10:26 am UTC

excellent point, thanks -- totally slipped my mind.

remember to rebuild any indexes afterwards, but yes, this is a better approach.

High Water Mark

H, December 16, 2002 - 9:54 am UTC

Tom,
How did you get the 2275 blocks?

Tom Kyte
December 16, 2002 - 10:51 am UTC

ops$tkyte@ORA920> select 43282981/19012 from dual;

43282981/19012
--------------
    2276.61377



query mode gets divided by number of times the query was executed. 
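That division can be sanity-checked directly (a quick sketch in Python, using the figures from the tkprof report at the top of this page):

```python
# Figures from the tkprof report above.
query_mode_gets = 43_282_981   # "query" column of the Fetch line
executions = 19_012            # "count" column of the Execute line

# Each execution full-scans the table, so gets per execution
# approximate the number of blocks below the high water mark.
blocks_at_hwm = query_mode_gets / executions
print(round(blocks_at_hwm, 2))  # → 2276.61
```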

No. Of Blocks

H, December 16, 2002 - 3:09 pm UTC

At the top of the tkprof output, query and current are said to be buffers, not blocks. That is what is confusing me. In your book you reference them as blocks. I am assuming they are blocks; hope my assumption is correct.

Also, I think that there is a code issue here (SQL within a FOR loop), as this is a vendor app.

What stats can I look at in the data dictionary to find the HWM?

Thanks.

Tom Kyte
December 16, 2002 - 3:13 pm UTC

so, what is the difference between a buffer and a block in your mind? think of them as the same. A buffer is where we put blocks after reading them.


search this site for

show_space hwm



No. Of Blocks

H, December 16, 2002 - 3:18 pm UTC

I think of a buffer as a variable space versus a block, which is fixed (in our case 8k); to me a buffer may have more than one block. I never thought of it as a 1-1 mapping.

Thanks

Tom Kyte
December 16, 2002 - 3:32 pm UTC

there is the buffer cache -- that's the big thing with lots of blocks in it.


then there is a block buffer -- a buffered block. they are 1-1.

Clarification

A reader, December 17, 2002 - 12:25 am UTC

"
query mode gets divided by number of times the query was executed.
"

Does this calculation not give the # of blocks in consistent read mode per execute? How can this give the total # of blocks in the table?

Also, why is there a difference in the number of rows returned
in tkprof (19012) vs the execution plan (4905)?

Thanks

Tom Kyte
December 17, 2002 - 7:57 am UTC

Rows Row Source Operation
------- ---------------------------------------------------
4905 TABLE ACCESS FULL TRACE

TABLE ACCESS FULL -- for every execute, every block was read. Take total number of blocks read during the repeated executions of the query, divide by the number of executions -- you get the number of blocks in the table....


The rows are not different -- the 4905 isn't part of the query plan. It is the number of rows flowing through that phase of the actual execution plan's execution at runtime (eg: it is reality, it is what happened).

Now, note that you should not expect the ESTIMATED cardinality to be dead on or even close all of the time. it is an ESTIMATION.

# of blocks

H, December 17, 2002 - 10:06 am UTC

The DD shows blocks and empty blocks as 2 and 22 respectively, so how come the number is 2280 (total number of blocks (query and current) / executions)?

Can you please give some expert thoughts to clear up this disconnect?

Tom Kyte
December 18, 2002 - 9:56 am UTC

erh?

what "dd" -- do you mean Data Dictionary??? I have to assume so, that you don't mean dd the unix utility (just a gentle reminder to all -- acronyms are great but.... rather than saving two or three keystrokes, best to be clear...)


blocks and empty blocks are as of the LAST ANALYZE. They are out of date as soon as you stuff more data in there.

Search this site for

"show_space"

to find a utility that will give you instant information as of right now on the object.


Thanks ....

A reader, December 17, 2002 - 4:50 pm UTC

Tom,

The calculation of the # of blocks in the table makes much sense now, thanks.

As for the number of rows: the original posting says the table has only
220 rows. The rows flowing through during TABLE ACCESS FULL is 4905 (> 220).
From the tkprof, the rows got from 19012 executions is 19012. Could you put
all these together in a cohesive fashion so that I understand better?

Thanks

Tom Kyte
November 23, 2009 - 2:56 pm UTC

Ok, so it took some amount of imagining but here is their scenario replicated to a tee:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x int, y char(2000) ) pctfree 90 pctused 10;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert into t select rownum, rownum from all_objects where rownum < 2275;
2274 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> delete from t where x > 220;
2054 rows deleted.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec show_space( 'T' )
Free Blocks.............................2054
Total Blocks............................2304
Total Bytes.............................18874368
Unused Blocks...........................29
Unused Bytes............................237568
Last Used Ext FileId....................7
Last Used Ext BlockId...................38793
Last Used Block.........................35

PL/SQL procedure successfully completed.

Ok, there we go. We have their table -- same number of blocks -- same number of rows.... Now, let me simulate their process.

Remember -- they had 4 parses, so I'll set up 4 parses:




ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set sql_trace=true;
Session altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
2 l_rec t%rowtype;
3 begin
4 for i in 1 .. 19012
5 loop
6 begin
7 if ( mod(i,4) = 0 )
8 then
9 select * into l_rec from t where x = mod(i,220)+1;
10 elsif ( mod(i,4) = 1 )
11 then
12 select * into l_rec from t where x = mod(i,220)+1;
13 elsif ( mod(i,4) = 2 )
14 then
15 select * into l_rec from t where x = mod(i,220)+1;
16 else
17 select * into l_rec from t where x = mod(i,220)+1;
18 end if;
19 exception
20 when no_data_found then null;
21 end;
22 end loop;
23 end;
24 /

PL/SQL procedure successfully completed.


Now, I do a tkprof with default settings (eg: aggregate=YES), so there will be one report for the above 4 queries -- their text is the same and tkprof rolls them together:


SELECT *
FROM
T WHERE X = MOD (:b1,220) + 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.00 0.00 0 0 0 0
Execute 19012 2.05 1.95 0 0 0 0
Fetch 19012 536.14 537.00 0 43234322 133084 19012
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 38028 538.19 538.95 0 43234322 133084 19012

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 605 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
4753 TABLE ACCESS FULL T


look familiar? same number of query mode gets more or less (full scan of 2275 blocks over and over)....

Same number of rows processed, 19012

About the 4,753 rows in the query plan there. Well, why is it that number and not one of:

220*19012 OR
19012 ???

In order to understand that, you have to understand what that number is. The 4,753 is the number of rows flowing OUT OF the full scan (so it would not be 220/full scan since the query gets 1 row/full scan -- at MOST we would expect it to be 19012 in this case).

So, why is it 4,753? That is a function of the fact they parsed the query 4 times (had 4 cursors for it at some point like I did) AND they used aggregate=yes by default on the tkprof command line.

Aggregate = yes aggregates all of the EXEC records (with consistent gets, cpu times, etc) but it cannot/does not aggregate the STAT records. The STAT records are the actual query plans with row counts of the rows that flowed out of that step (note: in 8.0 and before -- it was the number of rows that flowed INTO that step so beware). There are 4 sets of stat records for this tkprof since there were four queries. You are seeing the stat records for just ONE OF THEM. When I run the tkprof with aggregate = no, I get:


SELECT *
FROM
T WHERE X = MOD (:b1,220) + 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 4753 0.56 0.36 0 0 0 0
Fetch 4753 134.05 134.48 0 10809356 33271 4753
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9507 134.61 134.84 0 10809356 33271 4753

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 605 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
4753 TABLE ACCESS FULL T

--------------------------------------------------------------------------------

SELECT *
FROM
T WHERE X = MOD (:b1,220) + 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 4753 0.60 0.62 0 0 0 0
Fetch 4753 133.89 134.01 0 10808322 33271 4753
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9507 134.49 134.63 0 10808322 33271 4753

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 605 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
4753 TABLE ACCESS FULL T


SELECT *
FROM
T WHERE X = MOD (:b1,220) + 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 4753 0.42 0.44 0 0 0 0
Fetch 4753 134.19 134.15 0 10808322 33271 4753
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9507 134.61 134.59 0 10808322 33271 4753

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 605 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
4753 TABLE ACCESS FULL T

--------------------------------------------------------------------------------

SELECT *
FROM
T WHERE X = MOD (:b1,220) + 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 4753 0.47 0.53 0 0 0 0
Fetch 4753 134.01 134.36 0 10808322 33271 4753
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9507 134.48 134.89 0 10808322 33271 4753

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 605 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
4753 TABLE ACCESS FULL T
See -- 4 plans, 4 queries -- 4 stat records each
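A quick arithmetic check (sketched in Python, with the Fetch-line figures copied from the four aggregate=no reports above) confirms that the aggregate=yes report is simply the column-wise sum of the four per-cursor reports -- everything adds up except the STAT lines, which tkprof shows for only one cursor:

```python
# Fetch-line figures (count, query, current, rows) from the four
# aggregate=no reports above; the first cursor did the hard parse and
# read slightly more blocks.
cursors = [
    (4753, 10_809_356, 33_271, 4753),
    (4753, 10_808_322, 33_271, 4753),
    (4753, 10_808_322, 33_271, 4753),
    (4753, 10_808_322, 33_271, 4753),
]

totals = [sum(col) for col in zip(*cursors)]
print(totals)  # → [19012, 43234322, 133084, 19012], the aggregate=yes Fetch line
```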

# of blocks

H, December 18, 2002 - 10:01 am UTC

Yes DD=Data Dictionary, I did use your utility and got the same results.


Tom Kyte
December 18, 2002 - 12:32 pm UTC

then you must have rebuilt the object AFTER you ran the tkprof but BEFORE you did the analyze.  Take my example above that EXACTLY replicates your scenario:

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select blocks, empty_blocks from user_tables where table_name = 'T';

    BLOCKS EMPTY_BLOCKS
---------- ------------
      2274           29


ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select blocks, empty_blocks from user_tables where table_name = 'T';


    BLOCKS EMPTY_BLOCKS
---------- ------------
       220           35



You rebuilt it to lower the HWM and did that after the tkprof, before the analyze.
 

# of Blocks

H, December 18, 2002 - 12:56 pm UTC

Thanks for the very detailed reply to the previous reader's questions. That was going to be my next question. Thanks to your clear explanation, many things are making sense now.

Anyway, you got me thinking, and I did check the data dictionary before any rebuilds but ran your utility after alter table compute. But I am not sure of a few things, so maybe what you are saying is the reason for the inconsistent # of blocks. The only difference I remember is using the compute option versus estimate. Anyway, I am running another test, so I will keep all the knowledge gained from this discussion in mind.

THANKS for your expert advice!

# of blocks

H, December 18, 2002 - 2:08 pm UTC

Would you be willing to share the process (secret) of accurate estimation to replicate our exact environment?

Tom Kyte
December 18, 2002 - 2:31 pm UTC

what do you mean -- ? do you mean "how did you manage to pretty much replicate our situation, what was the thought process?"

the physical process isn't secret -- anyone would be able to replicate your numbers exactly as I did simply by running the above script.

Just Brilliant !

A reader, December 18, 2002 - 8:23 pm UTC

Tom,

Thanks very much for the great explanation and the
brilliant example to simulate the results.

Thanks so much for your time and for having gone to
great lengths to prove the technical theory so that
we readers can comprehend it.

I understand the arithmetic very well now. I
appreciate it very much. One of the useful things I
learn from this website is the technique of
setting up test cases to prove theories.

Your example and the derivation of results are
superb!

Best Regards

tkprof equal parses= execute how to overcome

umesh, February 13, 2003 - 12:46 am UTC

my parses = execute = 4, fetch = 8 for a single row retrieved
consistent gets = 1
all other attributes from tkprof seem to be ok
SELECT ROWID,UNIT_CODE, NAME, DEPT, ADDR_1, ADDR, LOCATION, STATE,
POSTAL_CODE, COUNTRY, COMPANY_DOCTOR, DOCTOR_ROLE, CONTACT_PHONE_NO,
CONTACT_FAX_NO, FDA_REGISTRATION_NUMBER, DATABASE_ID, TYPE_COMPANY_UNIT,
USER_ID, TO_CHAR(DATE_MODIFIED,'YYYY-MM-DD HH24:MI:SS'),
TO_CHAR(DATE_CREATED,'YYYY-MM-DD HH24:MI:SS'), RECORD_ID, UNIT_TYPE_FLAG,
DGFPS_NO, INTERCHANGE_ID, E2B_FILE_FORMAT
FROM
COMPANY_UNIT Where UNIT_CODE = 'BIO'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 8 0.00 0.00 0 8 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16 0.00 0.00 0 8 0 4

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 43

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID COMPANY_UNIT
1 INDEX UNIQUE SCAN (object id 25233)
According to the book Expert One-on-One you recommend using bind variables, but here I need a constant.
In another run I get parse = execute = fetch = 159 for the same number of rows processed. How do I go about fixing this?

Tom Kyte
February 13, 2003 - 9:29 am UTC

You need to find the programmer that wrote this code and teach them "parse once, execute many, don't close that cursor so fast".

That will fix the parse count -- it should be 1. the fetch count should be 4 as well -- they must have coded something like:

prepare
open
fetch
fetch
close

they should

if ( firsttime ) prepare
open
fetch using array fetch size of 2



Same with the other more egregious example of 159. it is a programming error on their part. they are too quick to close.
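The effect of the two patterns on the parse count can be illustrated with a toy model (a hypothetical `Cursor` class standing in for a real driver's prepared statement, not any actual Oracle API):

```python
class Cursor:
    """Toy model of a server-side cursor; counts parse (prepare) calls."""
    parse_count = 0

    def __init__(self, sql):
        self.sql = sql
        Cursor.parse_count += 1   # every prepare is a parse

    def fetch(self, key):
        return ("row-for", key)   # stand-in for the real fetch

# Anti-pattern: prepare (parse) inside the loop, close after every row.
Cursor.parse_count = 0
for i in range(100):
    c = Cursor("SELECT ... WHERE unit_code = :b1")
    c.fetch("BIO")
print(Cursor.parse_count)  # → 100 parses for 100 executions

# Parse once, execute many: prepare before the loop, reuse the cursor.
Cursor.parse_count = 0
c = Cursor("SELECT ... WHERE unit_code = :b1")
for i in range(100):
    c.fetch("BIO")
print(Cursor.parse_count)  # → 1 parse for 100 executions
```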



parse=execute

umesh, February 14, 2003 - 2:42 am UTC

Tom

why am I getting parse = executes?
I am the only user connected to the database

scott>var dno number
scott>exec :dno:=10

PL/SQL procedure successfully completed.

scott>select * from emp where deptno=:dno;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

scott>/

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

scott>/

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

scott>/

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

scott>/

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

********************************************************************************

BEGIN :dno:=10; END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 35
********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.02 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.01 0.03 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 35
********************************************************************************

select *
from
emp where deptno=:dno


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.01 0.01 0 0 0 0
Execute 5 0.00 0.00 0 0 0 0
Fetch 10 0.00 0.00 0 10 20 15
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20 0.01 0.01 0 10 20 15

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 35

Rows Row Source Operation
------- ---------------------------------------------------
3 TABLE ACCESS FULL EMP




********************************************************************************


Tom Kyte
February 14, 2003 - 7:40 am UTC

because you are using sqlplus, a simplistic command line tool that accepts a query, parses it, describes it, executes it, prints it over and over and over.

it is not "smart enough" to know that you are doing the same query over and over -- it just parses each time.

fortunately, developers are smarter than sqlplus and can do better -- they can realize "hey, I'm going to execute this more than once" and will just parse it once (well, the smart ones do!)

Row source 0

Trevor, March 31, 2003 - 5:43 pm UTC

Tom,

I am sure you have mentioned somewhere on this site
why the "rows" output with the row source operation
from tkprof is sometimes 0.
Can you point me to a prior article?
I have had no luck searching.

Regards
Trevor

Tom Kyte
March 31, 2003 - 6:53 pm UTC


don't remember such an article personally...  if you see the plan in the tkprof without using the explain=u/p, then the STAT records got written.

Perhaps you are using explain=u/p to generate the plan?  for example, if I:

SQL> alter session set sql_trace=true;
SQL> select * from emp;
SQL> host <run tkprof NOW with explain=u/p>

I see:

select * from  emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0          14
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0          14

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 315  (OPS$TKYTE)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0   TABLE ACCESS (FULL) OF 'EMP'

then I exit and go back into sqlplus:

$ exit

SQL> exit

and quit sqlplus (closing the trace file) and run tkprof again, I see:

select * from  emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0          14
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0          14

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 315  (OPS$TKYTE)

Rows     Row Source Operation
-------  ---------------------------------------------------
     14  TABLE ACCESS FULL OBJ#(48637) (cr=4 r=0 w=0 time=226 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     14   TABLE ACCESS (FULL) OF 'EMP'


So - maybe it is the Execution Plan you are seeing with zeroes and that just means "all cursors were not closed OR trace file was truncated -- hit max dump size" 

My TKPROF output

Rory Concepcion, April 03, 2003 - 2:35 am UTC

Hi Tom,

I executed a statement which I would use for my daily extraction for FTP to the data warehouse. I traced it, but I didn't wait for the statement to finish because it had been running for half an hour with still no output. I have the statement here and the tkprof report. Could you help me optimize this query? Thanks.

SELECT 'A'||'|'||RTRIM(AC.CUSTNUMBER)||'|'||AD.STREETNAME||'|'||AD.CITY||
'|'||AD.POSTCODE||'|'||AD.FLOOR||'|'||AD.BUILDING||'|'||AD.NO||'|'||AD.COUNTRY||
'|'||AC.CHILDNUM||'|'||AC.HOUSEHOLDNUM||'|'||AC.YEARREV||'|'||AC.EMAIL||
'|'||AC.SALUTATION||'|'||AC.FIRSTNAME||'|'||AC.LASTNAME||'|'||AC.MARITALSTATUS||
'|'||M.TRANSLATION||'|'||TO_CHAR(CO.DATEOFBIRTH,'YYYYMMDDHH24MISS')||'|'||AC.SEX
||
'|'||AC.EDUCATION||'|'||GS.OTHRSUBS1||'|'||GS.OTHRSUBS2||'|'||GS.OTHRSUBS3||
'|'||GS.OTHRSUBS4||'|'||GS.OTHRSUBS5||'|'||AC.SPOUSENAME||'|'||AC.EMPNAME||
'|'||AC.EMPSTATUS||'|'||AC.PROFESSION||'|'||AC.SPOUSEOCCPTN
FROM ACCOUNT AC, CONTACT CO, ADDRESS AD, GSMCUSTACCTINFO GS, MESSAGES M
WHERE RTRIM(AC.PRIMARYKEY) = RTRIM(CO.FOREIGNKEY)
and GS.FOREIGNKEY (+) = AC.PRIMARYKEY
AND RTRIM(AC.PRIMARYKEY) = RTRIM(AD.ACCOUNTKEY)
AND AC.RECORDTYPE not in('ARTICMSTmp','ARTProspec')
AND AC.YEARREV IN('AIncomeA','AIncomeB','AIncomeC','AIncomeD','AIncomeE',NULL)
AND M.MSGCODE = CO.NATIONALITY
AND TO_CHAR(AD.CREATED,'YYYYMMDDHH24MISS') > '20030401010101'
AND AD.EXPIRYDATE IS NULL

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.57 6.78 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1019.56 1251.56 8238702 8635100 1200 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1020.13 1258.34 8238702 8635100 1200 3

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 70 (POINT)

Rows Row Source Operation
------- ---------------------------------------------------
1 CONCATENATION
0 HASH JOIN
175 TABLE ACCESS FULL ADDRESS
0 NESTED LOOPS OUTER
1 HASH JOIN
19279 HASH JOIN
56870 TABLE ACCESS FULL MESSAGES
1400107 TABLE ACCESS FULL CONTACT
0 TABLE ACCESS FULL ACCOUNT
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
0 INDEX RANGE SCAN (object id 8484)
0 NESTED LOOPS OUTER
1 HASH JOIN
175 TABLE ACCESS FULL ADDRESS
1 HASH JOIN
4 TABLE ACCESS FULL ACCOUNT
19279 HASH JOIN
56870 TABLE ACCESS FULL MESSAGES
1400107 TABLE ACCESS FULL CONTACT
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
0 INDEX RANGE SCAN (object id 8484)
0 NESTED LOOPS OUTER
1 HASH JOIN
175 TABLE ACCESS FULL ADDRESS
15 HASH JOIN
92 TABLE ACCESS FULL ACCOUNT
19279 HASH JOIN
56870 TABLE ACCESS FULL MESSAGES
1400107 TABLE ACCESS FULL CONTACT
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
0 INDEX RANGE SCAN (object id 8484)
0 NESTED LOOPS OUTER
1 HASH JOIN
38 HASH JOIN
56870 TABLE ACCESS FULL MESSAGES
173 HASH JOIN
134 TABLE ACCESS FULL ACCOUNT
1400107 TABLE ACCESS FULL CONTACT
175 TABLE ACCESS FULL ADDRESS
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
0 INDEX RANGE SCAN (object id 8484)
3 NESTED LOOPS OUTER
4 HASH JOIN
179 HASH JOIN
56870 TABLE ACCESS FULL MESSAGES
500 HASH JOIN
330 TABLE ACCESS FULL ACCOUNT
1400107 TABLE ACCESS FULL CONTACT
175 TABLE ACCESS FULL ADDRESS
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
3 INDEX RANGE SCAN (object id 8484)
0 NESTED LOOPS OUTER
0 HASH JOIN
0 HASH JOIN
0 NESTED LOOPS
82 TABLE ACCESS FULL ACCOUNT
81 TABLE ACCESS FULL CONTACT
0 TABLE ACCESS FULL MESSAGES
0 TABLE ACCESS FULL ADDRESS
0 TABLE ACCESS BY INDEX ROWID GSMCUSTACCTINFO
0 INDEX RANGE SCAN (object id 8484)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 CONCATENATION
0 HASH JOIN
175 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
0 NESTED LOOPS (OUTER)
1 HASH JOIN
19279 HASH JOIN
56870 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
1400107 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'GSMCAI_FOREIGNKEY' (NON-UNIQUE)
0 NESTED LOOPS (OUTER)
1 HASH JOIN
175 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
1 HASH JOIN
4 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
19279 HASH JOIN
56870 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
1400107 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'GSMCAI_FOREIGNKEY'
(NON-UNIQUE)
0 NESTED LOOPS (OUTER)
1 HASH JOIN
175 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
15 HASH JOIN
92 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
19279 HASH JOIN
56870 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
1400107 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'GSMCAI_FOREIGNKEY'
(NON-UNIQUE)
0 NESTED LOOPS (OUTER)
1 HASH JOIN
38 HASH JOIN
56870 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
173 HASH JOIN
134 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
1400107 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
175 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'GSMCAI_FOREIGNKEY'
(NON-UNIQUE)
3 NESTED LOOPS (OUTER)
4 HASH JOIN
179 HASH JOIN
56870 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
500 HASH JOIN
330 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
1400107 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
175 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
3 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'GSMCAI_FOREIGNKEY'
(NON-UNIQUE)
0 NESTED LOOPS (OUTER)
0 HASH JOIN
0 HASH JOIN
0 NESTED LOOPS
82 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ACCOUNT'
81 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CONTACT'
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'MESSAGES'
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'ADDRESS'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'GSMCUSTACCTINFO'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'GSMCAI_FOREIGNKEY'
(NON-UNIQUE)


Hope you can help me with this. I don't have much idea how to fix statements with so many LIOs.

Tom Kyte
April 03, 2003 - 7:53 am UTC

would there be a whole bunch of views hiding in there?

too bad you killed it -- you could have been a microsecond away from getting the entire result set.


Tell you what -- when I'm taking questions -- feel free to post this as a new question. You would include relevant things like all of the create view statements that go with it, database versions, and configuration information (*_area_size settings and their ilk, and so on). This isn't the right place for it.

Consistent gets ?

Deepak Gupta, April 04, 2003 - 5:51 am UTC

Dear Mr. Tom,

Hi,


From your reply I interpret that a large number of consistent gets is not very good, and that it is due to a very high HWM or a very big table.
To solve this problem, either use the ALTER TABLE ... MOVE command or export and import.

My question is: how do I know that a table has a very high HWM, and what should the ratio (approx.) be between consistent gets and the total number of rows (or some other measure)?

Thanks in advance.


Tom Kyte
April 04, 2003 - 6:44 am UTC

It is the RARE, obscure case where it is due to a table scan up to the HWM.

It is about a billion times more common that the causes are:

o overuse of indexes
o improperly indexed tables
o poorly designed data models that force us to do a zillion LIOs to get the answer.


don't concentrate on a single fact; it is not always the HWM -- in fact, it almost always *isn't*.

concentrate on identifying the queries that have the most impact on your system (statspack). concentrate on reducing their LIOs -- which may be done by

a) executing them less often -- did you really need to run it? (many times, surprisingly, the answer is "NO")

b) executing them more efficiently.
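
To answer the "how do I detect a high HWM" part of the question above -- a minimal sketch (not from the original answer), assuming the table has been analyzed so the dictionary statistics are populated; BLOCKS reflects blocks below the high water mark:

```sql
-- a big BLOCKS value next to a tiny NUM_ROWS hints at a once-large table
-- that still full scans up to its old high water mark
select table_name, num_rows, blocks, empty_blocks
  from user_tables
 where table_name = 'TRACE';
```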

Rory Concepcion, April 08, 2003 - 2:14 am UTC

Hi Tom,

I didn't really get what you meant. But did you say that I should post my problem as a question? If so, I'm really sorry for posting it here. I can't catch your site when I'm free to ask questions. I'll try to post this as a question. Thanks.

Tom Kyte
April 08, 2003 - 7:45 am UTC

yes, that is what I meant -- it is a new, different question with lots of supporting material. It doesn't fit nicely here.

no row source operation

Eka, May 14, 2003 - 12:54 am UTC

Hi Tom,
Please help -- I don't understand why my tkprof report output doesn't have a Row Source Operation section. Also, the explain plan shows rows=0.
I'm using Oracle 8.1.7.0.0 under Windows NT.
I run tkprof tracefile outputfile explain=user/password@servicename from the command prompt, so I'm sure there are no open cursors.

Here's the report :

INSERT INTO TMP$AGING_B SELECT A.ITMINVOICENO,A.ITMCUSTNO,DECODE(A.ITMTRXTYPE,
1,B.ITMTRXDATE,A.ITMTRXDATE) ITMTRXDATE FROM (SELECT ITMINVOICENO,
ITMCUSTNO,MIN(ITMTRXDATE) ITMTRXDATE,MIN(ITMTRXTYPE) ITMTRXTYPE FROM
AR9ITEMS WHERE ITMAPPYEAR = :b1 AND ITMAPPMONTH = :b2 AND ITMCUSTNO
BETWEEN :b3 AND :b4 GROUP BY ITMINVOICENO,ITMCUSTNO ) A,(SELECT
ITMINVOICENO,ITMCUSTNO,(ITMTRXDATE + ITMCRDPRD ) ITMTRXDATE FROM AR9ITEMS
WHERE ITMAPPYEAR = :b1 AND ITMAPPMONTH = :b2 AND ITMCUSTNO BETWEEN :b3
AND :b4 AND ITMTRXTYPE = 1 ) B WHERE A.ITMCUSTNO = B.ITMCUSTNO (+) AND
A.ITMINVOICENO = B.ITMINVOICENO (+)



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.04          0         95          0           0
Execute      1      1.42       2.11          0      15365        449       13207
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.44       2.15          0      15460        449       13207

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30 (JCI) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: CHOOSE
0 HASH JOIN (OUTER)
0 VIEW
0 SORT (GROUP BY)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'AR9ITEMS'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'XIE5AR9ITEMS'
(NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'AR9ITEMS'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'XIE5AR9ITEMS'
(NON-UNIQUE)

Am I missing some setup in my database? I've already set timed_statistics=true.
Thank you very much in advance.


Tom Kyte
May 14, 2003 - 6:59 am UTC

most likely your trace file is truncated.

look at max_dump_file_size, you might have to alter session to set it higher.
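
For example -- a sketch; UNLIMITED is accepted in 8i and later, otherwise specify a size in OS blocks:

```sql
-- raise the trace file size limit for this session before tracing,
-- so the trace file is not cut short mid-statement
alter session set max_dump_file_size = unlimited;
alter session set sql_trace = true;
```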

tkprof shows highCPU

Dhamo, May 27, 2003 - 2:15 pm UTC

Hi Tom,

We have a customer using our oracle server monitoring tool.
For him, the following query takes nearly 45% of the CPU on one Oracle server (SunOS, Oracle 8.1.7),
but it takes much less CPU time on the other (about 10) servers.

create table TBSP_CANT
storage (initial 256K next 256K pctincrease 0) as
select /*+ RULE */ c.tablespace_name,
max(a.extsize * 8192) max_seg_next_extent,
nvl(max(b.bytes), 0) max_free_bytes
from SYS.SEG$ a, DBA_FREE_SPACE b, AEXTS c, SYS.TS$ f
where a.ts# = f.ts#
and f.name = b.tablespace_name (+)
and f.name = c.tablespace_name
and not exists(select /*+ RULE */ 'X' from EXCL_OBJS g
where ( g.TS# = a.TS#
or g.TS# is NULL )
and ( g.FILE# = a.FILE#
or g.FILE# is NULL )
and ( g.BLOCK# = a.BLOCK#
or g.BLOCK# is NULL)
and ( g.USER# = a.USER#
or g.USER# is NULL))
group by c.tablespace_name;

where AEXTS => the table that contains the names of tablespaces with autoextensible datafiles,
EXCL_OBJS => contains info on objects that are excluded from monitoring
(in this case there are "no rows" in this table)

When looking into tkprof output :

From HighCPU server,

db_block_size = 8192
db_block_buffers = 16384
shared_pool_size = 140928615
sort_area_size = 10000000 (values set in init.ora)

create table TBSP_CANT
storage (initial 256K next 256K pctincrease 0) as
select /*+ RULE */ c.tablespace_name,
max(a.extsize * 8192) max_seg_next_extent,
nvl(max(b.bytes), 0) max_free_bytes
from SYS.SEG$ a, DBA_FREE_SPACE b, AEXTS c, SYS.TS$ f
where a.ts# = f.ts#
and f.name = b.tablespace_name (+)
and f.name = c.tablespace_name
and not exists(select /*+ RULE */ 'X' from EXCL_OBJS g
where ( g.TS# = a.TS#
or g.TS# is NULL )
and ( g.FILE# = a.FILE#
or g.FILE# is NULL )
and ( g.BLOCK# = a.BLOCK#
or g.BLOCK# is NULL)
and ( g.USER# = a.USER#
or g.USER# is NULL))
group by c.tablespace_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        8      0.04       0.00          0         16          0           0
Execute      8   2674.18    2798.53       1036     840500     434984          48
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16   2674.22    2798.53       1036     840516     434984          48

Misses in library cache during parse: 8
Optimizer goal: RULE
Parsing user id: 42


From a lowCPU server,

create table TBSP_CANT storage (initial 256K next 256K pctincrease 0) as
select /*+ RULE */ c.tablespace_name,
max(a.extsize * 8192) max_seg_next_extent,
nvl(max(b.bytes), 0) max_free_bytes
from SYS.SEG$ a,
DBA_FREE_SPACE b,
AEXTS c,
SYS.TS$ f
where a.ts# = f.ts#
and f.name = b.tablespace_name (+)
and f.name = c.tablespace_name
and not exists(select /*+ RULE */ 'X'
from EXCL_OBJS g
where ( g.TS# = a.TS#
or g.TS# is NULL )
and ( g.FILE# = a.FILE#
or g.FILE# is NULL )
and ( g.BLOCK# = a.BLOCK#
or g.BLOCK# is NULL)
and ( g.USER# = a.USER#
or g.USER# is NULL))
group by c.tablespace_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.01          0          4          0           0
Execute      4      6.46       8.18        224    1506708      39508          62
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      6.47       8.19        224    1506712      39508          62

Misses in library cache during parse: 2
Optimizer goal: RULE
Parsing user id: 54

Both databases are sized around 7 GB.

(i) Do I need to tune the operating system (swap memory, etc.)?
(ii) Do I need to tune the database?
(iii) Do I need to tune the memory-related parameters?
Please suggest whether the set values are sufficient or need to be increased.

(iv) Or do I need to tune the query?
I feel tuning is not required, since the other servers work fine with the same query.

(v) Any other possibilities that could be causing this issue?

Thanks for your valuable time. Could you please suggest how to solve this issue?

Thanks a lot,
Dhamo


Tom Kyte
May 27, 2003 - 5:32 pm UTC

do an autotrace traceonly explain

compare the outputs.

are they the same? betcha what is happening is that DBA_FREE_SPACE -- which unfortunately has a hint in some releases that makes it work well when used ALONE but can mess things up otherwise -- is getting the CBO involved, and in light of no stats the plan is a mess.

You know what the best way to tune this is? DON'T DO IT. Use LMTs and space management becomes trivial. No more "is the next extent of some object larger than the biggest free chunk I have" (that's what that query asks).
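
For reference, a locally managed tablespace that makes this sort of free-space query moot might be created along these lines (a sketch -- the tablespace name, file name, and sizes are made up):

```sql
create tablespace users_lmt
  datafile '/u01/oradata/users_lmt01.dbf' size 100m
  extent management local
  uniform size 1m;  -- every extent the same size, so any free chunk fits
```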


tkprof shows highCPU

Dhamo, May 28, 2003 - 10:17 am UTC

Hi Tom,

Thanks for the useful information.

I will cross-check with the autotrace output and post you the
Explain Plan once I get the customer's output.
Actually, I was thinking of tuning the operating system.

But as you said, if Oracle is doing the tricks all the way with DBA_FREE_SPACE,
couldn't we do anything to reduce the high CPU?

Appreciate your valuable suggestions.

Thanks Tom,
Dhamo

Tom Kyte
May 28, 2003 - 6:57 pm UTC

tuning the OS? to fix a bad query?

that's a bit "backwards"

you tune applications in isolation,
then in a concurrent mode,
then you can think about tuning the database instance,
and then, lastly, tuning the OS.


the best way to speed something up is to not do it at all. I see what this query is for, and it is quite simply not needed if you use best practices (LMTs, or at least make every extent = every other extent with that darn pctincrease=0).



tkprof shows highCPU

Dhamo, June 03, 2003 - 11:52 am UTC

Hi Tom,

Thanks for the info about the tuning steps.
"to fix a bad query?" -- Tom, you mean the query is bad one?

I'm not able to get the "Execution Plan" for that query,
since query involved with dba_free_space, the only user "SYS" can get the Explain Plan( query is run as non-sys user) for dba_free_space, because an underlying x$ table with dba_free_space. I hope i am right.

I checked with the customer regarding OS-related information; swap memory, etc. are large enough.

When using "set timing on" for that query the HighCpu server shows:
Elapsed: 00:04:23.46
and
LowCpu server shows:
Elapsed: 00:00:02.47
The SYSTEM tablespace is DMT and pctincrease set to 0.

Somewhere Oracle is doing the trick, I found no way to reduce the High CPU consumption of the query.

Thanks for your time.
Looking forward to your valuable thoughts.

Many thanks,
Dhamo



Tom Kyte
June 03, 2003 - 12:29 pm UTC

to tune it - don't run it. that has been my point.

some more tkprof doubts

A reader, June 03, 2003 - 3:08 pm UTC

Hi

How do we determine if the process is not using array processing?

If I have a result of a query:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute   3377      1.19       1.90          0          0          0        3377
Fetch     3377      1.82     342.39       4759      15243          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     6754      3.01     344.29       4759      15243          0        3379

Is that using array processing?

Also, how can the execute phase read from disk/buffer...?

Tom Kyte
June 03, 2003 - 3:16 pm UTC

I doubt it -- but even if it were -- so what. it only fetched two rows.

in general, divide rows by fetches. that is the array size.


Here, it looks like someone is doing an inefficient lookup.


queries can have execute stages -- it depends on the nature of the query, but you decided not to share that with us, so I'm not going to guess ;)

it is unusual to have rows filled in for both execute and fetch, however. It would be interesting to see the test case that brings that result about.
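
One common case where a plain SELECT does its work in the execute phase is SELECT ... FOR UPDATE, which identifies and locks the rows at execute time, so the logical IO shows up on the Execute line of the tkprof report. A sketch (table and column names are just examples):

```sql
-- the row access and locking happen at EXECUTE, not FETCH
select ename
  from emp
 where deptno = 10
   for update;
```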

A reader, June 03, 2003 - 3:24 pm UTC


forgot the query

A reader, June 04, 2003 - 3:15 am UTC

hi the query was

SELECT 1
FROM msefa_elem_fase
WHERE msped_id_pedido = :b1
AND msfas_id_fase = :b2
AND msfam_id_familia = 'TB'
AND mscom_id_componente = 'CN'
AND msefa_sec_componente = 1
AND mspar_id_parametro = 'PEDCAN'
AND msefa_sec_parametro = 1


sorry!

I don't understand fetch rows during the execute phase though
:(

divide rows/fetches

A reader, June 04, 2003 - 7:40 am UTC

Hello

When you say divide rows by fetches, which rows should we use? The one in the execute phase or the one in the fetch phase?

Tom Kyte
June 04, 2003 - 8:36 am UTC

i'm not able to reproduce his example. since the parse row is zero, it looks like a mucked-up trace file to me.

use the rows in the total line.
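
For the totals in the report above, 3379 rows over 3377 fetch calls works out to an effective array size of about 1 -- single-row processing. In SQL*Plus, the fetch array size can be raised from its default of 15 with:

```sql
set arraysize 100
```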

high buffer gets and insert

A reader, June 05, 2003 - 6:13 am UTC

Hi

I would like to know what can cause high buffer gets on an insert. I have a tkprof result:

INSERT INTO gcprt_scon_solic_cons
(scon_num_solic, dcrp_grp_cons, dcrp_tipo_cons, scon_tipo_movi,
scon_valo_cons, scon_param_cons, scon_estado_solic,
scon_usuario, scon_timestamp)
VALUES (gcprt_scon_num_seq.NEXTVAL, :b1, :b2, :b3,
:b4, :b5, :b6,
:b7, SYSDATE)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute    530     11.40      69.05       7520      57698       6340         530
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      531     11.42      69.07       7520      57698       6340         530

The table has only one index (primary key, single column). Current gets are not high (so I assume not many modifications to that table), but the buffer gets are around 110 per execution. I think that's high for an insert. What can be the cause?

Thank you

Tom Kyte
June 05, 2003 - 8:34 am UTC

so, let's see the create table and all -- that's a lot of DISK reads (physical IOs) and current mode gets for a table with a single index on it.

table definition

A reader, June 05, 2003 - 11:13 am UTC

Hi

the create table is as follows

CREATE TABLE GCPRT_SCON_SOLIC_CONS
(
SCON_NUM_SOLIC NUMBER NOT NULL,
DCRP_GRP_CONS VARCHAR2(10) NOT NULL,
DCRP_TIPO_CONS VARCHAR2(10),
SCON_TIPO_MOVI VARCHAR2(2) NOT NULL,
SCON_VALO_CONS VARCHAR2(300) NOT NULL,
SCON_PARAM_CONS VARCHAR2(500),
SCON_ESTADO_SOLIC VARCHAR2(2) NOT NULL,
SCON_USUARIO VARCHAR2(8) NOT NULL,
SCON_TIMESTAMP DATE NOT NULL
)
TABLESPACE ETGCPRT100
PCTUSED 85
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 128M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
)
NOCACHE
NOPARALLEL;

ALTER TABLE GCPRT_SCON_SOLIC_CONS ADD (
CONSTRAINT PK_GCPRT_SCON_SOLIC_CONS PRIMARY KEY (SCON_NUM_SOLIC)
USING INDEX
TABLESPACE EIGCPRT100
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 10M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
));


It has a trigger as well on SCON_ESTADO_SOLIC; can that be the cause? (Are the buffer gets we see in tkprof the sum of the insert and the trigger?)

Tom Kyte
June 05, 2003 - 1:27 pm UTC

yes, the trigger is the cause.  consider:

ops$tkyte@ORA817DEV> create table t ( x int primary key, y char(80) );

Table created.

ops$tkyte@ORA817DEV> @trace
ops$tkyte@ORA817DEV> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> insert into t /* test1 */ (x,y) values ( -1, 'x' );

1 row created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create or replace trigger t_trigger
  2  before insert on t
  3  declare
  4          l_cnt number;
  5  begin
  6          select count(*) into l_cnt from all_objects;
  7  end;
  8  /

Trigger created.

ops$tkyte@ORA817DEV> insert into t /* test2 */ (x,y) values ( 1, 'x' );

1 row created.


insert into t /* test1 */ (x,y)
values
 ( -1, 'x' )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1         10           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1         10           1


insert into t /* test2 */ (x,y)
values
 ( 1, 'x' )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      2.76       6.16        370      64886          9           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.77       6.17        370      64886          9           1
 

Back to tkprof output

Robert Ware, June 05, 2003 - 12:30 pm UTC

Do you know why the bottom of my tkprof output file looks like this when in fact I have several cursors that consumed resources?


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       30      0.14       0.99          0          0          2           0
Execute     94      0.71       1.02         14          7        479          35
Fetch       81      7.46       8.08      34984      53278        376          75
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      205      8.31      10.09      34998      53285        857         110

Misses in library cache during parse: 2

26 user SQL statements in session.
7 internal SQL statements in session.
33 SQL statements in session.
15 statements EXPLAINed in this session.
********************************************************************************


Tom Kyte
June 05, 2003 - 1:31 pm UTC

umm -- in a word -- no.

I don't know what you mean here at all. I see there were 26 SQL statements in the trace, and the overall totals are above?

tkprof output

Robert Ware, June 05, 2003 - 2:32 pm UTC

All of the SQL being executed is in couple of Packages.

I seem to remember something about SQL statements in stored PL/SQL rolling up their totals in tkprof as recursive SQL. Is this correct, or am I dreaming?

I have searched several resources and I can't find anything stating this in so many words.

Tom Kyte
June 05, 2003 - 3:28 pm UTC


It just looks like you have a truncated or partial trace file, is all. Look at the raw trace file -- is it complete? Does it say "truncated" at the bottom?

tkprof output

Robert Ware, June 05, 2003 - 4:03 pm UTC

The raw trace file is complete.

I just wondered why the OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS are zeroed out.

Tom Kyte
June 05, 2003 - 6:27 pm UTC

I would have to see the raw trace file (no, I don't want it here) -- but if you have my book Expert One-on-One Oracle, chapter 10 tells you how to read a raw trace file. You'd be able to see whether any of the SQL in there is recursive.

check out the raw trace file and see what it says. dep= is the recursive SQL depth.

PARSING IN CURSOR #1 len=18 dep=0 uid=366 oct=3 lid=366 tim=1030128952796386 hv=1333943659 ad='58bf4ec4'
select * from dual

would not be recursive sql, dep=0

=====================
PARSING IN CURSOR #8 len=19 dep=1 uid=366 oct=3 lid=366 tim=1030128985013586 hv=4083983875 ad='58bedd7c'
select * from dual
END OF STMT
PARSE #8:c=0,e=89

would be -- the only difference is that I ran the first select * from dual at the sqlplus prompt, and the second in a plsql block

Thanks

Robert Ware, June 09, 2003 - 9:41 am UTC

Tom,

You were right: since all of the SQL in the raw trace file was executed within PL/SQL blocks, the recursive depth for all the cursors was greater than zero. Apparently tkprof does not distinguish between user and system recursive SQL.

unix040@pgblrel/home/rdware/pgblrel> grep dep= ora_12121_pgblrel.trc | wc -l
238

unix040@pgblrel/home/rdware/pgblrel> grep dep=0 ora_12121_pgblrel.trc | wc -l
0

unix040@pgblrel/home/rdware/pgblrel> grep dep=1 ora_12121_pgblrel.trc | wc -l
224

unix040@pgblrel/home/rdware/pgblrel> grep dep=2 ora_12121_pgblrel.trc | wc -l
14


Why the count is different...

Kamal Kishore, June 14, 2003 - 11:39 am UTC

Hi Tom,
I was trying the examples in your book "Expert One-On-One Oracle", pages 236-239, and I was seeing a difference in the "Fetch Count" shown by tkprof.

In my tests, the fetch count was always one less than in your examples. For example, here is one output from the "single_table_emp" SELECT (page 239):

select count(ename)
from single_table_emp, random
where
single_table_emp.empno = random.x

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          4          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      4.15       4.16          0     135682          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      4.15       4.16          0     135686          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88 (OPS$KKISHORE)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=135682 r=0 w=0 time=4165935 us)
100000 NESTED LOOPS (cr=135682 r=0 w=0 time=3891922 us)
100000 TABLE ACCESS FULL RANDOM (cr=366 r=0 w=0 time=301644 us)
100000 TABLE ACCESS HASH SINGLE_TABLE_EMP (cr=135316 r=0 w=0 time=2777918 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
100000 NESTED LOOPS
100000 TABLE ACCESS (FULL) OF 'RANDOM'
100000 TABLE ACCESS (HASH) OF 'SINGLE_TABLE_EMP'



As you can see, I'm getting a total fetch count of 3, whereas in your example it is 4.

Is this due to a change in Oracle9i (9.2.0.3) optimization?
Thanks,


Tom Kyte
June 14, 2003 - 1:58 pm UTC

what tool are you using to run the tests? In sqlplus, a simple

select count(*) from dual;

will show two fetches in tkprof (9203). what does yours show?

This is surprising...

Kamal Kishore, June 14, 2003 - 2:30 pm UTC

Hi Tom,
When I use SQL*Plus, I get the following results, which are in line with what seems to be expected:
==========================================================
select count(*)
from
dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88 (OPS$KKISHORE)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=3 r=0 w=0 time=182 us)
1 TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=158 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
1 TABLE ACCESS (FULL) OF 'DUAL'


==========================================================


But when I use another application (PL/SQL Developer, http://www.allroundautomations.nl/ ), I get the following results:

==========================================================
select count(*)
from
dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88 (OPS$KKISHORE)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=3 r=0 w=0 time=156 us)
1 TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=133 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
1 TABLE ACCESS (FULL) OF 'DUAL'
==========================================================


I could not understand why there would be a difference. Both applications are OCI applications using the same OCI library to connect to the same Oracle database, and it is the server that generates this trace output.

Why should the results be different with a different front-end application (when it all happens on the server anyway)?
Thanks,


Tom Kyte
June 14, 2003 - 4:25 pm UTC

sqlplus technically should only have one fetch -- it is "double fetching" unnecessarily. It (sqlplus) is using an array size of 15 by default. On the first fetch it already KNOWS that there is no more data -- it got one row, it got the ora-1403 -- it need not have made the second fetch.

It is because of the different clients that you saw that "little difference" in the number. Consider:

ops$tkyte@ORA920> select count(*) from dual just_plus;

  COUNT(*)
----------
         1

ops$tkyte@ORA920>
ops$tkyte@ORA920> declare
  2          l_cnt number;
  3  begin
  4          select count(*) INTO l_cnt
  5            from dual in_plsql;
  6  end;
  7  /

PL/SQL procedure successfully completed.


select count(*)
from
 dual just_plus


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1

********************************************************************************

SELECT count(*)
from
 dual in_plsql


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           1


<b>PL/SQL (a real programming environment) has the optimization that sqlplus could, but does not, have: it needed only a single fetch call</b>

 

tkprof output on your example on page 464 - Expert one-on-one Oracle

Kamal Kishore, June 21, 2003 - 1:20 pm UTC

Hi Tom,
When I tried the example you have on page 464 (actually the entire PL/SQL block, after setting the ALTER SESSION SET EVENTS command), it gave me the following output:

UPDATE emp set ename = lower(ename)
where
empno = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.07       0.06          0          0          0           0
Execute      1      0.00     100.18          0          1          2           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.07     100.25          0          1          2           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 UPDATE (cr=1 r=0 w=0 time=49141 us)
1 INDEX UNIQUE SCAN PK_EMP (cr=1 r=0 w=0 time=27 us)(object id 37436)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  enqueue                                        33        3.12        100.13


Here it shows information on the wait events too. I guess this is a recent addition to tkprof with Oracle9i (9.2.0.3). Does this mean we no longer have to poke into the raw trace files?

When looking at the output above, it shows the wait event on "enqueue" with the total time waited. My question is about the "Times Waited" column: why does it say 33? How did it arrive at that number?
Thanks,



Tom Kyte
June 21, 2003 - 1:28 pm UTC

yes, this is a nifty new feature of Oracle9i's tkprof (9.0 and up)

you would poke about in the trace file to find out what :B1's value is - but you don't need to look there for the waits anymore.

We timed out on the enqueue wait every three seconds, looking to see "whats up" and then went back to sleep waiting for that blocking session to commit or rollback.
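A quick back-of-the-envelope check (not part of the trace; the numbers are taken from the report above) shows where 33 comes from: roughly 100 seconds of total enqueue wait, divided into 3-second timeouts:

```python
# tkprof reported Total Waited = 100.13s on "enqueue";
# the session times out of the enqueue wait every 3 seconds,
# so the Times Waited count is about 100.13 / 3
total_waited = 100.13
timeout = 3.0
times_waited = round(total_waited / timeout)
print(times_waited)  # -> 33
```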

3 seconds, can it be changed?

Kamal Kishore, June 21, 2003 - 1:40 pm UTC

Hi Tom,
Totally forgot about the bind variable values in the raw trace file. Thanks for reminding me.
But then, tkprof could just as easily print this in the output file (it is right there in the raw trace file). I wonder why it is not able to pick that up?

In your reply above you said:
<quote>
We timed out on the enqueue wait every three seconds, looking to see "whats up"
</quote>

Is this 3 seconds the standard wait timeout for Oracle? Any reason why it is 3 seconds? I guess the idea might be that anything shorter would be too frequent and anything longer might be too long for the waiting session. Correct?

1). Is this 3 second timeout standard for all Oracle versions or is different (or might change in future) for different Oracle versions?

2). Is it in any way dependent on the server configuration?

3). Is it documented somewhere, or we just "know it"?
Thanks,


Tom Kyte
June 21, 2003 - 2:45 pm UTC

tkprof does not print them out currently.


3 seconds is just what the developers chose -- it is why it could take as long as 3 seconds to detect a deadlock, for example.

1) it and everything is always "subject to change"

2) no

3) we just "know it" -- well, we can deduce it from things like the above.

problem

A reader, June 23, 2003 - 10:58 am UTC

Hi,

I'm getting the following error while running tkprof:

********************************************************************************
SELECT /*+ CHOOSE */ POREQUESTID,POLDGREFNO,POPOL,POPOD,POCREQSTAT,POVOYNO,
  POVESSCODE,POCONTOPER,POFNLPORT,POCONTNO,POPREFIX,POSERIAL,POEQUIPCAT,
  POSTWPOS
FROM
  DCGCSTWV3 WHERE POVOYNO=RTRIM(:V1) AND PORQSTSTAT='A' ORDER BY POVOYNO ASC,
  POCONTOPER ASC,POCONTNO ASC,POLDGREFNO ASC,POPOD ASC


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       21      0.05       0.05          0       1350          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       23      0.05       0.05          0       1350          0          20


Misses in library cache during parse: 0
Optimizer hint: CHOOSE
Parsing user id: 411 (OPS$T020SBK)

error during parse of EXPLAIN PLAN statement
ORA-00942: table or view does not exist
parse error offset: 209



Where DCGCSTWV3 is a view, and a public synonym has been created for the view.

Could you tell me what this error is, and how to resolve it?

Thanks.
atul


Tom Kyte
June 23, 2003 - 11:18 am UTC

you don't have the privileges on the base table to do an explain plan on that query, so it fails on the explain plan.

As long as you have a "normal" trace file from a completed session -- you don't *need* that anyway -- the explain plan will be in there (the real one)


Tkprof Error

atul, July 11, 2003 - 2:28 pm UTC

Hi,

I'm getting following errors in TKPROF output

1)
Misses in library cache during parse: 0
Optimizer hint: RULE
Parsing user id: 166 (POBATCH2)
error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the view


2)Misses in library cache during parse: 0
Optimizer hint: CHOOSE
Parsing user id: 166 (POBATCH2)
error during parse of EXPLAIN PLAN statement
ORA-00942: table or view does not exist



Kindly suggest a solution for this.



Thanks & Regards.
atul


Tom Kyte
July 12, 2003 - 8:55 am UTC

you are using EXPLAIN= (which isn't necessary) and the user/password you are using isn't allowed to run an explain plan on the object.

the plan should be in there without explain=, and explain= is very easily "tricked" into giving you the wrong plan anyway. don't use it.
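A minimal invocation without explain= might look like this (the trace and output file names are hypothetical); the row source plans recorded in the trace file still appear in the report:

```shell
# no explain=user/password -- tkprof reports the plan recorded in the trace
tkprof ora_1234.trc report.txt sys=no sort=fchela
```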

TKPROF does not show query plan

Vera, July 24, 2003 - 12:59 pm UTC




Thank you, Tom, for all your work. You are the best. For a long time I wanted to ask you a question, but was never able to get through. Do you know why my TKPROF report never generates a query plan? I have to use autotrace traceonly instead, but as you know it does not show the rows that go through each step.
My sql_trace and timed_statistics are set to true. I have:
Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE 8.1.7.2.1 Production
TNS for 32-bit Windows: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
Thank you

Tom Kyte
July 24, 2003 - 4:30 pm UTC

do you exit the session that generated the trace?

if not, the STAT records are not written until a cursor is closed.
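So a session that should produce a usable report would, as a sketch, look like this -- the key step is the final exit, which closes the cursors and flushes the STAT (row source) records to the trace file:

```sql
alter session set sql_trace = true;

select count(*) from dual;

-- exiting the session closes all cursors; only then are the STAT
-- records written, and tkprof can show the Row Source Operation section
exit
```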

Thank you

A reader, July 25, 2003 - 1:41 pm UTC

Yes, I was viewing TKPROF report without exiting my session.
Thank you!

TKPROF output analysis

Phil Quinn, September 30, 2003 - 10:28 am UTC

Thanks Tom, most informative. Can I throw my tuppence worth in here -

We have moved our development application/database onto a production server. We find that on production, the update suite takes approx. twice as long to run. The database structure is exactly the same, including the same parallelisation, indexes, and stats. The machines are virtually identical, apart from the development server having quicker CPUs (don't ask).

I ran a simple query on both to see if it would flag up any obvious differences which would affect run time cumulatively. Looking at the TKPROF output, the only thing I see of relevance is that the cpu time on production is higher for the fetch; the elapsed time is also higher, but not by the same percentage as the cpu usage. I'd appreciate it if you could cast your eye quickly over the stats to see if I'm missing something here. Does this reflect the difference in cpu speed?

Version : 9.2.0.3 Win2k

Production Server
-----------------

select count(*) from person p,member m
where m.purn = p.purn

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.08          0          4          3           0
Execute      1      0.00       2.52          0          3          0           0
Fetch        2      2.53      62.77          0       2753          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.59      65.37          0       2760          3           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 32

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
      0   SORT AGGREGATE
      0    HASH JOIN
      0     TABLE ACCESS FULL PERSON
      0     INDEX FAST FULL SCAN NEW_MEMBER_PURN (object id 36374)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Join ACK                                8        0.00          0.00
  PX Deq: Parse Reply                             5        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  PX Deq: Execute Reply                          69        2.10          8.87
  PX qref latch                                  86        1.03         49.22
  PX Deq Credit: send blkd                      523        0.58          0.83
  PX Deq Credit: need buffer                    483        0.00          0.11
  PX Deq: Table Q Normal                          1        0.00          0.00
  latch free                                      2        0.00          0.00
  PX Deq: Signal ACK                              7        0.10          0.11
  SQL*Net message from client                     2        0.29          0.29
********************************************************************************

Development Server
------------------

select count(*) from person p,member m
where m.purn = p.purn

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.73          0          0          0           0
Execute      1      0.00       1.81          0          3          0           0
Fetch        2      1.28      53.68          0       2753          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.28      56.22          0       2756          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 31

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=2753 r=0 w=0 time=53683445 us)
      0   SORT AGGREGATE (cr=0 r=0 w=0 time=0 us)
      0    HASH JOIN (cr=0 r=0 w=0 time=0 us)
      0     TABLE ACCESS FULL PERSON (cr=0 r=0 w=0 time=0 us)
      0     INDEX FAST FULL SCAN NEW_MEMBER_PURN (cr=0 r=0 w=0 time=0 us)(object id 40085)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  rdbms ipc reply                                 3        0.00          0.00
  PX Deq: Join ACK                                1        0.00          0.00
  enqueue                                         1        0.00          0.00
  PX Deq: Parse Reply                             2        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  PX Deq: Execute Reply                          37        2.06         11.44
  PX Deq Credit: send blkd                      541        0.35          0.98
  PX Deq Credit: need buffer                   1620        0.00          0.13
  PX qref latch                                  49        1.03         37.91
  PX Deq: Table Q Normal                          1        0.00          0.00
  PX Deq: Signal ACK                              4        0.10          0.10
  SQL*Net message from client                     2        0.01          0.01
********************************************************************************

Tom Kyte
September 30, 2003 - 10:39 am UTC

can you do something "non parallel"? -- things get scattered across many trace files with PQ. It would be nice to see the stat records and all filled in.
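One way to get a single, complete trace file for a test like this (a sketch; the query is the reader's own) is to disable parallel query for the session before tracing:

```sql
-- force serial execution so all trace data lands in one trace file
alter session disable parallel query;
alter session set events '10046 trace name context forever, level 12';

select count(*) from person p, member m where m.purn = p.purn;
```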

Recursive SQL in tkprof

Sam, October 10, 2003 - 8:09 pm UTC

Tom,

Can you clarify whether the number of buffer gets reported in tkprof includes recursive sql or not. In ch 10 of the Oracle 9i Performance Tuning Guide, it says:
"The resources reported for a statement include those for all of the SQL issued while the statement was being processed. Therefore, they include any resources used within a trigger, along with the resources used by any other recursive SQL (such as that used in space allocation). With the SQL Trace facility enabled, TKPROF reports these resources twice. Avoid trying to tune the DML statement if the resource is actually being consumed at a lower level of recursion."

However, I did a simple test:
select * from sto_tab
where id = sto_select.get_id(id)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.02       0.01          0         32          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03       0.02          0         32          0           1

Misses in library cache during parse: 1

Inside sto_select.get_id, I execute a query:
function get_id(x in number) return number is
    xx number;
begin
    -- recursive query executed for every row the caller fetches
    select count(*) into xx from all_objects where rownum <= 100000;
    return 1;
end;

I can see this sql in the trace file:
SELECT count(*)
from
all_objects where rownum <= 100000


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      8.34       8.49        556     579806          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      8.36       8.50        556     579806          0           2

So you can see the buffer gets are not included in my original query. Is the Performance Tuning Guide wrong? How about in DML statements? (I did my test on a 9.2.0.3 database)


Tom Kyte
October 11, 2003 - 10:01 am UTC

it "used" to be true.  Consider this example:



Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
 
ops$tkyte@ORA817DEV> set echo on
ops$tkyte@ORA817DEV> @test
ops$tkyte@ORA817DEV> drop table t;
 
Table dropped.
 
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> create or replace trigger t_trigger
  2  before insert on t
  3  begin
  4          for x in ( select count(*) from all_objects )
  5          loop
  6                  null;
  7          end loop;
  8  end;
  9  /
 
Trigger created.
 
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> @trace
ops$tkyte@ORA817DEV> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA817DEV> insert into t values ( 1 );
 
1 row created.


tkprof says:

insert into t
values
 ( 1 )
                                                                                                                    
                                                                                                                    
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      2.54       5.83        359      64968         12           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.54       5.83        359      64968         12           1
                                                                                                                    
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 742
********************************************************************************
                                                                                                                    
SELECT COUNT(*)
FROM
 ALL_OBJECTS
                                                                                                                    
                                                                                                                    
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      2.54       5.79        358      64967          4           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.54       5.79        358      64967          4           1
                                                                                                                    

<b>so the consistent gets were aggregated.  but the same in 9ir2 reports:</b>

insert into t
values
 ( 1 )
                                                                                                                    
                                                                                                                    
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1         20           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1         20           1
                                                                                                                    
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89
                                                                                                                    
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************
                                                                                                                    
select count(*)
from
 all_objects
             
             
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      3.52       3.43          0      75729          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      3.52       3.43          0      75729          0           1
                                                                                 

<b>so it no longer does the double count</b>


I'll file a doc bug. 

SQL performance issue...

Rafi, October 20, 2003 - 5:01 pm UTC

QUERY:
======

SELECT AL1.MATERIAL_NUMBER,
       SUM(AL1.QUANTITY_IN_UNIT_OF_ENTRY * -1),
       AL3.PROFIT_CENTER,
       Decode(sign((trunc(last_day(add_months(sysdate,-1))+1)) - trunc(AL1.ENTRY_DATE)),
              0, AL1.ENTRY_DATE,
              -1, AL1.ENTRY_DATE,
              '01/01/1900'),
       trunc(sysdate,'IW'),
       Decode(AL1.MOVEMENT_TYPE, '601', 'Trade', '633', 'Trade', 'IC')
FROM REPORT_VIEWER.Q_MM_DOCS_ODS_V AL1,
     REPORT_VIEWER.Q_CATALOG_MATERIAL_V AL2,
     REPORT_VIEWER.Q_CATALOG_MATERIAL_PLANT_V AL3
WHERE (AL1.SOURCE_SYSTEM = AL2.SOURCE_SYSTEM AND
       AL1.MATERIAL_NUMBER = AL2.MATERIAL_NUMBER AND
       AL3.SOURCE_SYSTEM = AL1.SOURCE_SYSTEM AND
       AL1.PLANT = AL3.PLANT AND
       AL1.MATERIAL_NUMBER = AL3.MATERIAL_NUMBER_PLANT) AND
      (AL1.SOURCE_SYSTEM = 'D7' AND
       AL1.PLANT IN ('D723', 'D729', 'D750') AND
       AL1.DIVISION IN ('8A', 'AK', 'C5', 'PQ') AND
       AL1.ENTRY_DATE >= trunc(sysdate,'IW') - 91 AND
       AL1.ENTRY_DATE <= trunc(sysdate) AND
       AL1.MOVEMENT_TYPE IN ('601', '633', '901', '905', '911', '933') AND
       AL1.MATERIAL_NUMBER LIKE '%#ABA' AND
       AL2.MATERIAL_TYPE = 'FERT' AND
       AL3.MRP_TYPE IN ('X0', 'XP') AND AL3.OBJECT_VERSION = 'A' AND
       AL2.OBJECT_VERSION = 'A')
GROUP BY AL1.MATERIAL_NUMBER,
         AL3.PROFIT_CENTER,
         Decode(sign((trunc(last_day(add_months(sysdate,-1))+1)) - trunc(AL1.ENTRY_DATE)),
                0, AL1.ENTRY_DATE, -1, AL1.ENTRY_DATE, '01/01/1900'),
         trunc(sysdate,'IW'),
         Decode(AL1.MOVEMENT_TYPE, '601', 'Trade', '633', 'Trade', 'IC')

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.05          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       26    411.22    6929.18    1813780    2243392          0         361
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       28    411.25    6929.23    1813780    2243392          0         361

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
    361  SORT GROUP BY
  27018   NESTED LOOPS
  27019    NESTED LOOPS
    428     INLIST ITERATOR
    430      TABLE ACCESS BY INDEX ROWID /BI0/PMAT_PLANT
   1034       INDEX RANGE SCAN (object id 124058)
  27445     TABLE ACCESS BY INDEX ROWID /BIC/AZMMDOCR00
3638693      INDEX RANGE SCAN (object id 3434085)
  27018    TABLE ACCESS BY INDEX ROWID /BI0/PMATERIAL
  54036     INDEX UNIQUE SCAN (object id 97361)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.03       0.05          0          0          0           0
Execute      3      0.00       0.01          0          0          0           0
Fetch       26    411.22    6929.18    1813780    2243392          0         361
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       31    411.25    6929.24    1813780    2243392          0         361

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       15      0.01       0.00          0          0          0           0
Execute     45      0.01       0.00          0          0          0           0
Fetch      225      0.03       0.46         42        175          0         430
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      285      0.05       0.46         42        175          0         430

Misses in library cache during parse: 5

3 user SQL statements in session.
15 internal SQL statements in session.
18 SQL statements in session.
********************************************************************************
Trace file: mtd.trc
Trace file compatibility: 8.00.04
Sort options: default

1 session in tracefile.
3 user SQL statements in trace file.
15 internal SQL statements in trace file.
18 SQL statements in trace file.
12 unique SQL statements in trace file.
483 lines in trace file.
===========================================================

EXPLAIN PLAN:
=============

Operation                              Object Name           Rows  Bytes  Cost

SELECT STATEMENT Hint=HINT: FIRST_ROWS                           1         3021
  SORT GROUP BY                                                  1     99  3021
    NESTED LOOPS                                                 1     99  3016
      NESTED LOOPS                                               1     79  3014
        INLIST ITERATOR
          TABLE ACCESS BY INDEX ROWID  /BI0/PMAT_PLANT           8    240    86
            INDEX RANGE SCAN           /BI0/PMAT_PLANT~0         8           32
        TABLE ACCESS BY INDEX ROWID    /BIC/AZMMDOCR00           3    147   366
          INDEX RANGE SCAN             /BIC/AZMMDOCR00~05        3          106
      TABLE ACCESS BY INDEX ROWID      /BI0/PMATERIAL          1 K   30 K     2
        INDEX UNIQUE SCAN              /BI0/PMATERIAL~0        1 K            1

===========================================================
Hi Tom,

I have been having performance issues with the query given above. I asked the DBAs for tracing, and the TKPROF output is given above. I am not experienced in interpreting TKPROF output.
Can you please help me understand what is wrong with this query?
How do I interpret and investigate TKPROF output?

Thanks in advance.

Regards
Rafi


question regarding trace file generation

A reader, November 06, 2003 - 11:58 am UTC

Hi Tom
Thanx for a wonderful site
Let us say amazon is using Oracle. A user is running into a problem (for the moment, imagine we are not using connection pooling). The user is one of the thousands who are logged in. I want to isolate the user's session trace by creating a new user, logging in, executing the problematic url, waiting for it to complete, and then logging out. In this case, how do I find the session that needs to be traced in the first place? Say in the logon trigger, how would I find out which user is the one that just logged in (since the users may not be database users -- they may be just email addresses)?

Hope I am clear in my question.
thanx!


Tom Kyte
November 06, 2003 - 5:14 pm UTC

amazon is.... but they do not use a stateful session (it would kill them)

the application needs to help out here. my urls all allow for SQL TRACING and my own "instrumentation" to be turned on via flags on the fly.

so, if the application is correctly coded, anticipating the need to be traced, this is easy.

if not, well, it's really hard.

could you kindly answer the above q for 10G and 9ir2 ?

A reader, November 06, 2003 - 12:01 pm UTC

thanx!

Tom Kyte
November 07, 2003 - 8:20 am UTC

this is a mighty long page at this point in time.

"what above 'q'"

how about this?

A reader, November 06, 2003 - 7:17 pm UTC

we create a test user and log in, do the problem user action and log out. Simultaneously, we have a logon trigger. Even if we are using proxy authentication, is it not possible to:
1. intercept the test username at the application server level and "tag" it in some way
2. find out the user at the db level when you log in by looking at the tag.

in other words, why do we lose the "email address user id" info due to proxy authentication?

What am i missing?
thanx!

Tom Kyte
November 07, 2003 - 8:22 am UTC


i'm not sure at all what you are asking??

what is "email address user id"??

what is the tie in to proxy authentication?

what is a test username???

thanx tom!

A reader, November 07, 2003 - 9:23 am UTC

May be I am confused.
"i'm not sure at all what you are asking??
what is "email address user id"??"

the email address user id is the user id used at amazon. From the customer's point of view, our user id is our email address.


"what is the tie in to proxy authentication?"

I was assuming that amazon.com does not
create a database user for every "email user id" that we give. This may be a wrong assumption.


"what is a test username??? "


The test username is the user we create to generate a trace file for a performance problem scenario. If amazon.com had created a database user for every "email id", then would not tracing a session be as trivial as doing the following in the logon trigger?

if user = 'TEST_USER' then
    execute immediate 'alter session set sql_trace = true';
end if;

As far as proxy authentication goes, I admit I don't understand it completely yet -- I will have to refer to your book.

Sorry for the confusion. Also, in 10g there is a dbms_monitor package with which it becomes easier to generate and identify trace files relevant to your performance problem. Does this in any way help us in identifying the problematic session?


thanx!



Tom Kyte
November 07, 2003 - 10:32 am UTC

yes, you could use "test user" accounts (accounts that always enabled trace) but the problem is -- the issue might only occur for SOME user or SOME USERS. test user might not be affected.


I will standby my statement "application better be able to help me here, if not, application is worthless"


dbms_monitor still needs an "invoker". the APPLICATION needs to be designed to be able to help in the tuning phase. It is all about instrumenting the application itself.
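If test accounts were used anyway, the logon-trigger approach being discussed could be sketched like this (the trigger and account names here are hypothetical, and this only traces the predesignated account -- it does not solve the "which real user is it" problem):

```sql
create or replace trigger trace_test_user
after logon on database
begin
    -- enable extended SQL trace only for the designated test account
    if user = 'TEST_USER' then
        execute immediate
            'alter session set events ''10046 trace name context forever, level 12''';
    end if;
end;
/
```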

I agree Tom!

A reader, November 07, 2003 - 11:32 am UTC

There is no better solution to this than application
instrumentation. But I am trying to find a way to do it
without application instrumentation.

"yes, you could use "test user" accounts (accounts that always enabled trace) but
the problem is -- the issue might only occur for SOME user or SOME USERS. test
user might not be affected.
"

The test user could be generated, and the same data that the "problem" user has could be copied to the test user. Then, at the same time that the problem user has a problem, you run the test user through the problem user's action and get the trace file. Is there any loophole in this method? Basically, copy the "local" environment (such as the user data) of the problem user to the test user.

The global environment is kept the same by running the user action for the test user at the time of day when the problem user sees the problem.

But this is possible only if there are real database users for each user. If we log them in as a proxy user, this may not be feasible, right?

many thanx!

Tom Kyte
November 07, 2003 - 2:49 pm UTC

you can do anything anyway...

everything is "possible"

it is just that there is the right way to do it and the really hard way...

creating a test user like that would be really really hard (and maybe not even possible!) does your application have a "clone user"? doubt it -- since it doesn't have "trace me" :)



thanx Tom! I agree..

A reader, November 07, 2003 - 3:25 pm UTC

One more question though.
Does having proxy authentication (not having real db users) further complicate the above "theoretical" approach of creating a test user? In other words, is there a way for me to figure out that user a@amazon.com logged in, even if I am using proxy authentication?

thank you!

Tom Kyte
November 07, 2003 - 6:04 pm UTC

test user, proxy user, whatever or not, in a login trigger -- yes, it complicates (as in obviates) the login trigger, because the application would have to TELL the database who the user is (and the code that would have been in the login trigger would have to be called AT THAT point to enable tracing -- meaning, back to square one. It is all about the application)

analysis of the TKprof report

Yogeeraj, November 17, 2003 - 8:28 am UTC

Dear sir,

Please allow us to ask a question concerning the analysis of our TKPROF report.

We have a case where the users are suffering from performance problems (SLOW) with an application, and we are thus analysing it using SQL_TRACE and TKPROF first...

In your book (Expert One-on-One Oracle; Page 460), you clearly explained the line "Misses in library cache during parse: 0".

<quote>
This is telling us that the query we executed was found in the shared pool - we did not generate a miss on the library cache during this parse. It indicates that we performed a soft parse of the query. The very first time a query is executed, we would expect this count to be 1. If almost every query you execute has a 1 for this value, it would indicate you are not using bind variables (and you need to fix that). You are not reusing SQL.
</quote>

we have 2 questions:
1. can there be any other cases where we have a '1' value (apart from when not using bind variables)?

2. What does the following line imply:
"Misses in library cache during execute: 1"

Can we correlate the CPU/Elapsed values with it? (Does it impact on performance?)

We are getting it with the following query:
============================================================
********************************************************************************

INSERT INTO WHT140 ( WRHNUM,TRANSTYPE,TRANSID,INPDATE,PALLETNO,PRODSEQ,SKU1QTY,SKU2QTY,SKU3QTY,PRODQTY1,PRODQTY2,OUTRACK,OUTBIN,OUTLVL,STATUS ) SELECT :b1,'OUT',:b2,SYSDATE,:b3,WHT100.PRODSEQ,WHT100.SKU1QTY,WHT100.SKU2QTY,WHT100.SKU3QTY,WHT100.PRODQTY1,WHT100.PRODQTY2,WHT120.RACK,WHT120.BIN,WHT120.LVL,'NEW'
FROM WHT100,WHT120 WHERE WHT100.WRHNUM = :b1
AND WHT100.PALLETNO = :b3 AND WHT100.PALLETSTATUS = 'BIN' AND WHT100.PALLETNO = WHT120.PALLETNO
AND WHT100.WRHNUM = WHT120.WRHNUM

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0         19          9           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.01       0.01          0         19          9           1

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41 (recursive depth: 1)
********************************************************************************

thank you in advance for your reply.

best regards
Yogeeraj


Tom Kyte
November 19, 2003 - 6:21 am UTC

in this case -- the entire report is suspect for this query since you didn't have sql_trace enabled during the parse phase at all!




More data

Yogeeraj, November 17, 2003 - 8:49 am UTC

$ grep -i "total" report.txt

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 74 0.02 0.04 0 90 0 30
total 25 0.02 0.06 3 27 0 9
total 33 0.02 0.04 4 50 0 10
total 51 0.00 0.00 0 68 0 24
total 105 0.00 0.03 1 29 0 80
total 42 0.01 0.06 10 50 0 14
total 15 0.00 0.00 0 10 0 0
total 36 0.00 0.00 0 36 0 21
total 77 0.00 0.01 0 102 0 30
total 18 0.07 0.24 96 474 48 219
total 127 0.03 0.03 1 168 0 56
total 2 0.07 0.10 1 6365 7 1
total 2 0.03 0.04 0 3616 5 1
total 2 0.12 0.18 25 13716 9 1
total 6 0.00 0.00 0 8 0 0
total 2 0.01 0.01 0 3 1 1
total 12 0.00 0.01 0 4 16 4
total 2 0.00 0.00 0 0 0 0
total 2 0.00 0.03 1 4 0 1
total 10 0.00 0.01 1 10 0 2
total 11 0.00 0.02 1 12 0 4
total 12 0.01 0.01 1 8 1 4
total 3 0.00 0.01 0 3 4 1
total 9 0.01 0.00 0 15 0 0
total 23 0.00 0.00 0 9 0 14
total 3 0.00 0.01 0 56 0 1
total 3 0.00 0.00 0 1 4 1
total 3 0.01 0.00 0 1 4 1
total 61 0.01 0.01 1 50 20 50
total 37 0.00 0.01 0 30 12 30
total 27 0.01 0.00 2 18 12 351
total 36 0.00 0.01 1 27 12 483
total 24 0.01 0.01 2 15 12 264
total 3 0.00 0.02 0 0 0 0
total 18 0.00 0.00 0 9 12 141
total 3 0.01 0.00 1 3 0 1
total 10 0.00 0.01 2 21 1 7
total 10 0.01 0.02 4 19 1 7
total 3 0.00 0.02 0 238 0 1
total 3 0.01 0.00 0 21 4 1
total 3 0.01 0.00 0 2 0 1
total 2 0.00 0.00 0 2 3 1
total 1 0.00 0.00 0 1 1 1
total 1 0.01 0.01 0 19 9 1
total 3 0.00 0.00 0 11 0 1
total 2 0.06 0.07 0 6364 7 1
total 2 0.00 0.00 0 2 1 1
total 2 0.12 0.13 0 12731 15 3
total 2 0.07 0.12 3 7231 25 3
total 2 0.23 0.29 3 27266 23 3
total 2 0.00 0.00 0 6 3 3
total 3 0.01 0.03 1 709 4 9
total 24 0.01 0.01 0 16 0 8
total 24 0.01 0.01 0 96 32 8
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
total 319 0.22 0.50 104 8151 204 1578
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
total 699 0.80 1.22 61 71691 104 332
$

Tom Kyte
November 19, 2003 - 6:31 am UTC

the totals are not extremely useful in diagnosing a hard parse issue.

tkprof output

A reader, November 20, 2003 - 10:04 am UTC

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.01 0.00 0 0 0 0
Execute 19012 3.10 3.56 0 0 0 0
Fetch 19012 1056.38 1517.15 0 43282981 76048 19012
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 38028 1059.49 1520.71 0 43282981 76048 19012

My understanding was that the above times are in centiseconds for 8i and microseconds in 9i - is that correct?

thanx!!!

Tom Kyte
November 21, 2003 - 4:27 pm UTC

they are SECONDS in all versions.

the raw timing data is in centiseconds for 8i and microseconds for 9i; the tkprof output is always in seconds.
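To make the unit difference concrete, here is a tiny hypothetical helper (the function name is made up for illustration; it is not part of any Oracle utility) that converts a raw c=/e= value from a trace file into seconds:

```python
# Illustrative only: raw trace times are centiseconds in 8i and earlier,
# microseconds in 9i and later; tkprof always reports seconds.

def raw_trace_time_to_seconds(raw, version_major):
    """Convert a raw c=/e= value from a SQL trace file to seconds."""
    if version_major >= 9:
        return raw / 1_000_000   # 9i and later: microseconds
    return raw / 100             # 8i and earlier: centiseconds
```

So, for example, an e= 104429440 entry in a 9i trace is about 104.4 seconds, which is how tkprof would report it.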

trace

Yogeeraj, November 24, 2003 - 8:16 am UTC

You are right!

the trace was initiated using:
exec sys.dbms_system.set_sql_trace_in_session(623,4339,TRUE);


best regards
Yogeeraj

No of rows processed for an Insert

Abu Ahmadh, December 08, 2003 - 10:13 pm UTC

Hi Tom,

TKPROF output shows:

INSERT INTO CDRCS_EO (col1, col2, col3, col4, col5)
VALUES
(:v1, :v2, :v3, :v4, :v5)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 291 1.25 14.77 0 0 0 0
Execute 291 15.40 1317.90 10011 1047 317037 105147
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 582 16.65 1332.67 10011 1047 317037 105147

Misses in library cache during parse: 0
Optimizer goal: RULE

Question: The statement was executed 291 times. That means there should be a maximum of 291 rows inserted. Why did it process more rows than the number of times it was executed?

Secondly, what does it (Insert) have to do with disk?

Please explain.

Thanks!

Tom Kyte
December 09, 2003 - 6:21 am UTC

the developer did a nice array insert -- about 361 rows per execute (too bad they didn't keep that cursor open!! it should be 1 parse, 291 executes. then it would have been perfection)


as for "what does it have to do with disk" -- you need to insert the data into "something". When you insert, we go to the free list; the free list says "block 20 in file 5 has space". if block 20 isn't in the cache -- we have to get it from disk. also, when you insert into the table and the table has indexes, the data has to go into very specific blocks. "name=frank" -- that goes on block 200 in file 20 (discovered after reading the index root and branch blocks). block 200 of file 20 is on disk -- read it so we can modify it.
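As a quick back-of-envelope check of the figures in that TKPROF (a sketch, nothing Oracle-specific):

```python
# 105,147 rows were inserted by only 291 execute calls, so the client must
# have been binding an array of rows to each execute -- an array insert.
executes = 291
rows = 105147

rows_per_execute = rows / executes
print(round(rows_per_execute))  # about 361 rows bound per execute call
```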




batch job elapsed time

PINGU, December 19, 2003 - 4:28 am UTC

Hi

We have a batch job which takes around 5 hours, however we trace the job and tkprof the trace file and got this

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 595504 65.83 81.66 0 531 0 0
Execute 595508 346.99 452.31 5 25 21 8
Fetch 891520 1379.24 11759.50 1069624 6984725 3 605893
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2082532 1792.06 12293.48 1069629 6985281 24 605901


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 525 0.10 0.13 0 0 0 0
Execute 947 0.36 0.41 0 0 0 1
Fetch 1819 0.16 1.37 123 3457 0 1345
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3291 0.62 1.91 123 3457 0 1346


From TKPROF the process takes around 3.9 hours.... Is this the total elapsed time inside the database? Or does it have nothing to do with the batch process's elapsed time?

The remaining 1.1 hours -- what can that be?

Cheers

Tom Kyte
December 19, 2003 - 6:57 am UTC

that 1.1 hours is network + client processing time. probably mostly in the client given the modest number of rows returned (unless you have a really slow network)

Your batch process should be parsing once and executing many times -- that the parse is almost equal to execute is a bad thing (developers need to correct that)



May be SOFT parse

A reader, December 19, 2003 - 2:59 pm UTC

<tom_answer>
Your batch process should be parsing once and executing many times -- that the
parse is almost equal to execute is a bad thing (developers need to correct
that)
</tom_answer>

In the above TKPROF output, most of those parses are probably SOFT parses, right? How will the developers fix that?

Tom Kyte
December 20, 2003 - 9:31 am UTC

the developers will:


subroutine X
static variable STMT


if ( stmt is not prepared in this subroutine )
then
prepare the statement
end if;

bind statement
execute statement

/* never close it, keep it open */
end subroutine


a parse is a parse is a parse. the fastest "parse" is NO PARSE

Ask the coders "would you COMPILE a subroutine before each execution -- run it -- and then throw it away?" of course they would not. so, why do they do that to SQL?
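The pseudocode above can be made runnable; this sketch uses Python's standard-library sqlite3 in place of Oracle (an assumption purely for portability -- the parse-once, execute-many principle is identical in any database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (x integer)")

# one statement text, prepared/parsed once by the driver...
stmt = "insert into t (x) values (?)"

# ...then bound and executed many times (here, 500 rows in one call)
conn.executemany(stmt, [(i,) for i in range(500)])

count = conn.execute("select count(*) from t").fetchone()[0]
print(count)  # 500
```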


but in web based application

A reader, December 20, 2003 - 11:30 pm UTC

<asktom_response>
/* never close it, keep it open */
</asktom_response>

But in a web based application it is very dangerous NOT to close the statement. It is all stateless, short transactions. We don't know what the next request is going to be.


Tom Kyte
December 21, 2003 - 10:18 am UTC

you can keep them open for as long as possible.

to do anything else is to impact in a massively negative fashion your performance.

There is no reason why -- if you parse and execute a statement 500 times in a single web page -- you would NOT parse once and execute it 500 times.

No reason (other than "i'm lazy").


One approach is to build a hash table associated with your connection -- every prepared/callable statement you open goes into this hash table. Before you prepare any statement -- you ask the hash table for the handle -- if you get it, reuse it. If not, prepare it and add it to the table. Then, right before you give the connection BACK to the pool (I'm assuming you mean a connection pool, which is not a 100% valid assumption, but it must be what you mean), you iterate over the hash table entries closing all statements.

You see, I use mod_plsql which nicely "resets the session state" at the end of each call. No cursor leakage from page to page. If you use a more complex language like Java - you'll have to manage it yourself but the price of NOT managing it is bad performance and limited scalability. So, in the end, it is up to you.
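That hash-table approach might look like the following sketch (the class and method names are invented for illustration; they are not any real driver's API -- real drivers such as Oracle's JDBC driver offer built-in statement caching):

```python
class StatementCache:
    """Per-connection cache of prepared statements, keyed by SQL text."""

    def __init__(self, prepare):
        self._prepare = prepare   # callable: SQL text -> prepared handle
        self._cache = {}

    def get(self, sql):
        # reuse an already-prepared handle when we have one; otherwise
        # prepare it once and remember it
        if sql not in self._cache:
            self._cache[sql] = self._prepare(sql)
        return self._cache[sql]

    def close_all(self):
        # call this right before handing the connection back to the pool
        self._cache.clear()
```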

could you help me with this TKPROF result

Suhail, December 23, 2003 - 9:44 am UTC

Tom, here is my TKPROF result. We are integrating a 3rd party tool and it is generating the following TKPROF. It uses inserts into a global temp table. Why is it taking that much time?

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 63 500.00 560.81 0 0 0 0
Execute 63 4100.00 4163.51 0 0 0 0
Fetch 260 1240500.00 1255899.00 8289 1920975 0 19852
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 386 1245100.00 1260623.32 8289 1920975 0 19852

Misses in library cache during parse: 23


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 289 5700.00 5181.08 0 0 0 0
Execute 344 25800.00 30757.39 61 7615 3066 2773
Fetch 272 1100.00 799.90 1 625 0 107
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 905 32600.00 36738.37 62 8240 3066 2880

Misses in library cache during parse: 39

129 user SQL statements in session.
226 internal SQL statements in session.
355 SQL statements in session.
********************************************************************************
Trace file: dref_ora_21090.trc
Trace file compatibility: 8.00.04
Sort options: default

1 session in tracefile.
129 user SQL statements in trace file.
226 internal SQL statements in trace file.
355 SQL statements in trace file.
43 unique SQL statements in trace file.
3755 lines in trace file.




Thanks

Suhail

Tom Kyte
December 23, 2003 - 11:47 am UTC

gee, i don't know.

my car won't start this morning. any ideas?


(with those big numbers, I would suspect that you used the 8.x TKPROF against a 9i trace file tho. we went from centiseconds to microseconds. bet if you divided by 10,000 the numbers would be "more meaningful")

elapsed time is less than CPU time?!?

Sami, December 28, 2003 - 9:38 pm UTC

Dear Tom,

How come elapsed time is less than CPU time?!?
Is it possible?

BEGIN
GENERATE_PRODUCT_KEYS (:1,:2,:3,:4) ;
END;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2737 1213.00 1184.60 0 0 0 0
Execute 2737 28.57 28.08 0 0 0 2737
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5474 1241.57 1212.68 0 0 0 2737


Tom Kyte
December 29, 2003 - 10:02 am UTC

because things are measured using "ticks"

when you have lots of very short operations -- things that take a fraction of a tick, you get measurement errors.

each of your parses took about .44318597 cpu seconds (funny -- you should be more worried about "why the heck did I parse this silly statement 2,737 times -- why not just ONCE").

Now, say you are on 8i, where we measure things at the centisecond level. that means we can accurately measure .44 and .45 but not .44318597. Hence, each of the parses might appear to contribute + or - their true time (they might overcontribute, they might undercontribute).

Same with elapsed -- things are happening so often that we over/under count.

The more we count, the more the "error" creeps in. So, these numbers are close, but not 100% accurate. An unavoidable issue when measuring LOTS of really small things.
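A simulation of that quantization effect, under the simplifying assumption that each measurement truncates to whole centisecond ticks (a real clock may round up or down, so the error can go either way):

```python
# 2,737 operations of ~0.443 s each, measured at 0.01 s resolution: each
# individual reading is off by up to a tick, and the errors accumulate.
true_per_op = 0.44318597
n = 2737

true_total = true_per_op * n
measured_total = sum(int(true_per_op * 100) / 100 for _ in range(n))

# the totals are close, but not equal -- exactly the effect seen above
print(round(true_total - measured_total, 2))
```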

But, the fix here is easy -- PARSE ONCE -- aren't you upset that your parse time (time not spent doing anything useful for you or your business) is 43 TIMES that spent doing useful work???



Thanks Tom, understood.

Sami, December 29, 2003 - 10:39 am UTC

We already fixed the PARSE MANY issue. I was just curious about "elapsed time is less than CPU time".

cpu time and elapsed time

A reader, January 26, 2004 - 5:17 am UTC

Hi

I am wondering if elapsed time is the sum of CPU time plus wait time?

For example if I have this



OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3507 1.18 4.55 2 4 0 0
Execute 4092 32.29 132.90 15774 26866 50269 1496
Fetch 3166 0.52 2.35 269 6332 0 6760
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10765 33.99 139.80 16045 33202 50269 8256

Misses in library cache during parse: 36

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 4708 0.00 0.04
SQL*Net message from client 4708 2.13 54.90
db file sequential read 656 0.09 0.27
latch free 37 0.16 0.71
resmgr:waiting in end wait 112 0.10 0.83
SQL*Net more data to client 4 0.00 0.00
log file sync 112 0.99 1.25
db file scattered read 2 0.00 0.00
SQL*Net more data from client 34 0.00 0.00
SQL*Net message to dblink 303 0.01 0.01
SQL*Net message from dblink 303 0.15 0.76
resmgr:waiting in check 1 0.00 0.00
resmgr:waiting in check2 1 0.01 0.01
SQL*Net break/reset to client 2 0.00 0.00


cpu time is 33.99 seconds
elapsed time is 139.80 seconds
wait time roughly 60 seconds

From this result, elapsed time is not the sum of cpu time + wait time... As you mentioned, there can be errors because of "round ups" on small operations. So I guess that's the reason the sums don't match. Or is it simply that elapsed time != cpu + wait?

I cannot find anywhere which explains this


Cheers

Tom Kyte
January 26, 2004 - 6:16 am UTC

you are missing the unreported wait time which is "waiting for CPU".


If you have a system with many users -- you are competing for that CPU. CPU "wait time" is not reported here.
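Working the arithmetic on the overall totals above shows the gap Tom is describing (numbers copied from the TKPROF in the question):

```python
elapsed = 139.80
cpu = 33.99

# sum of the "Total Waited" column from the wait-event table above
reported_waits = sum([0.04, 54.90, 0.27, 0.71, 0.83, 0.00, 1.25, 0.00,
                      0.00, 0.01, 0.76, 0.00, 0.01, 0.00])

# what's left over is unreported time: mostly waiting in the CPU run
# queue, plus measurement error
unaccounted = elapsed - cpu - reported_waits
print(round(unaccounted, 2))  # roughly 47 seconds unattributed
```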

cpu time and elapsed time

A reader, January 26, 2004 - 6:21 am UTC

Hi

So I guess cpu time is part of elapsed time, right? I have a colleague who thinks total elapsed time is cpu time + elapsed time. I think he is incorrect, but I cannot find any documentation to show him.

Tom Kyte
January 26, 2004 - 6:24 am UTC

elapsed time is elapsed time.

cpu time is cpu time.


cpu time is a subset of the time in elapsed.


if you have "5.0" in cpu time and "5.5" in elapsed -- your elapsed time (looking at the clock on the wall) was 5.5 seconds -- 5 of which were spent on the CPU and 0.5 of which were spent doing some waiting -- on disk, on cpu, on something.

Query Time

Vinnie, February 04, 2004 - 10:37 am UTC

Tom,

I have the following TKPROF output...

UPDATE test SET STATUS = :1
WHERE
ID = :2 AND CLASS_NAME = :3 AND TYPE = :4 AND TIME = :5
AND ELAPSED_TIME = :6


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 6934 180.65 179.42 198 3253280 15359 6934
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6935 180.65 179.42 198 3253280 15359 6934

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 36

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE
6934 TABLE ACCESS BY INDEX ROWID OBJ#(54288)
27941318 INDEX RANGE SCAN OBJ#(54296) (object id 54296)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 198 0.13 0.25
SQL*Net message to client 6934 0.00 0.02
SQL*Net message from client 6934 0.01 4.42
log file switch completion 1 0.30 0.30
********************************************************************************

Why would the query count be so high? The table is analyzed but not the index -- could analyzing the index help?

Plus, the waits above do not show the wait time for 'log file sync' that appears in the snippet of a level 12 trace below. Why not, and what could be causing this amount of wait besides basic disk configuration?

WAIT #14: nam='db file sequential read' ela= 72 p1=2 p2=5417 p3=1
EXEC #14:c=10000,e=4190,p=1,cr=4,cu=3,mis=0,r=1,dep=0,og=4,tim=3109543021141
WAIT #14: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #14: nam='SQL*Net message from client' ela= 3549 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 15175 p1=444 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 9534 p1=1413697536 p2=1 p3=0
...
...
WAIT #14: nam='db file sequential read' ela= 75 p1=2 p2=1827 p3=1
EXEC #14:c=0,e=1799,p=1,cr=4,cu=3,mis=0,r=1,dep=0,og=4,tim=3109543052078
WAIT #14: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #14: nam='SQL*Net message from client' ela= 2824 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 14320 p1=445 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 9170 p1=1413697536 p2=1 p3=0
...
...

THanks


Tom Kyte
February 04, 2004 - 5:13 pm UTC

you have a very non-selective index on there:

6934 TABLE ACCESS BY INDEX ROWID OBJ#(54288)
27941318 INDEX RANGE SCAN OBJ#(54296) (object id 54296)

you scanned 27,941,318 rows in the index that MIGHT be updated. After doing 27,941,318 table access by index rowid - you found (by doing the rest of the predicate) that only 6,934 of them were in fact to be updated.

so, find out what columns that index is on, ask "why do i have that index" and, if your goal is to make this update "as fast as possible", look no further than the update predicate:

ID = :2 AND CLASS_NAME = :3 AND TYPE = :4 AND TIME = :5
AND ELAPSED_TIME = :6

and consider creating an index on id,class_name,type,time,elapsed_time

OR at least some subset of those columns that selects out the least number of rows....

Log file sync is something you wait on during a COMMIT. Never on an update.

the XCTEND (transaction END) records are right before the log file sync waits...




New explain plan ???

BG, February 10, 2004 - 9:48 am UTC

Hi Tom,

I got this expl plan.

Operation Object Name Rows Bytes Cost

SELECT STATEMENT Optimizer Mode=CHOOSE 1 41
RECURSIVE EXECUTION SYS_LE_3_0
RECURSIVE EXECUTION SYS_LE_3_1
TEMP TABLE TRANSFORMATION
SORT ORDER BY 1 195 32
NESTED LOOPS 1 195 30
NESTED LOOPS 1 180 28
HASH JOIN 1 59 13
HASH JOIN 35 1 K 10
TABLE ACCESS FULL tab_a 35 735 7
TABLE ACCESS FULL tab_b 93 1 K 2
TABLE ACCESS FULL tab_c 238 4 K 2
TABLE ACCESS BY INDEX ROWID tab_d 9 1 K 15
INDEX RANGE SCAN tab_d_UK 9 10
TABLE ACCESS BY INDEX ROWID tab_e 1 15 2
INDEX UNIQUE SCAN tab_e_PK 3 K 1
(Sorry, I have tried to format the layout.)
Could you please explain to me what this means:
" RECURSIVE EXECUTION SYS_LE_3_0 RECURSIVE EXECUTION SYS_LE_3_1 TEMP TABLE TRANSFORMATION"

Even though the plan shows a cost of only 41, the query takes a lot of time to finish.

Thanks!

Tom Kyte
February 10, 2004 - 3:30 pm UTC

umm, new plan from what?

(looks like a star transformation to me) see
</code> http://dbazine.com/jlewis6.html <code>

Steve, February 10, 2004 - 4:28 pm UTC


Hi Tom,

I have the following trace data for the query I am going to tune. However, with very few PIOs/LIOs but a huge elapsed time, I am thinking it is not a problem with the query itself. What possible reasons could cause this long response time?

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.27 0.30 0 0 0 0
Execute 1 0.30 4036.47 2 3 430 202431
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.57 4036.77 2 3 430 202431



Thanks!

Steve

Tom Kyte
February 11, 2004 - 8:43 am UTC

wait events.

use the 10046 trace event with level 12 to see binds and waits. if you have 9i, the waits will be reported in the tkprof itself. if you have 8i and before, you need to read the raw trace file to see what it was waiting on.

Since it has no fetch calls, and all of the work was really current mode gets -- I'll say this was not a "query" but rather an insert/update/delete statement. It is easy to get a wait like that -- and the wait would probably be for an ENQUEUE (eg: a lock). if you have my book Expert One-on-One Oracle -- in chapter 10, I use that example exactly. Easy to simulate, just:

session1: update emp set ename = ename;
session2: set trace on, do same update, go to lunch
session1: come back from lunch and type commit;
session2: commit and exit

now run tkprof, you'll see something like this (in 9i)


update emp set ename = ename


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.30 104.42 0 18763 19155 18633
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.30 104.43 0 18763 19155 18633

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 101

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE (cr=18763 r=0 w=0 time=104429440 us)
18633 TABLE ACCESS FULL OBJ#(34749) (cr=18762 r=0 w=0 time=58511 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
enqueue 35 2.99 103.72
log buffer space 7 0.07 0.38
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00



more info for my question

steve, February 10, 2004 - 10:20 pm UTC

hi tom,

my sql is not special.

insert ...
select ...
from a,b,c,d
where ...
group by ...

and tkprof output:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.28 0.30 0 0 0 0
Execute 1 0.64 3699.82 4 1226 688 202433
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.92 3700.12 4 1226 688 202433

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
file open 7 0.00 0.00
db file sequential read 4 0.02 0.05
rdbms ipc reply 14 0.06 0.56
enqueue 11 0.35 0.46
PX Deq: Join ACK 24 0.00 0.00
PX Deq: Execute Reply 2311 2.06 3697.28
PX Deq Credit: send blkd 38 0.05 0.08
PX Deq: Parse Reply 4 0.01 0.01
control file sequential read 33 0.00 0.00
local write wait 9 0.06 0.33
PX Deq: Signal ACK 4 0.10 0.10
log file sync 1 0.00 0.00
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD AS SELECT
0 SORT GROUP BY
0 SORT GROUP BY
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS FULL BN_STORE
0 HASH JOIN
0 TABLE ACCESS FULL IMM_LOOKUP_CODES
0 NESTED LOOPS OUTER
0 PARTITION RANGE ALL PARTITION: START=1 STOP=12
0 TABLE ACCESS BY LOCAL INDEX ROWID IMM_STORE_INVENTORY PARTITION: START=1 STOP=12
0 INDEX RANGE SCAN PARTITION: START=1 STOP=12 (object id 5056581)
0 PARTITION HASH ITERATOR PARTITION: KEY KEY
0 TABLE ACCESS BY LOCAL INDEX ROWID IMM_DIV_STRATEGY PARTITION: KEY KEY
0 INDEX UNIQUE SCAN PARTITION: KEY KEY (object id 3464554)
0 PARTITION HASH ALL PARTITION: START=1 STOP=12
0 TABLE ACCESS FULL BN_ITEM PARTITION: START=1 STOP=12

'PX Deq: Execute Reply' is an idle event. why do idle events slow down the performance? how do I tune the sql based on this tkprof output?

thanks!

Steve

Tom Kyte
February 11, 2004 - 9:05 am UTC

see, this is why you need to post *enough* information. I guessed above. My guess was wrong. wish I had this before I guessed.

ahh, the time was taken in the PQ slaves. Your session was the coordinator. It was idle waiting for the PQ slaves to finish. Very simple -- the time to actually do the query was in the other processes -- not yours. Your session had a very simple job to do. It was waiting for the PQ guys to finish their jobs.

more info for my question

Steve, February 11, 2004 - 10:25 am UTC

Thank you, Tom!

How should I change the environment to speed up the parallel slaves?

I noticed in top sessions that there are several SELECT sessions but just one INSERT session, even though I added
'alter session enable parallel dml' and a parallel hint to the insert statement.

Thanks!

Steve



Tom Kyte
February 11, 2004 - 11:20 am UTC

you'll want to read much of
</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/toc.htm

and in particular to find out what your pq processes are doing (only way to tune, to understand what it is you have to tune) see
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96520/tuningpe.htm#57125 <code>


Help with tkprof output

Denise, February 16, 2004 - 1:19 pm UTC

Hi Tom

We've been told by the software vendor that there is nothing
wrong with the efficiency of the module that produced the
following tkprof output based on an import of 10 records
with no idnumbers that took 10 minutes to process.

some background:
There is a module called Lockbox that allows us to import
records from varying sources into our system. This module
also performs duplicate matching/checking. We import files
that may or may not contain idnumbers, and the volume
can be anywhere from 10,000 to 15,000 records.

This is the newest version of their product that we
had installed on our testserver for testing purposes.

When we import files with idnumbers the system processes
them reasonably fast (a file with 325 records took less than 1 minute).

We did a test importing another file with no idnumbers
containing 10 records...it took 10 minutes to process..
hence we logged a support call and ran a trace/tkprof.

I'll attach the tkprof output in the thread after this one..
as it is quite lengthy.

The old version works fine... therefore I am going to need to
do some analysis and investigation as to what changes have been made
to the new version that cause performance to slow down considerably
in the absence of idnumbers.

Any insight/help you can provide is immensely appreciated


here it is...thanks!

Denise, February 16, 2004 - 1:20 pm UTC

TKPROF: Release 8.1.7.0.0 - Production on Thu Feb 12 09:08:57 2004

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Trace file: ora00187.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

SELECT distinct(p.idnumber) FROM phone ph, salutations s1, salutations s2, additional_demographics ad, prospect p, address_view a WHERE p.idnumber != :p_id and p.salutcode = s1.salutcode(+) and p.csalutcode = s2.salutcode(+) and p.idnumber = ad.idnumber(+) and
p.idnumber = a.idnumber and
p.idnumber = ph.idnumber(+) and p.idnumber in (select idnumber from prospect p1 where p1.last like SUBSTR(:p_last, 1, :p_num_last)||'%' UNION select idnumber from prospect p2 where p2.clast like SUBSTR(:p_last, 1, :p_num_last)||'%') AND (UPPER(NVL(a.line1, :p_def)) like UPPER(SUBSTR(:p_line1, 1, :p_num_line1))||'%' or UPPER(NVL(a.line2, :p_def)) like UPPER(SUBSTR(:p_line1, 1, :p_num_line1))||'%' or UPPER(NVL(a.line3, :p_def)) like UPPER(SUBSTR(:p_line1, 1, :p_num_line1))||'%' ) AND NVL(a.zip, :p_def) like SUBSTR(:p_zip, 1, :p_num_zip)||'%' and rownum < 1000

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 12 0.00 0.00 0 0 6 0
Fetch 143 0.00 0.00 72802 804055 290 136
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 155 0.00 0.00 72802 804055 296 136

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 12 (GUI) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 SORT (UNIQUE)
0 COUNT (STOPKEY)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS (OUTER)
0 NESTED LOOPS
0 VIEW OF 'VW_NSO_1'
0 SORT (UNIQUE)
0 UNION-ALL
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'PROSPECT'
0 INDEX (RANGE SCAN) OF
'LNAME_INDEX' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'PROSPECT'
0 INDEX (RANGE SCAN) OF
'CNAME_INDEX' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'PROSPECT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'PK_PROSPECT_IDNUMBER' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_AD_IDNUMBER' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_SALUTATIONS_SALUTC' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_SALUTATIONS_SALUTC' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'ADDRESS'
0 INDEX (RANGE SCAN) OF 'ADDRESS_IDNUMBER_INDEX'
(NON-UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_STATES_STATEC' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_ADDTAB1_CODE' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_ADDRS_CODE' (UNIQUE)
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'PK_COUNTRY_CODE' (UNIQUE)
0 INDEX (RANGE SCAN) OF 'PHONE_IDNUMBER_INDEX' (NON-UNIQUE)

********************************************************************************

select length
from
fet$ where file#=:1 and block#=:2 and ts#=:3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 55 0.00 0.00 0 0 0 0
Execute 55 0.00 0.00 0 0 0 0
Fetch 55 0.00 0.00 0 110 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 165 0.00 0.00 0 110 0 0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

delete from seg$
where
ts#=:1 and file#=:2 and block#=:3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7 0.00 0.00 0 0 0 0
Execute 7 0.00 0.00 1 35 21 7
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 14 0.00 0.00 1 35 21 7

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

insert into fet$ (file#,block#,ts#,length)
values
(:1,:2,:3,:4)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 32 0.00 0.00 0 0 0 0
Execute 32 0.00 0.00 0 32 42 32
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 64 0.00 0.00 0 32 42 32

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

delete from uet$
where
ts#=:1 and segfile#=:2 and segblock#=:3 and ext#=:4


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 30 0.00 0.00 0 0 0 0
Execute 30 0.00 0.00 0 150 30 30
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 60 0.00 0.00 0 150 30 30

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

delete from fet$
where
file#=:1 and block#=:2 and ts#=:3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 32 0.00 0.00 0 0 0 0
Execute 32 0.00 0.00 0 144 32 32
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 64 0.00 0.00 0 144 32 32

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

SELECT IDNUMBER.NEXTVAL nextseq
FROM
dual WHERE ROWNUM = 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7 0.00 0.00 0 0 0 0
Execute 7 0.00 0.00 0 0 0 0
Fetch 7 0.00 0.00 3 7 35 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 21 0.00 0.00 3 7 35 7

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 12 (GUI) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 SEQUENCE OF 'IDNUMBER'
0 COUNT (STOPKEY)
0 TABLE ACCESS (FULL) OF 'DUAL'

********************************************************************************

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9
where
obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 30 0.00 0.00 0 0 0 0
Execute 30 0.00 0.00 4 30 90 30
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 60 0.00 0.00 4 30 90 30

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
********************************************************************************

SELECT IDNUMBER
FROM
PROSPECT WHERE IDNUMBER = :b1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 7 0.00 0.00 0 0 0 0
Fetch 7 0.00 0.00 0 21 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 14 0.00 0.00 0 21 0 0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 12 (GUI) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'PK_PROSPECT_IDNUMBER'
(UNIQUE)

********************************************************************************

SELECT SYSDATE
FROM
SYS.DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------


Tom Kyte
February 16, 2004 - 1:33 pm UTC

you seem to have done it without timed statistics enabled.

it is "not useful" in that form.

enable timed statistics and do it again and you'll be able to SEE where the problem is (look at the total lines, look for a big total)

sorry Tom......

Denise, February 16, 2004 - 1:25 pm UTC

the tkprof file is far too lengthy to post here; only
a portion of it got posted.

I'll figure it out eventually.

take care..


thanks Tom!!!

Denise, February 16, 2004 - 1:44 pm UTC

errr...one of their support technicians logged in and
performed the trace.

They forbid us to set the timed_statistics parameter to
TRUE in the init.ora file...so every time we need to do
a trace the parameter is set to TRUE through SQL*Plus,
and apparently the tech forgot to do that.

I will do it myself this time and re-run the trace.

I'm surprised none of the developers took note of that!

as always Tom thanks a million!

:~)

Tom Kyte
February 16, 2004 - 1:49 pm UTC

alter system set timed_statistics=true;


will turn it on.... your systems should be running with it on anyway. impossible to tune or just even look at a system without it.

I refuse to run databases with it off ;)
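
As a session-level sketch (useful since, as Denise notes, the init.ora cannot be changed on this system; the sid and serial# values are placeholders you would look up in v$session first):

```sql
-- Sketch only: enable timing and tracing for just the current
-- session, without touching the init.ora.
alter session set timed_statistics = true;
alter session set sql_trace = true;

-- or, for another user's session (sid/serial# from v$session):
-- execute sys.dbms_system.set_sql_trace_in_session(sid, serial#, TRUE);
```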

Tom...

Denise, February 16, 2004 - 3:07 pm UTC

I set timed_statistics=true;
then did:
execute dbms_system.set_sql_trace_in_session(sid, serial, TRUE)...then executed the package through SQL*Plus.

The latest trace/tkprof is about as lengthy as the first
one...you mention looking for the big numbers...do you
mean the summary numbers at the end of the report?

I see a lot of parsing from user SYS in the report?

I am not going to try and post the output here..it won't
fit.

I did see sections that relate to the 10 records being
parsed, executed, etc. Should I focus on those areas only
in trying to determine what is going on?

I'm still in the learning phase of trying to decipher
what the output represents.

thanks.

Tom Kyte
February 16, 2004 - 4:09 pm UTC

i mean "look for low hanging fruit". look at all of the totals (but not the summaries). Look for big totals. zero in on those elements.

Tom....

Denise, February 16, 2004 - 3:15 pm UTC

I went back and started looking at sections with big numbers and came across this one, doing an insert that appears to be taking a lot of time.



INSERT INTO BATCHPAY ( ACKNOWLEDGED,ACKNOWLEDGED_DATE,ACKNOWLEDGEMENT,
ANONYMOUS,APPEALCODE,BD_ID,BENEFIT,BILLING_PARTY,CERTIFICATE_NUM,CHECK_DATE,
CHECK_NUMBER,CLASSCODE,COMPLIMENTARY,CRACCOUNTNO,DBACCOUNTNO,
PREMIUM_DECLINED,T_AMOUNT,T_DATE,EVALUATION_DATE,EVALUATION_SOURCE,
FULL_AMOUNT,FUNDCODE,GIFTGIVERID,GIFTTYPE,HIGH_VALUE,HMFLAG,HM_REMARK,HONOR,
LOW_VALUE,MATCH,MEDIUM_VALUE,PAYAMOUNT,PAY_METHOD,PGTRANSNUM,PROPOSALNUMBER,
QUANTITY,REASON,RECEIPT,RECEIPT_PARTY,RECEIVED,RECIPIENT,REMARK,
SECURITY_NAME,SECURITY_TYPE,SEC_REMARK,SHARES,SOFT_CREDIT,SOLICITORCODE,
UNITID,USERCODE1,USERCODE2,USERCODE3,USERCODE4,USERCODE5,USERCODE6,
USERFLAG1,USERNUMBER1,USERTEXT1,VOLUNTEER_ID,YEAR,EST_VALUE,BATCHNO )
VALUES
( :b1,:b2,:b3,:b4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12,:b13,:b14,:b15,:b16,
:b17,:b18,:b19,:b20,:b21,:b22,:b23,:b24,:b25,:b26,:b27,:b28,:b29,:b30,:b31,
:b32,:b33,:b34,:b35,:b36,:b37,:b38,:b39,:b40,:b41,:b42,:b43,:b44,:b45,:b46,
:b47,:b48,:b49,:b50,:b51,:b52,:b53,:b54,:b55,:b56,:b57,:b58,:b59,:b60,:b61,
:b62 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 22.72 23.60 6 0 6 0
Execute 10 1.62 1.85 28 1 304 10
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 11 24.34 25.45 34 1 310 10


Tom Kyte
February 16, 2004 - 4:11 pm UTC

hmm, a parse on an insert with values taking 23 cpu seconds. that is rather large. what can you say about that batchpay table? if you

exp userid=u/p tables=batchpay rows=n
imp userid=u/p full=y indexfile=batchpay.sql

what does that script look like -- let's see what's behind this table.
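
As a cross-check, roughly the same information can be pulled from the data dictionary without running exp/imp (a sketch, assuming you can query as the table owner):

```sql
-- Count what is hanging off the table; every index and enabled
-- foreign key must be considered at parse time and maintained
-- on every insert.
select count(*) as index_count
  from user_indexes
 where table_name = 'BATCHPAY';

select constraint_type, count(*) as cnt
  from user_constraints
 where table_name = 'BATCHPAY'
 group by constraint_type;
```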

Tom...last one...

Denise, February 16, 2004 - 3:22 pm UTC

I think I'm beginning to understand this stuff.

here's another section:
It seems inserts are where the performance is
getting bogged down...am I on the right track here?

promise not to bother anymore (at least for today ;~))

INSERT INTO BATCHDETAIL ( TRANSNUM,BATCHNO,SPLIT_TRANSNUM,PARENT_ID,TRANTYPE,
IDNUMBER,DOCUMENT_NUMBER,POST_STATUS,POST_RULE,POST_SEQUENCE,POST_DATE,
ERRMSG,TIMESTAMP,USERID_ADD,UPDATE_DATE,USERID_CHANGE,NEW_TRANSNUM )
VALUES
( :b1,:b2,:b3,:b4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12,:b13,:b14,:b15,:b16,
:b17 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 4.26 4.54 2 0 2 0
Execute 10 2.85 2.95 10 1 150 10
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 11 7.11 7.49 12 1 152 10

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 12 (recursive depth: 1)

Tom Kyte
February 16, 2004 - 4:12 pm UTC

your parse time is excessively high -- it should not be. Not sure why you are getting misses in the library cache if it already ran once before -- unless you have massive bind variable issues that are squeezing it out.


A parse of an insert shouldn't be so big.

less LIO but more elapsed time

steve, February 16, 2004 - 3:30 pm UTC

Hi tom,

I am tuning an insert .. select /*+ index(t index_name) */ ...; its tkprof output is:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.03 0 0 1 0
Execute 1 495.20 1302.50 2882183 1285075 13 78
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 495.21 1302.53 2882183 1285075 14 78

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 28

Rows Row Source Operation
------- ---------------------------------------------------
79 SORT GROUP BY
18939046 HASH JOIN
5505677 TABLE ACCESS BY INDEX ROWID XX_ITEM
6228575 INDEX FULL SCAN (object id 3569676)
22491942 TABLE ACCESS FULL XX_INVENTORY PARTITION: START=12 STOP=12


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
file open 5 0.00 0.00
db file sequential read 140772 0.17 5.99
db file scattered read 19037 1.59 103.70
db file parallel read 1361 0.06 1.21
direct path write 90402 0.34 33.99
latch free 157 0.02 1.60
direct path read 896434 2.95 720.95
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.01 0.01
********************************************************************************


and after I removed the index hint from the sql I got:


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 417.52 1914.01 2171938 1071811 21 78
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 417.53 1914.02 2171938 1071811 21 78

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 28


Rows Row Source Operation
------- ---------------------------------------------------
79 SORT GROUP BY
18939618 HASH JOIN
5505696 TABLE ACCESS FULL XX_ITEM
22492514 TABLE ACCESS FULL XX_INVENTORY PARTITION: START=12 STOP=12


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
file open 4 0.05 0.05
db file scattered read 26087 0.81 143.09
db file sequential read 770 0.08 0.26
direct path write 82283 0.43 63.17
latch free 144 0.02 1.46
direct path read 558661 1.05 1350.19
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.01 0.01
********************************************************************************

The sql with the index hint ran in just 21 min;
the sql without the index hint took 31 min.

You can see from the above output that the sql without the hint has less CPU time and fewer LIOs, which is what you always suggest. But it has more response time, which is not what the user wants.
1) Does this mean fewer LIOs are not always good?
2) elapsed = cpu + sum(total waited)?
but in this example
1914.01 < 417.52 + (1350.19 + 143.09 + 63.17 + ...)


Thanks!

Steve


Tom Kyte
February 16, 2004 - 4:15 pm UTC

1) "in general, you tune to reduce lio"

for a big batch like this, you want to bump up your hash_area_size which looks too small (for either query) given the number of direct path reads.

up the hash area size as big as you can and give both a whirl, you should see large differences.

2) you are missing "wait for cpu" - that is time spent in runnable mode with no cpu available to service you. there is no wait event recorded for that.
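
Just doing the arithmetic from the un-hinted tkprof above makes the point concrete (a sketch; the numbers are copied from Steve's output):

```sql
-- The identity elapsed = cpu + sum(waits) is only approximate:
-- recorded waits can overlap in time (so their sum can even
-- exceed elapsed, as it does here), while time spent queued
-- for a CPU is never recorded as a wait event at all.
select 417.53                               -- cpu
       + (0.05 + 143.09 + 0.26 + 63.17
          + 1.46 + 1350.19 + 0.00 + 0.01)   -- recorded waits
       as cpu_plus_waits,
       1914.02 as elapsed
  from dual;
```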

Here it is Tom

Denise, February 16, 2004 - 4:25 pm UTC

REM CREATE TABLE "GUI"."BATCHPAY" ("ACKNOWLEDGED" VARCHAR2(1) NOT NULL
REM ENABLE, "ACKNOWLEDGED_DATE" DATE, "ACKNOWLEDGEMENT" VARCHAR2(4),
REM "ANONYMOUS" VARCHAR2(1) NOT NULL ENABLE, "APPEALCODE" VARCHAR2(10),
REM "BD_ID" NUMBER(8, 0) NOT NULL ENABLE, "BENEFIT" NUMBER(13, 2),
REM "BILLING_PARTY" NUMBER(12, 0), "CERTIFICATE_NUM" VARCHAR2(40),
REM "CHECK_DATE" DATE, "CHECK_NUMBER" VARCHAR2(16), "CLASSCODE"
REM VARCHAR2(6), "COMPLIMENTARY" VARCHAR2(1), "CRACCOUNTNO"
REM VARCHAR2(100), "DBACCOUNTNO" VARCHAR2(100), "PREMIUM_DECLINED"
REM VARCHAR2(1) NOT NULL ENABLE, "T_AMOUNT" NUMBER(13, 2) NOT NULL
REM ENABLE, "T_DATE" DATE NOT NULL ENABLE, "EVALUATION_DATE" DATE,
REM "EVALUATION_SOURCE" VARCHAR2(6), "FULL_AMOUNT" NUMBER(13, 2),
REM "FUNDCODE" VARCHAR2(10), "GIFTGIVERID" NUMBER(12, 0), "GIFTTYPE"
REM VARCHAR2(4), "HIGH_VALUE" NUMBER(15, 4), "HMFLAG" VARCHAR2(1),
REM "HM_REMARK" VARCHAR2(2000), "HONOR" NUMBER(12, 0), "LOW_VALUE"
REM NUMBER(15, 4), "MATCH" VARCHAR2(1) NOT NULL ENABLE, "MEDIUM_VALUE"
REM NUMBER(15, 4), "PAYAMOUNT" NUMBER(13, 2) NOT NULL ENABLE,
REM "PAY_METHOD" VARCHAR2(4), "PGTRANSNUM" NUMBER(8, 0), "PROPOSALNUMBER"
REM NUMBER(8, 0), "QUANTITY" NUMBER(8, 0), "REASON" VARCHAR2(4),
REM "RECEIPT" VARCHAR2(1), "RECEIPT_PARTY" NUMBER(12, 0), "RECEIVED"
REM VARCHAR2(40), "RECIPIENT" NUMBER(12, 0), "REMARK" VARCHAR2(2000),
REM "SECURITY_NAME" VARCHAR2(60), "SECURITY_TYPE" VARCHAR2(6),
REM "SEC_REMARK" VARCHAR2(2000), "SHARES" NUMBER(8, 0), "SOFT_CREDIT"
REM VARCHAR2(1), "SOLICITORCODE" VARCHAR2(4), "UNITID" NUMBER(12, 0),
REM "USERCODE1" VARCHAR2(4), "USERCODE2" VARCHAR2(4), "USERCODE3"
REM VARCHAR2(4), "USERCODE4" VARCHAR2(4), "USERCODE5" VARCHAR2(4),
REM "USERCODE6" VARCHAR2(4), "USERFLAG1" VARCHAR2(1), "USERNUMBER1"
REM NUMBER(13, 2), "USERTEXT1" VARCHAR2(80), "VOLUNTEER_ID" NUMBER(12,
REM 0), "YEAR" NUMBER(4, 0), "EST_VALUE" NUMBER(11, 0), "BATCHNO"
REM NUMBER(8, 0) NOT NULL ENABLE) PCTFREE 10 PCTUSED 40 INITRANS 1
REM MAXTRANS 255 LOGGING STORAGE(INITIAL 440401920 NEXT 10485760
REM MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS
REM 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" ;
CONNECT GUI;
CREATE UNIQUE INDEX "GUI"."PK_BATCHPAY_BD_ID" ON "BATCHPAY" ("BD_ID" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 89178112 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_ACKNOWLEDGEMENT_INDEX" ON "BATCHPAY"
("ACKNOWLEDGEMENT" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
60440576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS
1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_APPEALCODE_INDEX" ON "BATCHPAY" ("APPEALCODE"
) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 124174336 NEXT
1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST
GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_BILLING_PARTY_INDEX" ON "BATCHPAY"
("BILLING_PARTY" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_CLASSCODE_INDEX" ON "BATCHPAY" ("CLASSCODE" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_CRACCOUNTNO_INDEX" ON "BATCHPAY"
("CRACCOUNTNO" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_DBACCOUNTNO_INDEX" ON "BATCHPAY"
("DBACCOUNTNO" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_GIFTGIVERID_INDEX" ON "BATCHPAY"
("GIFTGIVERID" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_EVAL_SOURCE_INDEX" ON "BATCHPAY"
("EVALUATION_SOURCE" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_FUNDCODE_INDEX" ON "BATCHPAY" ("FUNDCODE" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 94502912 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_GIFTTYPE_INDEX" ON "BATCHPAY" ("GIFTTYPE" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 94511104 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_HONOR_INDEX" ON "BATCHPAY" ("HONOR" ) PCTFREE
10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS
1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL
DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_PAY_METHOD_INDEX" ON "BATCHPAY" ("PAY_METHOD"
) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_PGTRANSNUM_INDEX" ON "BATCHPAY" ("PGTRANSNUM"
) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_PROPOSALNUMBER_INDEX" ON "BATCHPAY"
("PROPOSALNUMBER" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_REASON_INDEX" ON "BATCHPAY" ("REASON" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_RECEIPT_PARTY_INDEX" ON "BATCHPAY"
("RECEIPT_PARTY" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_RECIPIENT_INDEX" ON "BATCHPAY" ("RECIPIENT" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_SECURITY_TYPE_INDEX" ON "BATCHPAY"
("SECURITY_TYPE" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_SOLICITORCODE_INDEX" ON "BATCHPAY"
("SOLICITORCODE" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_UNITID_INDEX" ON "BATCHPAY" ("UNITID" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE1_INDEX" ON "BATCHPAY" ("USERCODE1" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE2_INDEX" ON "BATCHPAY" ("USERCODE2" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE3_INDEX" ON "BATCHPAY" ("USERCODE3" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE4_INDEX" ON "BATCHPAY" ("USERCODE4" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE5_INDEX" ON "BATCHPAY" ("USERCODE5" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_USERCODE6_INDEX" ON "BATCHPAY" ("USERCODE6" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_VOLUNTEER_ID_INDEX" ON "BATCHPAY"
("VOLUNTEER_ID" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL
1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
CREATE INDEX "GUI"."BATCHPAY_BATCHNO_INDEX" ON "BATCHPAY" ("BATCHNO" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 105127936 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "PLEDGEMAKER" LOGGING ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "PK_BATCHPAY_BD_ID"
REM PRIMARY KEY ("BD_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
REM STORAGE(INITIAL 89178112 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 505
REM PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
REM TABLESPACE "PLEDGEMAKER" ENABLE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "CHK_BATCHPAY_ACKNOWLEDGED" CHECK (acknowledged IN ('S','Y','N'))
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "CHK_BATCHPAY_ANONYMOUS"
REM CHECK (anonymous IN ('Y','N')) ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "CHK_BATCHPAY_COMPLIMENTARY" CHECK (complimentary IN ('Y','N'))
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "CHK_BATCHPAY_PREMIUM_DECLINED" CHECK (premium_declined IN ('Y','N'))
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "CHK_BATCHPAY_HMFLAG"
REM CHECK (hmflag IN ('H','M')) ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "CHK_BATCHPAY_MATCH"
REM CHECK (match IN ('Y','N')) ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "CHK_BATCHPAY_RECEIPT"
REM CHECK (receipt IN ('A','N','S','V','Y')) ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "CHK_BATCHPAY_SOFT_CREDIT" CHECK (soft_credit IN ('Y','N')) ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "CHK_BATCHPAY_USERFLAG1"
REM CHECK (userflag1 IN ('Y','N')) ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_ACKNOWLEDGEMENT" FOREIGN KEY ("ACKNOWLEDGEMENT")
REM REFERENCES "ACKNOWLEDGEMENTS" ("CODE") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_APPEALCODE"
REM FOREIGN KEY ("APPEALCODE") REFERENCES "APPEALS" ("APPEALCODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_BD_ID"
REM FOREIGN KEY ("BD_ID") REFERENCES "BATCHDETAIL" ("TRANSNUM") ON DELETE
REM CASCADE ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_BILLING_PARTY" FOREIGN KEY ("BILLING_PARTY") REFERENCES
REM "PROSPECT" ("IDNUMBER") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_CLASSCODE"
REM FOREIGN KEY ("CLASSCODE") REFERENCES "CLASSIFICATIONS" ("CLASSCODE")
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_CRACCOUNTNO"
REM FOREIGN KEY ("CRACCOUNTNO") REFERENCES "ACCOUNTS" ("ACCOUNTNO")
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_DBACCOUNTNO"
REM FOREIGN KEY ("DBACCOUNTNO") REFERENCES "ACCOUNTS" ("ACCOUNTNO")
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_EVALUATION_SOURCE" FOREIGN KEY ("EVALUATION_SOURCE")
REM REFERENCES "EVAL_SOURCE" ("EVALUATION_SOURCE") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_FUNDCODE"
REM FOREIGN KEY ("FUNDCODE") REFERENCES "FUNDS" ("FUNDCODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_GIFTGIVERID"
REM FOREIGN KEY ("GIFTGIVERID") REFERENCES "PROSPECT" ("IDNUMBER") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_GIFTTYPE"
REM FOREIGN KEY ("GIFTTYPE") REFERENCES "GIFTTYPES" ("GIFTTYPE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_HONOR"
REM FOREIGN KEY ("HONOR") REFERENCES "PROSPECT" ("IDNUMBER") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_PAY_METHOD"
REM FOREIGN KEY ("PAY_METHOD") REFERENCES "PAY_METHOD" ("PAYCODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_PGTRANSNUM"
REM FOREIGN KEY ("PGTRANSNUM") REFERENCES "PLANNEDGIFT" ("TRANSNUM")
REM ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_PROPOSALNUMBER" FOREIGN KEY ("PROPOSALNUMBER")
REM REFERENCES "TRACKING" ("PROPOSALNUMBER") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_REASON"
REM FOREIGN KEY ("REASON") REFERENCES "GIFT_REASONS" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_RECEIPT_PARTY" FOREIGN KEY ("RECEIPT_PARTY") REFERENCES
REM "PROSPECT" ("IDNUMBER") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_RECIPIENT"
REM FOREIGN KEY ("RECIPIENT") REFERENCES "PROSPECT" ("IDNUMBER") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_SECURITY_TYPE" FOREIGN KEY ("SECURITY_TYPE") REFERENCES
REM "SECTYPES" ("SECURITY_TYPE") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_SOLICITORCODE" FOREIGN KEY ("SOLICITORCODE") REFERENCES
REM "SOLICITORS" ("SOLICITORCODE") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_UNITID"
REM FOREIGN KEY ("UNITID") REFERENCES "PROSPECT" ("IDNUMBER") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE1"
REM FOREIGN KEY ("USERCODE1") REFERENCES "BDTABLE1" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE2"
REM FOREIGN KEY ("USERCODE2") REFERENCES "BDTABLE2" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE3"
REM FOREIGN KEY ("USERCODE3") REFERENCES "BDTABLE3" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE4"
REM FOREIGN KEY ("USERCODE4") REFERENCES "BDTABLE4" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE5"
REM FOREIGN KEY ("USERCODE5") REFERENCES "BDTABLE5" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_USERCODE6"
REM FOREIGN KEY ("USERCODE6") REFERENCES "BDTABLE6" ("CODE") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT
REM "FK_BATCHPAY_VOLUNTEER_ID" FOREIGN KEY ("VOLUNTEER_ID") REFERENCES
REM "PROSPECT" ("IDNUMBER") ENABLE NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ADD CONSTRAINT "FK_BATCHPAY_BATCHNO"
REM FOREIGN KEY ("BATCHNO") REFERENCES "BATCH" ("BATCHNO") ENABLE
REM NOVALIDATE ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "PK_BATCHPAY_BD_ID" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_ACKNOWLEDGED" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_ANONYMOUS" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_COMPLIMENTARY" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_PREMIUM_DECLINED" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "CHK_BATCHPAY_HMFLAG" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "CHK_BATCHPAY_MATCH" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "CHK_BATCHPAY_RECEIPT" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_SOFT_CREDIT" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "CHK_BATCHPAY_USERFLAG1" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_ACKNOWLEDGEMENT" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_APPEALCODE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_BD_ID" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_BILLING_PARTY" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_CLASSCODE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_CRACCOUNTNO" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_DBACCOUNTNO" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_EVALUATION_SOURCE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_FUNDCODE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_GIFTGIVERID" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_GIFTTYPE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_HONOR" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_PAY_METHOD" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_PGTRANSNUM" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_PROPOSALNUMBER" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_REASON" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_RECEIPT_PARTY" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_RECIPIENT" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_SECURITY_TYPE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_SOLICITORCODE" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_UNITID" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE1" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE2" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE3" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE4" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE5" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_USERCODE6" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT
REM "FK_BATCHPAY_VOLUNTEER_ID" ;
REM ALTER TABLE "GUI"."BATCHPAY" ENABLE CONSTRAINT "FK_BATCHPAY_BATCHNO" ;


Tom Kyte
February 17, 2004 - 7:35 am UTC

Ok, that is why the hard parse the first time takes so darn long. Look at the number of indexes and constraints.

BUT -- that is a one time penalty. It should only occur after a cold start of the database -- ONCE. After that -- soft parse city and the parsing should not take so long.

So -- if you run this thing "more than once" and benchmark the second run (after the shared pool has been warmed up) -- what do you see?

Tom....

Denise, February 17, 2004 - 8:29 am UTC

The last trace/tkprof file I created was taken from a
second run yesterday afternoon.

It is the same amount of time regardless of whether the test
run is the 1st, 2nd, 3rd, etc...it takes 10 minutes to
process 10 records on any given run.

Given the situation it is the software vendor that needs
to look at the design of this module and figure out what
needs to be fixed.

Even if you were able to pinpoint what is causing the
slowdown there is nothing I can do to fix it...heaven forbid I should offer any corrective measures based on any
recommendations provided by you. You know *what* would hit
the fan.

Thanks for helping. At least I learned a couple of new things,
especially using exp/imp to create a file that shows
the index structure of a table...that's real neat.

have a good day!


Tom...for my understanding....

Denise, February 17, 2004 - 11:04 am UTC

In looking at the tkprof output from yesterday when I executed the package it generated the following trace output.

I recall your mentioning misses in the library cache during
parse. I notice here it equals 1....in looking at other
sections it = 0.

could you explain what this represents...thanks.

begin lockbox_package.run_lockbox(file_name => 'testjan14.txt', file_path =>
'G:\Pledge\PledgeMaker\bplog',def => 'GTELE_NOID'); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.41 0.42 0 0 0 0
Execute 1 169.56 200.09 0 0 5 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 169.97 200.51 0 0 5 1

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 12


Tom Kyte
February 17, 2004 - 11:22 am UTC

that means it was hard parsed -- probably due to the glaring lack of bind variables.

if it were run again, it would not have a miss.
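
The miss/hit behavior can be sketched with a toy cache (names and structure are made up for illustration; this is not Oracle's implementation): the first parse of a given statement text is a hard parse, and later parses of identical text are soft parses.

```python
# toy model of the library cache, keyed by statement text
library_cache = {}
misses = 0
hits = 0

def parse(sql_text):
    global misses, hits
    if sql_text not in library_cache:
        misses += 1                                  # hard parse: build and cache a plan
        library_cache[sql_text] = "plan for: " + sql_text
    else:
        hits += 1                                    # soft parse: reuse the cached plan
    return library_cache[sql_text]

parse("select * from t where id = :b1")   # first time: hard parse (a miss)
parse("select * from t where id = :b1")   # identical text: soft parse (a hit)
parse("select * from t where id = 42")    # a literal makes new text: hard parse again
```

The third call also shows why literals instead of bind variables drive the hard-parse count up: every distinct literal produces a distinct statement text.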

thanks Tom..one last question...

Denise, February 17, 2004 - 12:02 pm UTC

I'm re-reading chapter 10 of your book, specifically the
section on TKPROF interpretation.

While I look at the tkprof from yesterday I'm referring
to your book. Now I totally understand the Library Cache
misses relating to hard parse vs soft parse.

you state:
A large cpu or elapsed time number indicates queries
that represent the 'lowest hanging fruit'.

can you expand a little on what that means.

This seems to be where the problem lies with our issue.

thanks a zillion!!!

Tom Kyte
February 17, 2004 - 2:08 pm UTC

It's like this. this morning, I noticed one of my pages on my site was getting slow (a page you guys don't see, one I use to see the "reviews").

So, i turned on tracing and ran the page. Once I got the tkprof report, I simply:

$ grep ^total tk.prf

total 1 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 0
...
total 28 2.01 2.09 3 5048 0 21
....

total 6 0.02 0.01 0 0 0 3
total 6 0.00 0.00 0 0 0 3
total 9 0.00 0.01 3 18 0 21
...
total 503 1.43 3.95 340 4955 0 501
...

total 4 0.00 0.00 0 3 0 1
total 46 0.02 0.01 0 60 0 15
....

Well, from there -- it is obvious what the "low hanging fruit" was. Those two statements consumed "the big time".

I fixed the first one by making a column NOT NULL (that should have been that way all along) and indexing it (so I do an INDEX FAST FULL SCAN now for 70 LIO's in 0.01 cpu seconds) and I fixed the second one by using the new HTMLDB pagination that doesn't try to count all of the stupid records (just gives prev and next buttons, like on my home page now). Web page runs in 0.40 instead of 5 or 6 seconds (which is at least 10 to 20 times what any respectable webpage should take)...


Those two statements were my "low hanging fruit -- fix these and problem goes away"
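
The grep workflow above can also be scripted; a small sketch (the sample lines are copied from the report above) that pulls the `total` lines out of a tkprof report and sorts them by elapsed time, so the biggest consumers float to the top:

```python
# rank the "total" lines of a tkprof report by elapsed time;
# sample lines copied from the report above
sample = """\
total        1      0.00       0.00          0          0          0           0
total       28      2.01       2.09          3       5048          0          21
total      503      1.43       3.95        340       4955          0         501
total       46      0.02       0.01          0         60          0          15
"""

totals = []
for line in sample.splitlines():
    if line.startswith("total"):
        # columns after "total": count, cpu, elapsed, disk, query, current, rows
        fields = line.split()
        totals.append((float(fields[3]), line))   # sort key: elapsed seconds

totals.sort(reverse=True)                          # biggest consumers first
for elapsed, line in totals[:2]:
    print(line)                                    # the "low hanging fruit"
```

In practice you would read the sample from the tkprof output file instead of an inline string.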

Oh!!..ok Tom

Denise, February 17, 2004 - 3:04 pm UTC

like you said yesterday...look for the big numbers...
that's where the problem lies...and that is what is called
'low hanging fruit'..(did you come up with that term?).

on a more humorous note:
everytime I read that phrase "fruit-of-the-loom" underwear
pops in my head..ho-hum...sad commentary on the state of
my brain trying to figure out this stupid software.

BYE!!!

I GOT IT!!!

Denise, February 17, 2004 - 3:31 pm UTC

did some research on 'low hanging fruit'...
(in all my years of existence I never heard that phrase).

here are a couple meanings I found:
"The thing that gets you most value for least investment.
Why stretch to reach those apples way up there when there's
one right within arm's reach."

"Proper collection of 'Low Hanging Fruit' depends on proper
profiling and optimization. Identify your bottlenecks and
concentrate your efforts on them. That will give you the most value for the least investment."

sooo...concentrate on the big numbers...that's telling you
where the problems are.

GOT IT!!

wait for cpu

steve, February 17, 2004 - 4:45 pm UTC

Hi tom,

you said that if
e != c + sum(wi), i = 1,2,...,n
then the missing time is 'wait for cpu'.
if this is true, then e > c + sum(wi)
in my case, e < c + sum(wi). so is it still 'wait for cpu'?


Thanks!

Steve

Tom Kyte
February 17, 2004 - 7:20 pm UTC

ahh, ok, i see now.

sure -- problem is the granularity of clocks and how timing is done.

Each "event" is timed. Fast events might

a) contribute exactly what they took
b) contribute LESS than what they took
c) contribute MORE than what they took.

that is, if you do something like:


start_time = get_clock
for i in 1 .. 1000000
loop
start_time_2 = get_clock;
do_something_fast
elapsed_loop = elapsed_loop + ( get_clock - start_time_2 )
end loop
elapsed_out_of_loop = get_clock - start_time


you would reasonably expect that the two elapsed times would be nominally different (as yours are over the long long haul).

It is due to the different timing levels. In my tiny example above, the timer outside of the loop would be pretty much "dead on", but the one inside the loop would not be. We time different things at different levels -- giving you a very, very good idea of the timing -- but sometimes these small differences add up in long-running/big operations.
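
The effect can be modeled deterministically with a hypothetical quantized clock (pure illustration, not Oracle's timing code): true time advances continuously, but the timer can only be read in whole ticks, so a fast event can contribute less than it really took.

```python
# hypothetical quantized clock: time advances in tenths of a tick,
# but reads are truncated to whole ticks (coarse timer granularity)
class QuantizedClock:
    def __init__(self):
        self.units = 0             # true time, in tenths of a tick
    def advance(self, tenths):
        self.units += tenths       # real work happening
    def read(self):
        return self.units // 10    # the timer only reports whole ticks

clock = QuantizedClock()
start = clock.read()
timed_inside = 0
for _ in range(1000):
    t0 = clock.read()
    clock.advance(4)               # timed "fast event": 0.4 ticks of work
    timed_inside += clock.read() - t0
    clock.advance(6)               # untimed loop overhead between events
elapsed_outside = clock.read() - start

# each 0.4-tick event never crosses a tick boundary, so the inner timer
# records nothing -- yet 1000 ticks really elapsed
print(timed_inside, elapsed_outside)   # -> 0 1000
```

With a real high-resolution clock the per-event error is tiny, but over millions of fast events the accumulated inner total can drift noticeably from the outer elapsed time, in either direction.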

tkprof output

A reader, February 25, 2004 - 11:49 am UTC

Hi

I have following tkprof output, I am totally lost with this output,

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 367115 60624871 24776439 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 367115 60624871 24776439 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 155 0.04 0.04 0 0 0 0
Execute 11003 7.06 9.54 383 508 86224 2851
Fetch 13656 1753.80 1765.61 425 13410381 10848 10932
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 24814 1760.90 1775.19 808 13410889 97072 13783


From Trace Analyzer


SUMMARY OF CPU TIME, ELAPSED TIME AND WAITS PER CURSOR

cursor user non-idle idle
id id command type count cpu top elapsed top waits top waits top
------ ---- ----------------------- --------- --------- --- --------- --- --------- --- --------- ---
1..... 36.. insert................. 2712 5.14 2 8.12 2 2.25 1 0.00
2..... 36.. select................. 8136 0.83 3 2.48 3 1.85 2 0.00 4
3..... 36.. select................. 2712 0.18 0.66 4 0.29 4 0.00 2
4..... 36.. select................. 5424 0.40 4 0.22 0.02 0.00 1
5..... 36.. select................. 5424 1754.30 1 1763.62 1 1.82 3 0.00
6..... 0... update................. 266 0.23 5 0.35 5 0.01 0.00
7..... 0... select................. 10 0.00 0.00
8..... 0... select................. 18 0.00 0.09 0.08 5 0.00 3
9..... 0... select................. 88 0.01 0.00
10.... 0... select................. 6 0.00 0.00
11.... 0... select................. 6 0.01 0.05 0.04 0.00 5
12.... 0... insert................. 4 0.00 0.00
13.... 0... delete................. 4 0.00 0.01
14.... 0... update................. 4 0.00 0.01
15.... 36.. pl/sql execute......... 1 0.00 0.00 0.00 0.00
------ ---- ----------------------- --------- --------- --- --------- --- --------- --- --------- ---
total. .... ....................... 24815 1761.10 1775.61 6.36 0.00

SUMMARY OF PHYSICAL READS, LOGICAL READS, ROWS AND MISSES PER CURSOR

cursor user
id id command type disk query current rows misses
------ ---- ----------------------- ------------ ------------ ------------ ------------ ---------
1..... 36.. insert................. 400 561 86221 2712 0
2..... 36.. select................. 240 13560 0 2712 0
3..... 36.. select................. 118 5424 0 2712 0
4..... 36.. select................. 21 2712 10848 2712 0
5..... 36.. select................. 12 13388489 0 2712 0
6..... 0... update................. 20 255 420 133 0
7..... 0... select................. 0 10 0 4 0
8..... 0... select................. 18 46 0 2 0
9..... 0... select................. 0 112 0 76 0
10.... 0... select................. 0 10 0 2 0
11.... 0... select................. 16 18 0 0 0
12.... 0... insert................. 0 4 2 2 0
13.... 0... delete................. 0 22 2 2 0
14.... 0... update................. 0 22 2 2 0
15.... 36.. pl/sql execute......... 367115 60624871 24776439 1 1
------ ---- ----------------------- ------------ ------------ ------------ ------------ ---------
total. .... ....................... 367960 74036116 24873934 13784 1



What is pl/sql execute? How can I find out which queries used the
60624871 - 13410889 consistent gets?

Don't know where to start!

Tom Kyte
February 25, 2004 - 12:20 pm UTC

look at the details in the tkprof report for the "low hanging fruit"

don't look at the totals, look at the individual queries.

What is this?

Justin, February 27, 2004 - 8:43 am UTC

Tom,

I am benchmarking the execution a function in a package by executing:

set autotrace traceonly
alter system set sql_trace=true;

select pkg_name.func_name(parm) from dual;

alter system set sql_trace=false;

And in my tkprof output, I got this section, which was taking up a significant amount of the total time for the query to execute.

SELECT PT.VALUE
FROM
SYS.V_$SESSTAT PT WHERE PT.SID=:1 AND PT.STATISTIC# IN (7,40,41,42,115,236,
237,238,242,243) ORDER BY PT.STATISTIC#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 4 0.85 0.86 0 0 0 20
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 8 0.85 0.86 0 0 0 20

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 56

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 4 0.00 0.00
SQL*Net message from client 4 0.00 0.00


When I executed the same process over again, I took out "set autotrace traceonly", and the section was gone.
I wanted to ask the obvious, and that is, when benchmarking with tkprof, should you not have autotrace running because it adds extra processing?


Thanks very much for your time.



Tom Kyte
February 27, 2004 - 8:58 am UTC

that is the stuff that reports back your statistics in the autotrace report.

You can use

set termout off
stuff to "benchmark"
set termout on

in your script instead (or just "ignore" the stuff that isn't yours -- both are equally valid)

Excellent review

Nasir Sarwar, March 01, 2004 - 8:47 am UTC

Thanks a zillion time for explaining in detail!
I am having a hard time understanding how many times a query is executed by just looking at the output of tkprof.
Question#1) Would you please tell me how many times this query is executed without going into the detail of this tkprof output? Does it mean that the following query is executed a total of 174 times?

select a.attribute_id ,b.name ,b.editable ,b.type ,b.length ,b.uniqueness ,
b.range_low ,b.range_high ,b.description ,b.num_columns ,b.num_rows ,
b.create_date ,b.create_by_role ,b.iuser_id ,b.default_value ,
b.perform_immediate ,b.perform_before ,b.perform_after ,b.perform_on_click
from
attribute_map a ,attributes b ,class_attribute_sequence c where
(((((a.attribute_id=b.attribute_id and a.attribute_id=c.attribute_id) and
(a.class_id=:b1 or a.class_id='TT')) and a.attribute_id=c.attribute_id) and
c.class_Id=:b1) and a.attribute_id not in (select distinct attribute_id
from nvisible_map where ((((class_id=:b1 or class_id='TT') and role_id<>:b4)
and :b4<>(-1)) and attribute_id not in (select distinct attribute_id
from nvisible_map where (((class_id=:b1 or class_id='TT') and :b4<>(-1))
and role_id=:b4))))) order by c.seq_num


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 166 0.12 0.13 0 1532 0 322
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 174 0.12 0.13 0 1532 0 322

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 27 (CENTRA3)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 SORT (ORDER BY)
0 CONCATENATION
0 FILTER
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS (BY INDEX ROWID) OF
'CLASS_ATTRIBUTE_SEQUENCE'
0 INDEX (RANGE SCAN) OF
'CUK1_CLASS_ATTRIBUTE_SEQUENCE' (UNIQUE)
0 INDEX (UNIQUE SCAN) OF 'CPK_ATTRIBUTE_MAP' (UNIQUE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'ATTRIBUTES'
0 INDEX (UNIQUE SCAN) OF 'CPK_ATTRIBUTES' (UNIQUE)
0 CONCATENATION
0 FILTER
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 CONCATENATION
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 FILTER
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 FILTER
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS (BY INDEX ROWID) OF
'CLASS_ATTRIBUTE_SEQUENCE'
0 INDEX (RANGE SCAN) OF
'CUK1_CLASS_ATTRIBUTE_SEQUENCE' (UNIQUE)
0 INDEX (UNIQUE SCAN) OF 'CPK_ATTRIBUTE_MAP' (UNIQUE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'ATTRIBUTES'
0 INDEX (UNIQUE SCAN) OF 'CPK_ATTRIBUTES' (UNIQUE)
0 CONCATENATION
0 FILTER
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 CONCATENATION
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)
0 FILTER
0 FILTER
0 INDEX (RANGE SCAN) OF 'CPK_NVISIBLE_MAP' (UNIQUE)

Question#2: How would I decrease the number of executes and fetches in the following select max(...)?

select max(object_id) into :b1
from
objects where system_id=(select system_id from objects where object_id=:b2)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.00 0.00 0 0 0 0
Execute 16278 4.60 4.46 82 48834 0 0
Fetch 16278 105.06 103.28 7 2856550 0 16278
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 32561 109.66 107.74 89 2905384 0 16278

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 27 (CENTRA3)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 SORT (AGGREGATE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'OBJECTS'
0 INDEX (RANGE SCAN) OF 'OBJECTS_IDX3' (NON-UNIQUE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'OBJECTS'
0 INDEX (UNIQUE SCAN) OF 'CPK_OBJECTS' (UNIQUE)

Tom Kyte
March 01, 2004 - 9:09 am UTC

q1) the query was executed 4 times. (execute = 4)

q2) the only way to do that would be to change your algorithm. You'd need to look at the surrounding code and ask yourself "why, why am I doing it this way". Perhaps you need to drop the procedural approach and think "in sets"
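
The shape of that change can be sketched in Python with made-up data (the names below are illustrative only): instead of re-deriving max(object_id) per row, one query at a time, compute the per-system maxima once and look them up.

```python
from collections import defaultdict

# made-up sample rows of (object_id, system_id)
objects = [(1, 'A'), (2, 'A'), (3, 'B'), (4, 'B'), (5, 'B')]

# row-by-row: one full pass per lookup (like executing the query 16278 times)
def max_for_object_rowwise(oid):
    sys_id = next(s for o, s in objects if o == oid)
    return max(o for o, s in objects if s == sys_id)

# set-based: a single pass builds the answer for every system at once
max_by_system = defaultdict(int)
for o, s in objects:
    max_by_system[s] = max(max_by_system[s], o)

# both approaches agree; the set-based one touches the data once
assert all(max_for_object_rowwise(o) == max_by_system[s] for o, s in objects)
```

In SQL the same idea would be a single join against a `max(object_id) ... group by system_id` aggregate rather than 16278 separate executions.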



Thanks! You are a "Savior" for me!

Nasir Sarwar, March 01, 2004 - 9:38 am UTC

Otherwise I would be walking on the plank...

Way to go Tom!

David, March 01, 2004 - 10:14 am UTC

I have a followup question based upon your reply above: "execute=4" means the query was executed 4 times. Does it mean that the following query was executed '0' times? My understanding is that it is probably fetching the data from the shared pool for the already-executed query. Right? Would you please explain in detail why execute is '0' in this case?
SELECT object_id, object_name, title, system_id, system_version,
system_revision, version, revision
FROM
objects WHERE object_type IN ('D','C') AND ( class_id LIKE 'GGAGAA') AND
EXISTS (select object_id from members WHERE objects.object_id =
members.object_id AND node_id IN (SELECT link_node_id FROM node_map WHERE
node_id LIKE 'ABAAAAAI' ) )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 1593 4.14 4.57 0 74393 0 15923
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1593 4.14 4.57 0 74393 0 15923

Tom Kyte
March 01, 2004 - 10:35 am UTC

it means that tracing was turned on AFTER the query was parsed and after the query was opened -- but before the query finished fetching data.  like this:

ops$tkyte@ORA9IR2> variable x refcursor
ops$tkyte@ORA9IR2> exec open :x for select * from dual;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter session set sql_trace=true;
 
Session altered.
 
ops$tkyte@ORA9IR2> print x
 
D
-
X
 
ops$tkyte@ORA9IR2> exit


and tkprof would show:

SELECT *
from
 dual
                                                                                              
                                                                                              
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          3          0           1




 

Excellent

Dillip, March 10, 2004 - 8:51 pm UTC

tkprof indicates that it took 1.45 sec to execute the block -- is that correct? Why is it so expensive to execute a block? What are the measures for tuning this? For a larger block I have seen it take 90 sec for 12500 executions.

declare
begin
insert into test1(id, value) values ('01',10);
insert into test2(id, value) values ('01',20);
update test1 set value = 100 where id = '01';
update test2 set value = 200 where id = '20';
end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 505 0.24 0.32 2 14 0 0
Execute 505 1.09 1.13 0 0 0 505
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1010 1.33 1.45 2 14 0 505

Tom Kyte
March 10, 2004 - 9:12 pm UTC

it is not expensive to execute a block.

time begin null; end;

you'll see. You executed the block 505 times. (and parsed it 505 times -- that's really bad, should be 1 parse, 505 executes in a well written application)

that is about 0.0029 elapsed seconds per execute (4 statements being executed). Get rid of the parse and that is about 0.0022 seconds per execute, or roughly 0.0006 seconds per statement.
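
Recomputing the per-execute cost from the elapsed times in the tkprof table quoted above:

```python
# per-execute cost from the tkprof totals quoted above
parse_elapsed = 0.32     # elapsed seconds across the 505 parse calls
exec_elapsed = 1.13      # elapsed seconds across the 505 execute calls
executions = 505
statements_per_block = 4

per_execute = (parse_elapsed + exec_elapsed) / executions
per_execute_no_parse = exec_elapsed / executions
per_statement = per_execute_no_parse / statements_per_block

print(round(per_execute, 4))           # -> 0.0029 seconds per block execution
print(round(per_execute_no_parse, 4))  # -> 0.0022 seconds without the parse
print(round(per_statement, 4))         # -> 0.0006 seconds per statement
```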








Excellent

Dillip, March 11, 2004 - 2:05 am UTC

Tom, thanks for the explanation. What I do not understand is: I am executing the same block 505 times, so why is it parsing 505 times? Also, the times for the inserts and updates are shown separately in the trace (as individual statements), so this is the time only to execute the block (you can see the rows processed shows 505, not 505*4). Can you please explain this?

Tom Kyte
March 11, 2004 - 10:42 am UTC

why is it parsing? -- that is 100% under YOUR control. if you are using sqlplus, it is a rather "dumb" tool and parses each statement. if you wrote a program, it can be "smart" and parse once. at the end -- it is up to you.

as for the timing, it depends on your release how much of the timing of the statements executed inside the plsql (the recursive sql) is included in your timing. some of it will be, and in some releases it'll be 100%.

You can see this if you grep your trace file for the elapsed times on that statement, for example I ran:


create table test1 (id varchar2(2), value int );
create table test2 (id varchar2(2), value int );

alter session set sql_trace=true;
begin
for i in 1 ..&1
loop
execute immediate 'declare begin
insert into test1(id, value) values (''01'',10);
insert into test2(id, value) values (''01'',20);
update test1 set value = 100 where id = ''01'';
update test2 set value = 200 where id = ''20'';
end;';
end loop;
end;
/


Now, that parsed/executed the block N times -- if you grep out the execute times for that cursor (mine was always EXEC #8 if you log in clean and exit before grepping) you'll see the times go up -- as you update more and more records each time.

I'd use the dbms_profiler tool to analyze plsql performance.

tkprof not found

A Reader, March 22, 2004 - 4:19 am UTC

hi,
I have installed oracle 9i on my win box.
I did not find tkprof on my pc.

Tom Kyte
March 22, 2004 - 7:11 am UTC

Oracle 9i release 1 with no patches did not have tkprof on windows. there is a patch on metalink you may retrieve to get it.

Oracle 9iR2 has it by default.

A reader, April 23, 2004 - 10:08 am UTC

Tom,

we are using bind variables, but still the parse is high


select *
from
mxUser where mxOid = :va


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10 0.00 0.00 0 0 0 0
Execute 37 0.00 0.00 0 0 0 0
Fetch 37 100.14 100.00 0 111 0 37
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 84 100.14 100.00 0 111 0 37

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 69

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID MXUSER
1 INDEX UNIQUE SCAN (object id 35752)


I took the above lines from tkprof output and the schema is analysed.

how to avoid the parsing...

Please let us know

thanks a lot

Tom Kyte
April 23, 2004 - 1:41 pm UTC

look at the code and ask the coder why they parsed it more than once.

the code is doing the parsing, the code is asking oracle to parse it, we only do what the code asks of us.

If that query came from plsql (doesn't look like it, we don't use bind names like that), then you would up open_cursors in order to allow plsql to cache more of them.

sql's are generated from the 3rd party tool

A reader, April 26, 2004 - 7:54 am UTC

Tom,

the query doesn't come from plsql; it comes from a 3rd party tool. In order to make use of bind variables in the sql generated by the 3rd party tool, should I set any init parameter to a bigger value than I am using currently?

sga_max_size unknown 143727516
buffer_pool_keep string
buffer_pool_recycle string
java_pool_size unknown 33554432
large_pool_size unknown 8388608
olap_page_pool_size integer 33554432
shared_pool_reserved_size unknown 2516582
shared_pool_size unknown 50331648
db_block_buffers integer 0
db_block_checking boolean FALSE
db_block_checksum boolean TRUE
db_block_size integer 4096
db_file_multiblock_read_count integer 8
db_cache_advice string ON
db_cache_size unknown 33554432
db_keep_cache_size unknown 0
db_recycle_cache_size unknown 0
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
session_cached_cursors integer 0
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE

Thanks for your help on this

Tom Kyte
April 26, 2004 - 9:15 am UTC

the query is using binds, :va is a bind.

it is being PARSED more than it should be -- and nothing but fixing the code will fix that (session_cached_cursors can "help" if parsing is a performance issue for you)

A reader, April 26, 2004 - 8:11 am UTC

In addition to the above, please see the below lines from tkprof...they are parsing every time.

********************************************************************************

select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece
from
idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by piece#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 10 0.00 500.00 3 18 0 6
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18 0.00 500.00 3 18 0 6

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
2 SORT ORDER BY
2 TABLE ACCESS BY INDEX ROWID IDL_SB4$
2 INDEX RANGE SCAN (object id 123)

********************************************************************************

select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece
from
idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by piece#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 100.14 100.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 8 0.00 300.00 6 23 0 6
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16 100.14 400.00 6 23 0 6

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT ORDER BY
1 TABLE ACCESS BY INDEX ROWID IDL_UB1$
1 INDEX RANGE SCAN (object id 120)

But it says Misses in library cache during parse: 0 -- but it's still parsing -- what does that mean?

Are these tables from the Oracle dictionary? If so, do I need to increase the shared pool size?


Tom Kyte
April 26, 2004 - 9:21 am UTC

oracle parses when you tell it to parse. they told it "parse" and this recursive sql was executed. it had a hit in the shared pool -- but given the timing numbers, I'm thinking you are using the 9i database BUT the 8i tkprof tool??? the times look "large"

Recursive SQL in tkprof

Sam, May 19, 2004 - 6:48 pm UTC

Tom,

You mentioned previously that in 9iR2 tkprof the resources used by recursive SQLs are not reported in the SQL that caused the recursive SQLs.

How about for PL/SQL calls? The following is from a 9.2.0.3 tkprof output file:

begin :itemKey := my_pkg.my_proc(:reqId,
:cardProgramId, :startDate, :endDate); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.77 2.16 290 7244 846 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.77 2.16 290 7244 846 1

Do the time and buffer gets reported include all the SQLs this package executes? How about recursive SQLs from those SQLs?

Tom Kyte
May 20, 2004 - 10:07 am UTC

looks like sometimes it is a little -- if I run a test like:

begin
for x in ( select * from all_objects )
loop
null;
end loop;
end;


and flush the shared pool -- i see query and current mode gets -- but if I rerun it - nothing, so some of the really recursive sql (sys recursive sql) seems to contribute to it.

In general, you are using tkprof to analyze SQL performance. the plsql profiler for plsql performance. raw trace files from 10046 level 12 traces for program execution and wait performance.

from your book

Marvin, May 20, 2004 - 4:44 pm UTC

hi in page 125 of effective oracle by design page 125 there is this example of TKPROF report, you said

CPU = cpu time consumed in thousand of seconds
Elapsed = wall clock time

Then the example shows

CPU => 12.77
ELAPSED => 12.60

Does this mean that the total time spent on the operation is 12.60 seconds + 0.01277 seconds?

Tom Kyte
May 20, 2004 - 8:36 pm UTC

that would be "errata" ;)

it should be in seconds, not thousands of seconds. I'll file that.

so elapsed time is this?

A reader, May 21, 2004 - 1:36 am UTC

CPU => 12.77
ELAPSED => 12.60


TOTAL ELAPSED TIME = 12.77 + 12.60 = 25.37s?

Well basically my question is whether elapsed time is the sum of cpu and elapsed :-P

Tom Kyte
May 21, 2004 - 9:55 am UTC

elapsed time is just that, elapsed.

12.77 for cpu time
12.60 for elapsed (Total elapsed)

(the text on page 125 and 126 goes into why the cpu > elapsed in this case, i choose this example "on purpose" for that reason)

Looking for doco reference

Matt, May 28, 2004 - 4:22 am UTC

I'm trying to find where the following records are documented for 9.2.0.4 : "(cr=110013 r=80 w=0 time=366437514 us)"

I am working on tuning the query below and am trying to understand the relationship between each "time=" components.

Additionally, the query below takes 368.23 seconds to run (and sometimes longer - I assume because the data in our running test system keeps changing status). However, I cannot account for where the time goes (there are insufficient waits). Could this be explained by a user defined lock?

SELECT total.messageKey from
(
select t.message_record_id messageKey, count(t.message_record_id) as num
from transactions t, acknowledgement a, messages m
where t.status='started'
and t.receipt_id = a.receipt_id
and a.status = 'pending'
and t.message_record_id = m.record_id
and m.in_out = 'I'
and m.tct_content_type_id = :b1
group by t.message_record_id
) toBeSent,
(
select record_id as messagekey, count(xx.message_record_id) as num from
messages,
(
select message_record_id from transactions t, acknowledgement a
where t.status<>'started'
or a.status <> 'pending'
and t.receipt_id = a.receipt_id
) xx
where record_id = xx.message_record_id(+)
group by record_id
) processed,
(
select t.message_record_id messageKey, count(t.message_record_id) as num
from transactions t, messages m
where
t.message_record_id = m.record_id
and m.in_out = 'I'
group by t.message_record_id
) total
where
toBeSent.messageKey = total.messageKey
and processed.messageKey = total.messageKey
and
(
tobesent.num >= :b2
OR tobesent.num >= total.num - processed.num
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.28 0 0 0 0
Execute 1 0.00 1.51 0 0 0 0
Fetch 1 0.00 366.43 80 110013 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 368.23 80 110013 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 99 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 HASH JOIN (cr=110013 r=80 w=0 time=366437514 us)
4 VIEW (cr=82 r=48 w=0 time=743241 us)
4 SORT GROUP BY (cr=82 r=48 w=0 time=743229 us)
109 HASH JOIN (cr=82 r=48 w=0 time=743028 us)
109 HASH JOIN (cr=5 r=0 w=0 time=1882 us)
109 INDEX RANGE SCAN ACKNOWLEDGEMENT_INDX (cr=2 r=0 w=0 time=159 us)(object id 47043)
140 INDEX RANGE SCAN TRANS_INDX (cr=3 r=0 w=0 time=181 us)(object id 47044)
21 TABLE ACCESS FULL MESSAGES (cr=77 r=48 w=0 time=740103 us)
4 HASH JOIN (cr=109931 r=32 w=0 time=365693656 us)
21 VIEW (cr=109 r=32 w=0 time=48279 us)
21 SORT GROUP BY (cr=109 r=32 w=0 time=48237 us)
3462 HASH JOIN (cr=109 r=32 w=0 time=44026 us)
21 TABLE ACCESS FULL MESSAGES (cr=77 r=32 w=0 time=10006 us)
8957 INDEX FAST FULL SCAN TRANS_INDX (cr=32 r=0 w=0 time=8042 us)(object id 47044)
3065 VIEW (cr=109822 r=0 w=0 time=365634892 us)
3065 SORT GROUP BY (cr=109822 r=0 w=0 time=365629073 us)
30252166 HASH JOIN OUTER (cr=109822 r=0 w=0 time=285834894 us)
6550 INDEX FAST FULL SCAN PK_MESSAGES (cr=17 r=0 w=0 time=6706 us)(object id 47014)
30251127 VIEW (cr=109805 r=0 w=0 time=155082277 us)
30251127 NESTED LOOPS (cr=109805 r=0 w=0 time=97252122 us)
3431 INDEX FAST FULL SCAN ACKNOWLEDGEMENT_INDX (cr=13 r=0 w=0 time=11670 us)(object id 47043)
30251127 INDEX FAST FULL SCAN TRANS_INDX (cr=109792 r=0 w=0 time=40474749 us)(object id 47044)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 4 0.00 0.00
db file sequential read 53 0.00 0.04
********************************************************************************


INDEXES FOR TABLE: TRANSACTIONS
================================================


INDEX_NAME COLUMN_NAME COLUMN_POSITION
------------------------------ ------------------------------ ---------------
PK_TRANSACTIONS RECEIPT_ID 1
TRANS_INDX STATUS 1
MESSAGE_RECORD_ID 2
RECEIPT_ID 3


TABLE STATISTICS FOR : TRANSACTIONS
===================================================


NUM_ROWS BLOCKS AVG_ROW_LEN LAST_ANALYZED
---------- ---------- ----------- -------------------
8957 990 811 05/28/2004 13:18:38


COLUMN STATISTICS FOR : TRANSACTIONS
====================================================


COLUMN_NAME NUM_DISTINCT NUM_NULLS NUM_BUCKETS DENSITY LAST_ANALYZED
------------------------------ ------------ ---------- ----------- ---------------- -------------------
ACK_DATETIME 0 8957 1 .0001116445 05/28/2004 10:48:27
ACK_MESSAGE_RECORD_ID 125 5594 1 .0080000000 05/28/2004 10:48:27
ASEXML_VERSION 0 8957 1 .0001116445 05/28/2004 10:48:27
BP_IRID 3676 0 1 .0002720348 05/28/2004 10:48:27
DUP_RECEIPT_ID 19 8938 1 .0526315789 05/28/2004 10:48:27
FROM_PARTICIPANT 3 0 1 .3333333333 05/28/2004 10:48:27
INITIATING_TRANSACTION_ID 1273 7684 1 .0007855460 05/28/2004 10:48:27
MESSAGE_RECORD_ID 5516 0 52 .0007084580 05/28/2004 13:18:38
RECEIPT_ID 8957 0 75 .0001116445 05/28/2004 13:18:38
RECEIPT_OR_SENT_DATETIME 1 5494 1 1.0000000000 05/28/2004 10:48:27
STATUS 3 0 2 .0000558223 05/28/2004 13:18:38
SUBSYSTEM_TRANSACTION_ID 0 8957 1 .0001116445 05/28/2004 10:48:27
TCT_CONTENT_TYPE_ID 1 0 1 1.0000000000 05/28/2004 10:48:27
TM_ACK_DATETIME 125 5594 1 .0080000000 05/28/2004 10:48:27
TO_PARTICIPANT 4 0 1 .2500000000 05/28/2004 10:48:27
TRANSACTION_DATETIME 511 0 1 .0019569472 05/28/2004 10:48:27
TRANSACTION_GROUP 2 0 1 .5000000000 05/28/2004 10:48:27
TRANSACTION_ID 8957 0 1 .0001116445 05/28/2004 10:48:27
TT_TRANSACTION_TYPE_ID 5 0 1 .2000000000 05/28/2004 10:48:27
WFP_WORKFLOW_ID 3443 5514 1 .0002904444 05/28/2004 10:48:27

20 rows selected.


INDEX STATISTICS FOR : TRANSACTIONS
===================================================


NAME NUM_ROWS DISTINCT LEAF_BLOCKS CF LEVEL ALFBPKEY LAST_ANALYZED
------------------------------ ---------- ---------- ----------- ---------- ---------- ---------- -------------------
PK_TRANSACTIONS 8957 8957 19 1615 1 1 05/28/2004 13:18:39
SYS_IL0000047021C00012$$
TRANS_INDX

INDEXES FOR TABLE: ACKNOWLEDGEMENT
================================================


INDEX_NAME COLUMN_NAME COLUMN_POSITION
------------------------------ ------------------------------ ---------------
ACKNOWLEDGEMENT_INDX STATUS 1
RECEIPT_ID 2
PK_ACKNOWLEDGEMENT RECEIPT_ID 1


TABLE STATISTICS FOR : ACKNOWLEDGEMENT
===================================================


NUM_ROWS BLOCKS AVG_ROW_LEN LAST_ANALYZED
---------- ---------- ----------- -------------------
3431 20 34 05/28/2004 13:18:37


COLUMN STATISTICS FOR : ACKNOWLEDGEMENT
====================================================


COLUMN_NAME NUM_DISTINCT NUM_NULLS NUM_BUCKETS DENSITY LAST_ANALYZED
------------------------------ ------------ ---------- ----------- ---------------- -------------------
RECEIPT_ID 3431 0 75 .0002914602 05/28/2004 13:18:37
STATUS 2 0 1 .0001457301 05/28/2004 13:18:37


INDEX STATISTICS FOR : ACKNOWLEDGEMENT
===================================================


NAME NUM_ROWS DISTINCT LEAF_BLOCKS CF LEVEL ALFBPKEY LAST_ANALYZED
------------------------------ ---------- ---------- ----------- ---------- ---------- ---------- -------------------
ACKNOWLEDGEMENT_INDX 3431 3431 8 30 1 1 05/28/2004 13:18:38
PK_ACKNOWLEDGEMENT 3431 3431 8 28 1 1 05/28/2004 13:18:37
SYS_IL0000046997C00002$$
INDEXES FOR TABLE: MESSAGES
================================================


INDEX_NAME COLUMN_NAME COLUMN_POSITION
------------------------------ ------------------------------ ---------------
PK_MESSAGES RECORD_ID 1
UK_MESSAGES_FROM_MSG MESSAGE_ID 1
FROM_ID 2


TABLE STATISTICS FOR : MESSAGES
===================================================


NUM_ROWS BLOCKS AVG_ROW_LEN LAST_ANALYZED
---------- ---------- ----------- -------------------
6550 75 76 05/28/2004 13:18:38


COLUMN STATISTICS FOR : MESSAGES
====================================================


COLUMN_NAME NUM_DISTINCT NUM_NULLS NUM_BUCKETS DENSITY LAST_ANALYZED
------------------------------ ------------ ---------- ----------- ---------------- -------------------
ASEXML_VERSION 0 6550 1 .0001526718 05/28/2004 10:48:27
EMAIL_ADDRESS 0 6550 1 .0001526718 05/28/2004 10:48:27
EMAIL_SUBJECT 0 6550 1 .0001526718 05/28/2004 10:48:27
FILENAME 0 6550 1 .0001526718 05/28/2004 10:48:27
FROM_ID 3 0 2 .0000763359 05/28/2004 13:18:38
IN_OUT 2 0 1 .5000000000 05/28/2004 10:48:27
MARKET_ID 1 0 1 1.0000000000 05/28/2004 10:48:27
MESSAGE_DATETIME 1604 0 1 .0006234414 05/28/2004 10:48:27
MESSAGE_ID 6550 0 75 .0001526718 05/28/2004 13:18:38
PRIORITY 1 0 1 1.0000000000 05/28/2004 10:48:27
RECEIPT_OR_SENT_DATETIME 2 6529 1 .5000000000 05/28/2004 10:48:27
RECORD_ID 6550 0 75 .0001526718 05/28/2004 13:18:38
SECURITY_CONTEXT 0 6550 1 .0001526718 05/28/2004 10:48:27
TCT_CONTENT_TYPE_ID 1 0 1 1.0000000000 05/28/2004 10:48:27
TO_ID 4 0 1 .2500000000 05/28/2004 10:48:27
TRANSACTION_GROUP 2 0 1 .5000000000 05/28/2004 10:48:27

16 rows selected.


INDEX STATISTICS FOR : MESSAGES
===================================================


NAME NUM_ROWS DISTINCT LEAF_BLOCKS CF LEVEL ALFBPKEY LAST_ANALYZED
------------------------------ ---------- ---------- ----------- ---------- ---------- ---------- -------------------
PK_MESSAGES 6550 6550 14 71 1 1 05/28/2004 13:18:38
UK_MESSAGES_FROM_MSG 6550 6550 35 83 1 1 05/28/2004 13:18:38


Tom Kyte
May 28, 2004 - 9:07 am UTC

"(cr=110013 r=80 w=0 time=366437514 us)"

cr = logical IO
r = physical read
w = physical write
time = point in time in millionth's of a second (subtract two of them, get elapsed time)

Looks like you don't have timed_statistics on (no cpu times). You sort of need timed_statistics on to do any tracing at all.
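As an illustration, those row source figures can be pulled apart with a few lines of script. This is just a sketch (the function name and regex are mine, not anything tkprof provides) showing how cr/r/w/time map onto numbers, with the microsecond time converted to seconds:

```python
import re

def parse_rowsource_stats(line):
    """Extract cr (logical I/O), r (physical reads), w (physical writes)
    and time (microseconds) from a tkprof row source line such as
    '1 HASH JOIN (cr=110013 r=80 w=0 time=366437514 us)'."""
    m = re.search(r"cr=(\d+) r=(\d+) w=(\d+) time=(\d+) us", line)
    if m is None:
        return None
    cr, r, w, time_us = map(int, m.groups())
    # time is reported in millionths of a second; divide to get seconds
    return {"cr": cr, "r": r, "w": w, "time_s": time_us / 1_000_000}

stats = parse_rowsource_stats("1 HASH JOIN (cr=110013 r=80 w=0 time=366437514 us)")
# 366437514 us is about 366.44 seconds -- which lines up with the
# 366.43 s elapsed Fetch time in the tkprof summary above.
```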

This is something else...

Matt, May 28, 2004 - 9:11 am UTC

I really can't account for this time. I have timings on:

SQL> show parameter timed

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------
timed_os_statistics                  integer     0
timed_statistics                     boolean     TRUE
SQL>

Sometimes the query takes 6 minutes and sometimes 3. Either way, I can't see sufficient waits to make up the lost time.

Any ideas? 

Tom Kyte
May 28, 2004 - 11:34 am UTC



according to that, you took zero cpu seconds:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.28 0 0 0 0
Execute 1 0.00 1.51 0 0 0 0
Fetch 1 0.00 366.43 80 110013 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 368.23 80 110013 0 1

I'm not aware of a machine that would do that many LIOs without *some* cpu.

So, something is amiss here -- timing does not appear to have been on when this sample was taken.

Misses in library cache during parse: 1

Reader, June 15, 2004 - 9:17 am UTC

Misses in library cache during parse: 1

Why is this at least one?

Thanks

tkprof timing

A reader, June 17, 2004 - 11:30 am UTC

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.01 0.00 0 0 0 0
Execute 19012 3.10 3.56 0 0 0 0
Fetch 19012 1056.38 1517.15 0 43282981 76048 19012
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 38028 1059.49 1520.71 0 43282981 76048 19012


Tom, in the above, how should we convert the cpu and elapsed timings to seconds or minutes?

What is the unit in which timings are displayed in tkprof output?

Further, if I want the SQL in the tkprof output to include the values of the bind variables, how should I trace, and what options should I use in tkprof?

Thanks

Tom Kyte
June 17, 2004 - 2:09 pm UTC

"divide"

They are in seconds -- that is 1056.38 cpu seconds. Divide by 60 and you have minutes.

Search for 10046 on this site to see how to set a level 12 10046 trace. The trace file (but NOT the tkprof report) will have the bind variable values.
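Spelled out with the Fetch-line numbers from the tkprof summary quoted above, the conversion is just arithmetic:

```python
# tkprof reports cpu and elapsed in seconds; divide by 60 for minutes.
cpu_seconds = 1056.38      # Fetch cpu from the summary above
elapsed_seconds = 1517.15  # Fetch elapsed from the summary above

cpu_minutes = cpu_seconds / 60
elapsed_minutes = elapsed_seconds / 60
# about 17.6 cpu minutes and 25.3 elapsed minutes
```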

tkprof output in 10g versus 9i

A reader, July 17, 2004 - 6:32 pm UTC

Hi Tom
Consider the following sql:
---

alter session set timed_statistics=true;
alter session set events '10046 trace name context forever, level 12';
alter session set session_cached_cursors=0;

select /* query 1 */ count(*) from dual;
select /* query 2 */ count(*) from dual;
declare
cursor l_c is
select /* query in cursor 1 */ count(*)
from dual;
begin
open l_c;
close l_c;
end;
/
declare
cursor l_c is
select /* query in cursor 2 */ count(*)
from dual;
begin
open l_c;
close l_c;
end;
/
-----------

When I run it in 10gr1 - the tkprof output for
the last two static cursors are shown as
one query (with the comment removed) as in:
SELECT COUNT(*)
FROM
DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.01 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.00 0 0 0 0

--
The corresponding portion of the tkprof output
for 9i shows two separate sections with the
comments in the query intact, as:
SELECT /* query in cursor 1 */ count(*)
from dual

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0

and

SELECT /* query in cursor 2 */ count(*)
from dual

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0


--
Does this mean that in 10g, for static cursors,
comments are irrelevant in considering two
queries the same (from a parsing point of view)?
Apparently not -- since the parse counts are the same
in both cases (2). So is this a tkprof output
quirk in 10g?

Thanx!




Tom Kyte
July 18, 2004 - 11:45 am UTC

PLSQL static cursor caching works with named compiled objects (except triggers).

What has happened here is that

a) plsql has always rewritten your SQL -- uppercasing, parsing out binds and so on.

b) in 10g, it appears to remove "comments that add nothing to the execution of the sql".

c) this makes your two different select count(*)'s the same in the shared pool (nice), but plsql still had to parse them (softly, at least) twice -- since it was just an anonymous block.


if you wanted to see the actual program flow, you would use aggregate=no on the tkprof command line.



All you are seeing is the normal side effect of plsql having rewritten your query, so that two queries that were coded differently actually end up "being the same". Nice for soft parsing, but it didn't do anything to reduce the actual parse count.

thanx!

A reader, July 18, 2004 - 1:23 pm UTC

"this makes your two different select count(*)'s be the same in the shared
pool (nice) but plsql had to parse (soft at least once) them twice -- since it
was just an anonymous block"

That means you can't tag sql by using comments anymore --
or maybe you can use a non-functioning hint, I suppose.
"aggregate=no" is what I was looking for -- I remember
reading it in one of your books...


Tom Kyte
July 18, 2004 - 2:51 pm UTC

ops$tkyte@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> begin
  2          for x in ( select * /* tagged */ from dual ) loop null; end loop;
  3          for x in ( select * /* tagged */ from    dual ) loop null; end loop;
  4          for x in ( select * /*+ tagged */ from dual ) loop null; end loop;
  5  end;
  6  /
 
PL/SQL procedure successfully completed.


tkprof says:

SELECT *
FROM
 DUAL
                                                                                              
                                                                                              
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          6          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          6          0           2
********************************************************************************
SELECT * /*+ tagged */
FROM
 DUAL
                                                                                              
                                                                                              
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           1
                                                                                              
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=41 us)
                                                                                              


<b>so, the two statements with regular comments were "normalized" -- consistent case, whitespace, linefeeds, and comments removed. They end up being the same statement in the shared pool.

The third statement kept its comment since the comment resembled a hint -- therein lies a technique for keeping the tags...</b>
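A rough sketch of that normalization, just to make the behavior concrete -- this only mimics what is described above and is in no way Oracle's actual algorithm. Ordinary /* ... */ comments are stripped, while hint-style /*+ ... */ comments survive:

```python
import re

def normalize_sql(sql):
    """Mimic the 10g PL/SQL rewrite described above (illustration only):
    drop plain /* ... */ comments, keep hint-like /*+ ... */ comments,
    uppercase, and collapse whitespace."""
    sql = re.sub(r"/\*(?!\+).*?\*/", " ", sql, flags=re.S)  # strip non-hint comments
    return " ".join(sql.upper().split())                    # uppercase, collapse whitespace

a = normalize_sql("select * /* tagged */ from dual")
b = normalize_sql("select * /* tagged */ from    dual")
c = normalize_sql("select * /*+ tagged */ from dual")
# a and b collapse into the same statement, so they would share one
# entry in the shared pool; c keeps its hint-like tag and stays distinct.
```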

 

yup - thanx!

A reader, July 18, 2004 - 3:00 pm UTC

I tried the same trick and it works to tag a
statement...

Thanx!

Explain plans

Sanjaya Balasuriya, August 10, 2004 - 11:42 pm UTC

Hi Tom,

In some of my tkprof outputs, some statements have execution plans and some do not.
That makes it difficult to get an idea of how those statements were run.

What is the reason for the missing execution plans?

Is it a lack of statistics? But I run dbms_stats.gather_database_stats daily.



Tom Kyte
August 11, 2004 - 9:34 am UTC

nothing to do with stats.

everything to do with "the cursor was not closed when you grabbed the trace file"

consider:


ops$tkyte@ORA9IR2> @trace 
Session altered.

<b>enabled sql_trace..</b>
 
ops$tkyte@ORA9IR2> select * from dual;
 
D
-
X

<b>run a query -- now, tk is a script I have that finds the newest trace file and tkprofs it; I call it from within sqlplus itself:</b>
 
ops$tkyte@ORA9IR2> !tk
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_15986.trc
 
TKPROF: Release 9.2.0.5.0 - Production on Wed Aug 11 09:24:09 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
<b>that tkprof is below, now I ctl-d out:</b>

 
ops$tkyte@ORA9IR2> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production

<b>run tkprof again (*same tracefile*)</b>

[tkyte@xtkyte-pc tkyte]$ tk
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_15986.trc
 
TKPROF: Release 9.2.0.5.0 - Production on Wed Aug 11 09:24:23 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
[tkyte@xtkyte-pc tkyte]$

<b>here are the two results:</b>

********************************************************************************
                                                                                                                
select *
from
 dual
                                                                                                                
                                                                                                                
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1
                                                                                                                
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 206
                                                                                                                
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
                                                                                                                
                                                                                                                
                                                                                                                <b>no query plan -- cursor was not closed, sqlplus had it open still</b>

********************************************************************************

********************************************************************************
                                                                                                                
select *
from
 dual
                                                                                                                
                                                                                                                
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1
                                                                                                                
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 206
                                                                                                                
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL OBJ#(222) (cr=3 r=0 w=0 time=49 us)
                                                                                                                
                                                                                                                
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       16.08         16.08
                                                                                                                
                                                                                                                
                                                                                                                
********************************************************************************
<b>query plan this time, sqlplus closed the cursor upon exit</b>
 

blocks in tkprof output

READER, August 24, 2004 - 1:36 pm UTC

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1186 0.05 0.04 0 0 0 0
Fetch 1185 112.55 117.71 195 1598565 9480 1185
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2371 112.60 117.75 195 1598565 9480 1185

Total LIO = 1598565+195+9480 = 1608240

I did not get a plan in TKPROF.
I took the statement and found out from Toad that it is doing a RANGE scan on an INDEX.

I found out from user_segments that the number of blocks = 1365,
so this means that the LIO should be at least 1365*1185 = 1617525.

Why this difference: 1617525 - 1608240 = 9285?

Also, why is it that I am not seeing the plan in TKPROF?
(The program is still executing.)

Thanks


Tom Kyte
August 24, 2004 - 3:32 pm UTC

total LIO = 1598565+9480; the 195 is already counted in the query or current gets.

You didn't get the plan because the cursor was not closed -- only when the cursor is closed are the STAT records written out with the plan that was used. You turned on tracing, and turned it off -- but the application being traced still had the query open.

Also, you came in in the middle of a running program and turned on tracing (I can tell that since parse=0 -- you started tracing AFTER this query was parsed, and perhaps in the middle of it being executed). So you could be missing LIOs from the execution that was happening while you turned on trace, and part of the LIOs from the execution/fetch that happened as you turned off trace -- there is no way I'd expect anything to "add up" exactly.

But I'd be pretty bummed that I was doing a full scan of 1,365 blocks just to get A SINGLE RECORD -- "indexing" comes to mind.
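Spelling that accounting out with the numbers from the tkprof summary above (the variable names are mine, just for illustration):

```python
# The tkprof Fetch line reported: disk=195, query=1598565, current=9480.
# Physical (disk) reads are already reflected in the buffer gets, so
# total logical I/O is query gets + current gets -- the 195 is NOT
# added on top.
query_gets = 1598565
current_gets = 9480
disk_reads = 195  # already counted inside the gets above

total_lio = query_gets + current_gets
# total_lio == 1608045, not the 1598565 + 195 + 9480 == 1608240
# computed in the question above.
```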

Reg TKPROF output

Ramanan, August 24, 2004 - 8:53 pm UTC

Hi Tom,
I am attaching part of the TKPROF output. I would like to know:
1. For some of the tables, "table or view does not exist" is reported. I don't know why.
2. There are some queries which have gone for rule-based optimization. I am not able to understand what those queries are.
3. What does the below part of the output mean?
4. When was the last query in the given output executed?

Regards
Ramanan
***************************
"select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring*/ count(*)
,sum(vsize("CHR_KEY"))
from
"IT4ITDM"."TC_CHARGE_STG" t

*****************************************

TKPROF OUTPUT
=============

INSERT INTO /*+append */ tc_charge_stg nologging
(
CHR_KEY,
CHR_CODE,
CHR_DESC ,
CHR_STATUS,
CHR_PRJ
)
SELECT
CHR_KEY,
CHR_CODE,
CHR_DESC ,
upper(CHR_STATUS),
CHR_PRJ
FROM charge

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 5.26 3 604 4217 26771
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 5.26 3 604 4217 26771

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 177 (IT4ITDM) (recursive depth: 1)
error during execute of EXPLAIN PLAN statement
ORA-00942: table or view does not exist

parse error offset: 290
********************************************************************************

SELECT /*+ rule */DECODE(BITAND(T.PROPERTY,2147483648),0,0,1)
FROM
SYS.OBJ$ TOB,SYS.USER$ U,SYS.TAB$ T WHERE TOB.NAME = :b1 AND U.NAME = :b2
AND TOB.OBJ# = T.OBJ# AND TOB.OWNER# = U.USER#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 16 0.00 0.00 0 0 0 0
Fetch 16 0.00 0.01 2 131 0 16
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 32 0.00 0.02 2 131 0 16

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

SELECT /*+ rule */'"' || OI.NAME || '"'
FROM
SYS.USER$ U,SYS.OBJ$ OT,SYS.IND$ I,SYS.OBJ$ OI WHERE U.NAME = :b1 AND
OT.OWNER# = U.USER# AND OT.NAME = :b2 AND I.BO# = OT.OBJ# AND I.TYPE# =
4 AND I.OBJ# = OI.OBJ#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 16 0.00 0.00 0 0 0 0
Fetch 16 0.00 0.00 0 131 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 32 0.00 0.00 0 131 0 0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

SELECT /*+ rule */C.NAME COL_NAME,C.TYPE# COL_TYPE,C.CHARSETFORM COL_CSF,
C.DEFAULT$ COL_DEF,C.NULL$ COL_NULL,C.PROPERTY COL_PROP,C.COL# COL_UNUM,
C.INTCOL# COL_INUM,C.OBJ# COL_OBJ,C.SCALE COL_SCALE,H.BUCKET_CNT H_BCNT,
H.DISTCNT H_PNDV,C.LENGTH COL_LEN,CU.TIMESTAMP CU_TIME,CU.EQUALITY_PREDS
CU_EP,CU.EQUIJOIN_PREDS CU_EJP,CU.RANGE_PREDS CU_RP,CU.LIKE_PREDS CU_LP
FROM
SYS.USER$ U,SYS.OBJ$ O,SYS.COL$ C,SYS.COL_USAGE$ CU,SYS.HIST_HEAD$ H WHERE
U.NAME = :b1 AND O.OWNER# = U.USER# AND O.TYPE# = 2 AND O.NAME = :b2
AND O.OBJ# = C.OBJ# AND C.OBJ# = CU.OBJ# (+) AND C.INTCOL# = CU.INTCOL#
(+) AND C.OBJ# = H.OBJ# (+) AND C.INTCOL# = H.INTCOL# (+)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 32 0.00 0.00 0 0 0 0
Fetch 340 0.00 0.07 10 2568 0 308
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 372 0.00 0.07 10 2568 0 308

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

SELECT /*+ rule */OI.NAME
FROM
SYS.USER$ U,SYS.OBJ$ OT,SYS.COL$ C,SYS.ICOL$ IC,SYS.OBJ$ OI WHERE U.NAME =
:b1 AND OT.OWNER# = U.USER# AND OT.NAME = :b2 AND OT.TYPE# = 2 AND
OT.OBJ# = C.OBJ# AND C.NAME = :b3 AND OT.OBJ# = IC.BO# AND IC.INTCOL# =
C.INTCOL# AND IC.OBJ# = OI.OBJ#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 308 0.00 0.01 0 0 0 0
Fetch 308 0.00 0.07 6 3596 0 48
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 616 0.00 0.09 6 3596 0 48

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

SELECT /*+ rule */CD.TYPE#
FROM
SYS.CCOL$ CC,SYS.CDEF$ CD WHERE CC.OBJ# = :b1 AND CC.INTCOL# = :b2 AND
CD.CON# = CC.CON# AND CD.OBJ# = CC.OBJ# AND CD.ENABLED IS NOT NULL AND
CD.INTCOLS = 1 AND CD.TYPE# IN ( 2,3 ) UNION ALL SELECT /*+ rule */3
FROM SYS.IND$ I,SYS.ICOL$ IC WHERE I.BO# = :b1 AND IC.INTCOL# = :b2 AND
I.OBJ# = IC.OBJ# AND I.INTCOLS = 1 AND BITAND(PROPERTY,1) = 1 AND
BITAND(FLAGS,1025) = 0


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 178 0.00 0.01 0 0 0 0
Fetch 178 0.00 0.04 3 1343 0 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 356 0.00 0.05 3 1343 0 8

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

SELECT /*+ rule */O.SUBNAME PART_NAME,O.OBJ# OBJ_NUM
FROM
SYS.USER$ U,SYS.OBJ$ O WHERE U.NAME = :b1 AND O.OWNER# = U.USER# AND
O.NAME = :b2 AND O.TYPE# = 19 AND ((:b3 IS NULL ) OR (:b3 = O.SUBNAME ))
ORDER BY PART_NAME


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 16 0.00 0.00 0 0 0 0
Fetch 16 0.00 0.00 0 80 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 32 0.00 0.00 0 80 0 0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring*/ count(*)
,sum(vsize("CHR_KEY"))
from
"IT4ITDM"."TC_CHARGE_STG" t


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.03 0 75 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.04 0 75 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 177 (IT4ITDM) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=75 r=0 w=0 time=39125 us)
26771 INDEX FAST FULL SCAN XPK_TC_CHARGE_STG (cr=75 r=0 w=0 time=14891 us)(object id 26517)

error during execute of EXPLAIN PLAN statement
ORA-00942: table or view does not exist

parse error offset: 192
********************************************************************************
select min(minbkt),maxbkt,substrb(dump(min(val),16,0,32),1,120) minval,
substrb(dump(max(val),16,0,32),1,120) maxval,sum(rep) sumrep, sum(repsq)
sumrepsq, max(rep) maxrep, count(*) bktndv
from
(select val,min(bkt) minbkt, max(bkt) maxbkt, count(val) rep, count(val)
*count(val) repsq from (select /*+ cursor_sharing_exact dynamic_sampling(0)
no_monitoring*/ "CHR_KEY" val, ntile(75) over (order by "CHR_KEY") bkt
from "IT4ITDM"."TC_CHARGE_STG" t where "CHR_KEY" is not null) group by val)
group by maxbkt order by maxbkt


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.57 0 52 0 75
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.59 0 52 0 75

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 177 (IT4ITDM) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
75 SORT GROUP BY (cr=52 r=0 w=0 time=572507 us)
26771 VIEW (cr=52 r=0 w=0 time=417792 us)
26771 SORT GROUP BY (cr=52 r=0 w=0 time=381933 us)
26771 VIEW (cr=52 r=0 w=0 time=221837 us)
26771 WINDOW BUFFER (cr=52 r=0 w=0 time=207144 us)
26771 INDEX FULL SCAN XPK_TC_CHARGE_STG (cr=52 r=0 w=0 time=16042 us)(object id 26517)

error during execute of EXPLAIN PLAN statement
ORA-00942: table or view does not exist

parse error offset: 504


Tom Kyte
August 25, 2004 - 7:11 am UTC

1) Because the explain=user/pw you used did not have the necessary privileges to explain the query. That is OK, however, as explain= should never be used anymore; it isn't relevant. The trace file has the correct and proper plan. You'll find tkprof to be tons faster without explain=.

2) umm? they asked to be rbo optimized? not sure what you mean there.

3) not sure what you mean there. If you are asking "so, how do I read a tkprof report", please see the performance guide (otn.oracle.com) or "Expert One on One Oracle" -- chapter 10

4) not sure what you mean.

TKPROF output

Ramanan, August 27, 2004 - 10:29 am UTC

Thanks, Tom. Let me put the scenario as a simple case.
The main issue we now have is that the production box is taking more time than the development box. The main process we run brings Sybase data into Oracle over a DB link. I am using Oracle 9.2.0.4.

This is one of the statements which is taking more time in production than in development. I am also giving the tkprof output for the statement.

INSERT INTO /*+append */ tc_tab1 nologging
(
PSD_KEY,
PSD_PSL,
PSD_MIN,
ENTRY_DATE
)
SELECT
PSD_KEY,
PSD_PSL,
PSD_MIN,
TO_DATE(PSD_DATE,'YYYYMMDD')
FROM t1@data1.world

TKPROF IN DEVELOPMENT
======================
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.17 0 0 1 0
Execute 1 0.00 442.83 0 17269 125393 4094345
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 443.00 0 17269 125394 4094345

TKPROF IN PRODUCTION
======================

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.09 0 0 1 0
Execute 1 80.53 2314.13 0 18654 125631 4101783
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 80.54 2314.22 0 18654 125632 4101783

I am not able to guess why the same statement is taking CPU time in production. Another doubt I have is: since I have used a direct load, shouldn't there be disk activity?
Also, can you tell me what could be the reason for the higher elapsed time?



Tom Kyte
August 27, 2004 - 10:38 am UTC

you didn't have timed statistics on in development -- something was done wrong there -- no way that took "0" cpu seconds in development.

In any case -- use a 10046 level 12 trace and see what WAIT events you are encountering (if there are no waits, then you are just waiting on CPU in production -- meaning you have more users doing stuff)... but get the wait events.
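Enabling that 10046 level 12 trace for your own session looks something like the following sketch (trace file names and locations depend on your user_dump_dest setting):

```sql
-- make sure CPU/elapsed timings are collected (timed statistics were off in the development trace)
alter session set timed_statistics = true;

-- level 12 = trace SQL plus bind variable values plus wait events
alter session set events '10046 trace name context forever, level 12';

-- ... run the slow INSERT here ...

-- stop tracing before the session exits
alter session set events '10046 trace name context off';
```

Then run tkprof (without explain=) on the trace file written to user_dump_dest.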

TKprof output

Ramanan, August 28, 2004 - 6:43 pm UTC

Thanks, Tom. While I am trying to find out why the process is taking time, another problem has popped up.
I am getting "Oracle error 257: archiver error. Connect internal only, until freed." Sorry for posting an unrelated question in this forum.

There are many inserts in the whole process I am doing. In all the inserts I have given /*+append */ and nologging. And after the insertion, I am using the dbms_stats package to analyze the table. So why am I getting this error? Below is part of the code from the process.

INSERT INTO /*+append */ t_tab_stg nologging
(
CHR_KEY,
CHR_CODE,
CHR_DESC ,
CHR_STATUS,
CHR_PRJ
)
SELECT
CHR_KEY,
CHR_CODE,
CHR_DESC ,
upper(CHR_STATUS),
CHR_PRJ
FROM ta1@dbl.com;



Tom Kyte
August 28, 2004 - 7:06 pm UTC

nologging is an attribute of a table.

nologging is meaningless when used in the insert statement like that.

and the append hint is in the wrong location...

you could as easily code:

insert into t_tab_stg Hello_World ( ....

and get the same results!!

alter table t_tab_stg nologging;
insert /*+ append */ into t_tab_stg ....



but for now, get your DBA to allocate sufficient storage for archiving to take place.
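As a side note, one way to see how much redo a given insert actually generated is to check the session statistic 'redo size' before and after -- a sketch, assuming you have select privileges on the v$mystat and v$statname views:

```sql
-- redo generated so far by this session, in bytes
select s.value as redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

-- run the insert, then re-run the query above;
-- the difference is the redo the insert generated
```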

RD, August 30, 2004 - 1:57 am UTC

Hi Tom,

I am going through chapter 5 of "effective oracle by design"
and noticed that on my database, whenever I generate a tkprof, my elapsed time is always a lot more than my cpu time.
I am using Oracle 8i (Windows 2000).

SELECT *
FROM
BIG_TABLE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.09 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 1 4 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.09 0 1 4 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 5 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL BIG_TABLE

This is always the case when ever I try to do any example from the book. How should I go about trying to reduce this ?
Actually where should I start?

Thanks as always.
Regards,
RD.



Tom Kyte
August 30, 2004 - 8:37 am UTC

just means "to get 1/100th of a cpu second, we needed 9/100th of a second by the wall clock -- we waited for something".

maybe you waited for cpu, your machine is doing many things at once.
maybe you waited for a latch

maybe you actually didn't wait for anything -- and you are just seeing the natural side effect of two clocks being used to measure the same thing.

the cpu clock and the wall clock.

for all intents and purposes, these numbers are SOOOO close as to be "the same"

Go to chapter 2, "The Query and Execution Statistics". There I discuss how CPU time can be larger than Elapsed time (impossible in reality, but it happens all of the time in practice). It is caused by this measurement error.

These numbers are "close enough to be the same".

Understanding the output

A reader, September 02, 2004 - 12:31 pm UTC

Hi Tom,

I am trying to understand the output generated by tkprof. The output is generated but I see this error at the start:

error connecting to database using: scott/tiger
ORA-12560: TNS:protocol adapter error

I am able to connect using the above uid/pwd in SQL*Plus.

I also see "EXPLAIN PLAN option disabled." I did not set/unset any option. The command line that I used is:

tkprof ora_3144.trc ora_3144.txt sys=no explain=scott/tiger

Is that why before each statement in the trace output, I see "(recursive depth: 1)"?

Thanks for all your help.

Tom Kyte
September 02, 2004 - 1:44 pm UTC

leave off "explain=user/pass" and it'll go away.


you do not ever want to use explain= -- the plans are in the trace file (as long as you exited the session/closed the cursors), and the plans that are in there are 100% accurate, whereas explain plan will "lie" to you

Reg TKPROF

Ramanan, September 03, 2004 - 7:52 pm UTC

Hi Tom,
I told the DBA to increase the archive log size, and I have also changed the insert script as given below, but I feel it's still generating lots of redo and also taking 25 minutes.
INSERT /*+append */ INTO t_tab_stg nologging
(
CHR_KEY,
CHR_CODE,
CHR_DESC ,
CHR_STATUS,
CHR_PRJ
)
SELECT
CHR_KEY,
CHR_CODE,
CHR_DESC ,
upper(CHR_STATUS),
CHR_PRJ
FROM ta1; -- This table is analyzed.

Both the tables (t_tab_stg, ta1) used are Oracle tables.
To add further, this table is going to get populated with 3,596,659 records and also has an index.

Note:
Once the t_tab_stg table gets populated, it is going to be used to populate another table. So I need the index created before populating the second table.

My questions are:
1. From the different explanations you have given on the /*+append */ hint, I understand that since I have an index on the table, the index will generate redo, but the insertion of data shouldn't generate redo. Correct me if I am wrong.
2. If I disable the index while inserting and then enable it after the insert is done, will the time be reduced?






Tom Kyte
September 04, 2004 - 10:52 am UTC

that nologging is no different than

insert /*+ append */ into t_tab_stg does_this_make_you_feel_better ( ....


it is just a correlation name, not doing anything for you.


my first question would be "well, why are you copying all rows from t1 to t2, why not just work on them in t1 OR put them into t2 in the first place"

second, if you want this to be "fast", you would/could

a) disable the index
b) do the insert
c) re-enable (rebuild) the index

the index is ALWAYS going to be logged, regardless
the index maintenance probably accounts for most of the runtime


1) append has nothing but nothing to do with redo generation in itself. You need to alter table t nologging for the append hint to skip redo generation *for the table only, not the index*.


2) the time to insert surely will, yes.
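The disable/insert/rebuild sequence described above might look like the sketch below. The index name t_tab_stg_idx is hypothetical, and this assumes a non-unique index (a unique or primary key index cannot be skipped this way) plus 9i's session-level skip_unusable_indexes parameter:

```sql
-- a) mark the index unusable so the insert skips index maintenance
alter index t_tab_stg_idx unusable;
alter session set skip_unusable_indexes = true;

-- b) direct-path insert (assumes the table itself was altered to nologging)
insert /*+ append */ into t_tab_stg
select chr_key, chr_code, chr_desc, upper(chr_status), chr_prj
  from ta1;
commit;

-- c) rebuild the index in one pass afterwards
alter index t_tab_stg_idx rebuild;
```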

TKPROF output

Ramanan, September 04, 2004 - 7:53 pm UTC

Thanks tom.

1. Sorry, Tom. Let me be more specific. The table t2 would be a summary table, a table populated by a join with another table. Actually, these tables are used in reports, so I need to populate t1 and then populate t2 with a join of t1 and some other table.

2. Here our DBAs don't allow us to alter a table to nologging; they say that if something happens to the database while I am doing the insertion, they will have problems recovering the database. So if I set nologging before the insertion and then set logging back once the insertion is over, will that be a problem?




Tom Kyte
September 05, 2004 - 10:16 am UTC



2) The DBAs are nearly correct -- the operation you would perform with nologging (funny, they said "don't do that" and you tried to find a way to do it anyway....) would not be recoverable. If, after you performed that non-logged operation, they needed to recover a part of the database that included your objects, they would discover "hey, we cannot do that".

Now, redo log generation should not be affecting your performance *at all* really (lgwr would be streaming the redo out in the background, you do not wait for lgwr until you commit and you'll find the commit of 1,000,000,000 newly inserted rows takes about the same amount of time as the insert and commit of a single row). That is, unless you are subject to "checkpoint not complete", "archival required" or redo log buffer space waits -- again, suggest 10046 level 12 trace and tkprof to see what, if anything, you might be waiting on.

Tkprof output

Sanjaya Balasuriya, September 06, 2004 - 12:39 am UTC

Hi Tom,

In some tkprof output files, for some statements, I have noticed that there is no "optimizer goal".

Does that imply that those statements were executed with the RBO?

Thanks in advance.

-San

Tom Kyte
September 06, 2004 - 10:06 am UTC

give us a "for example"

Here are the examples

Sanjaya Balasuriya, September 07, 2004 - 1:57 am UTC

Hi Tom,

Regarding my earlier review, here I have posted example extracts from my tkprof output.

Example 1 :

This statement uses hints.

Select /*+ Index (Acc_Inward_Rem_Det I_ACC_INWARD_REM_KNOCKOFF) */
CompanyCode,FundType,
Sum (Decode (InsuranceType,'GENERAL',Nvl(AmountPaid,0),0)) General_Premium_Income,
Sum (Decode(InsuranceType,'LIFE',Decode(InsertionType,Null,Nvl(AmountPaid,0),0),0)) Life_FY_Premium_Income,
Sum (Decode(InsuranceType,'LIFE',Decode(InsertionType,'RENEW',Nvl(AmountPaid,0),0),0)) Life_Renewal_Premium_Income,
(Sum (Decode (InsuranceType,'GENERAL',Nvl(AmountPaid,0),0)) + Sum (Decode(InsuranceType,'LIFE',Decode(InsertionType,Null,Nvl(AmountPaid,0),0),0)) + Sum (Decode(InsuranceType,'LIFE',Decode(InsertionType,'RENEW',Nvl(AmountPaid,0),0),0))) Total_Premium_Income
From Acc_Inward_Rem_Det
WHERE ( CompanyCode = : PCompanyCode And InsuranceType In ( 'GENERAL' , 'LIFE' ) And Nvl ( BatchStatus , '*' ) In ( 'BATCH' , '*' ) And Receipt_TxnType = 'PRM' And Status In ( 'OPEN' , 'ONHLD' , 'CANCL' ) And TransType <> 'ADV' And DocumentLinkId Is Not Null And ReceiptDate Between : PFromDate And : PToDate ) AND ( :companycode1 = CompanyCode) AND ( :branchcode = FundType) Group By CompanyCode , FundType

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 101 0.01 0.01 0 0 0 0
Fetch 101 1320.71 2783.41 1134314 37304827 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 204 1320.72 2783.43 1134314 37304827 0 1

Misses in library cache during parse: 1
Parsing user id: 427

=========================================================

Example 2 :

This statement uses SQL functions.

Select CompanyCode, UnitCode, AgentCode AdoAgentCode , MiddleName,LastName ,
Ltrim (ShortName) ShortName
From Agent_Master
Where CompanyCode = :PCompanyCode
And UnitCode In (Select UnitCode From Unit_Master)
And ( AgentCode = :PFrom_AgentCode1 Or :PFrom_AgentCode1 is Null )
-- And AgentClass In ('BPAM','BPO','BPA','BPE','BPSE','MGR','AMGR')
And (AgentClass In ('MGR','AMGR') or agentclass like 'BP%')
-- And InsuranceType In ('GENERAL','BOTH')
Order By Ltrim (ShortName)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 85 0.08 0.22 10 1102 0 1602
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 88 0.08 0.22 10 1102 0 1602

Misses in library cache during parse: 1
Parsing user id: 427

========================================================

For other statements it is like;

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 427 (recursive depth: 1)

Thanks in advance for an explanation.

-San




Tom Kyte
September 07, 2004 - 7:22 am UTC

I cannot reproduce that behavior, using your very queries, under 9iR1, 9iR2, or 10gR1.

Any chance these are "edited files"?

Tkprof output

Sanjaya Balasuriya, September 07, 2004 - 11:48 pm UTC

Hi Tom,

Those statements and stats tables were extracted directly from the output files.

And I suspect those statements were executed with the RBO, since they use SQL functions and hints.
Am I correct?

Regards.

-San

Tom Kyte
September 08, 2004 - 9:08 am UTC

the first one used a well-formed hint. It is using the CBO for sure -- hints -> CBO, pretty much.


Functions - you cannot say that. the RBO does "functions", the CBO does "functions"

I would say however with output like:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 101 0.01 0.01 0 0 0 0
Fetch 101 1320.71 2783.41 1134314 37304827 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 204 1320.72 2783.43 1134314 37304827 0 1

I would be seriously concerned about the much larger fish you have to fry.

37,304,827 and 46 minutes to retrieve a single record -- ouch. You've got an issue there.



look in your raw trace file:

....
PARSING IN CURSOR #1 len=18 dep=0 uid=48 oct=3 lid=48 tim=1068992375930646 hv=1333943659 ad='5d86c1c4'
select * from dual
END OF STMT
PARSE #1:c=0,e=1096,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=1068992375930635
BINDS #1:
EXEC #1:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1068992375930855
WAIT #1: nam='SQL*Net message to client' ela= 3 p1=1650815232 p2=1 p3=0
........


you look for the cursor #, #1 in this case. Find the following EXEC line....

on the exec line, there is an og=

og=1 -- all rows
og=2 -- first rows
og=3 -- rule
og=4 -- choose

what does yours show?

TKPROF output

Ramanan, September 08, 2004 - 8:33 pm UTC

Tom,
Finally I was able to get the TKPROF output file for event 10046 at level 12.
I am attaching just a small part of the file; the insert statement takes 41 minutes to finish.

INSERT INTO tc_psdetail_stg nologging
(
PSD_KEY,
PSD_PSL,
PSD_MIN,
ENTRY_DATE
)
SELECT
PSD_KEY,
PSD_PSL,
PSD_MIN,
TO_DATE(PSD_DATE,'YYYYMMDD')
FROM PSDETAIL

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.10 0 0 1 0
Execute 1 81.53 2352.47 0 18876 127418 4160320
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 81.54 2352.57 0 18876 127419 4160320

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 177 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
log file switch completion 6 0.06 0.32
--------------------------------------------------------------
Also attaching the end of the TKPROF file.

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 12918 2.14 3.64 0 5 64 0
Execute 21726 2024.88 8561.61 3376440 2224527 5181202 16164628
Fetch 18322 647.07 1683.88 381399 549972 265 14253
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 52966 2674.09 10249.14 3757839 2774504 5181531 16178881

Misses in library cache during parse: 94
Misses in library cache during execute: 2

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to dblink 15 0.00 0.00
SQL*Net message from dblink 15 0.02 0.06
log file sync 218 1.00 87.13
rdbms ipc reply 231 2.00 86.25
db file sequential read 14221 0.49 55.39
local write wait 68 1.00 22.97
log buffer space 4955 0.95 263.30
direct path write 15163 0.58 610.96
direct path read 354024 0.61 649.62
log file switch completion 272 1.00 57.43
db file scattered read 50365 0.46 128.19
write complete waits 101 1.00 99.46
free buffer waits 1908 1.14 1775.55
buffer busy waits 2 0.00 0.00
latch free 15 0.01 0.02
log file switch (checkpoint incomplete) 59 1.00 55.44

Will you be able to give your inputs with the above details?





Tom Kyte
September 09, 2004 - 7:48 am UTC

did you prematurely chop off the insert wait report in this tkprof cut and paste?

but given the extremely high free buffer waits -- I'd guess your IO is the bottleneck here -- there isn't sufficient free space in the buffer cache to hold the inserted data, and you are waiting for dbwr to write it out and free up some buffers so you can insert more.

what happened to your APPEND? that would do a direct path write (avoid buffer cache)....

when you look at the IO on your system there -- do you see a problem?

you do have checkpoint not completes in there as well -- indicating dbwr is not keeping up -- are you using async IO, do you have multiple dbwr's/io slaves? are you trying to write all of this data to a single disk or is it nicely striped?


direct path read/writes -- perhaps your sort area is too small as well.

Look at your major wait events --

free buffer waits (waiting for an unused buffer -- make dbwr better),

direct path read/write (temp, hows your sort/hash areas),

log buffer space (your log buffer isn't getting flushed fast enough indicating either a) it's too small, b) your IO is again the issue - your log devices)



Looks like it's a lot about "poorly performing IO operations" -- that is where you want to spend your time looking first.
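To see which waits dominate system-wide (not just in one trace), a quick query against v$system_event can help. This is a sketch; it assumes select access to the v$ views, and in 9i time_waited is reported in centiseconds:

```sql
-- top wait events across the instance, worst first
select event, total_waits, time_waited
  from v$system_event
 where event not like 'SQL*Net%'  -- filter the usual idle network waits
 order by time_waited desc;
```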



tkprof output err=1

Winston, September 12, 2004 - 2:15 am UTC

I was using HTML DB to execute a query. The region expects an item and, based on that item, retrieves a CLOB and outputs it to the GUI. Here is the PL/SQL code:

begin
if :P11_X_ID is null then
     return;
end if;
for x in (
select  column_value
      from table( query_cfile( cursor( select theClob from demo where id = TO_NUMBER(:P11_X_ID) ) 
     ) ))loop

     htp.prn(x.column_value);
end loop;

end;


The first time I ran the page I was able to get it back without any issue; however, when I entered a second ID, I got "Error ERR-1802 Unable to find item id "187846523855309166"
OK
Unexpected error". Then I tried running the same page in a different browser (switching from Firefox to IE), entered the second ID, and was able to get the page back. The same problem happened in the second browser; that is, when I entered another ID to search, I got the same error.

And I turned on the trace using p_trace=YES


grep -in err stra_ora_10201.trc 

83:            error_template,
205:                where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_SORT')
221:alter session set NLS_TERRITORY = "AMERICA"
600:        process_error_message,
634:        process_error_message,
873:ERROR #34:err=1 tim=2122883685
968:   value="wwv_flow.find_item_err2"
979:   value="wwv_flow.unexpected_err"
1038:select error_page_template 
1102:   value="wwv_flow.err"
1113:   value="wwv_flow.err"
1304:   value="wwv_flow.find_item_err2"
1315:   value="wwv_flow.unexpected_err"
1358:   value="wwv_flow.err"
1369:   value="wwv_flow.err"
1549:   value="wwv_flow.find_item_err2"
1560:   value="wwv_flow.unexpected_err"
1603:   value="wwv_flow.err"
1614:   value="wwv_flow.err"
1818:ERROR #34:err=1 tim=2122883705
1874:   value="wwv_flow.find_item_err2"
1885:   value="wwv_flow.unexpected_err"
1928:   value="wwv_flow.err"
1939:   value="wwv_flow.err"
2143:ERROR #34:err=1 tim=2122883713
2199:   value="wwv_flow.find_item_err2"
2210:   value="wwv_flow.unexpected_err"
2253:   value="wwv_flow.err"
2264:   value="wwv_flow.err"
2757:            error_template,
2879:                where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_SORT')
2895:alter session set NLS_TERRITORY = "AMERICA"
3274:        process_error_message,
3308:        process_error_message,
3575:            error_message,
3582:            nvl(error_display_location,'ON_ERROR_PAGE') error_display_location
3640:   value="wwv_flow.no_page_to_branch_err"

Since the error happened on Cursor #34, here I retrieved the #34 from the trace file:

=====================
PARSING IN CURSOR #34 len=461 dep=1 uid=63 oct=2 lid=63 tim=1069307285981395 hv=698873200 ad='5bdf916c'
INSERT into wwv_flow_data (
                flow_instance,
                item_id,
                item_element_id,
                item_name,
                name_length,
                item_value,
                session_state_status,
                flow_id
            ) values (
                :b6,
                :b5,
                :b4,
                :b3,
                length(:b3),
                :b2,
                'I',
                :b1)
END OF STMT
PARSE #34:c=0,e=132,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=1069307285981373
BINDS #34:
 bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=1 size=24 offset=0
   bfp=4063ffe8 bln=22 avl=11 flg=05
   value=3707546874081609208
 bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=24 offset=0
   bfp=0ae3d23c bln=22 avl=09 flg=09
   value=1486322312855973
 bind 2: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=24 offset=0
   bfp=0ae3e0d0 bln=22 avl=02 flg=09
   value=1
 bind 3: dty=1 mxl=32(08) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
   bfp=0ae3e13c bln=32 avl=08 flg=09
   value="P11_X_ID"
 bind 4: dty=1 mxl=32(08) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
   bfp=0ae3e13c bln=32 avl=08 flg=09
   value="P11_X_ID"
 bind 5: dty=1 mxl=128(02) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=128 offset=0
   bfp=0adbb4e0 bln=128 avl=02 flg=09
   value="57"
 bind 6: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=1 size=24 offset=0
   bfp=4063ffc4 bln=22 avl=03 flg=05
   value=105
=====================
PARSING IN CURSOR #35 len=119 dep=2 uid=0 oct=3 lid=0 tim=1069307285985274 hv=3286148528 ad='5bfee648'
select c.name, u.name from con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.user#
END OF STMT
PARSE #35:c=0,e=176,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1069307285985249
BINDS #35:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
   bfp=407975e0 bln=22 avl=04 flg=05
   value=32720
EXEC #35:c=0,e=514,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1069307285986292
FETCH #35:c=0,e=88,p=0,cr=2,cu=0,mis=0,r=0,dep=2,og=4,tim=1069307285986517
STAT #35 id=1 cnt=0 pid=0 pos=1 obj=0 op='NESTED LOOPS  (cr=2 r=0 w=0 time=78 us)'
STAT #35 id=2 cnt=0 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=2 r=0 w=0 time=74 us)'
STAT #35 id=3 cnt=0 pid=2 pos=1 obj=31 op='TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 r=0 w=0 time=70 us)'
STAT #35 id=4 cnt=0 pid=3 pos=1 obj=53 op='INDEX RANGE SCAN I_CDEF4 (cr=2 r=0 w=0 time=62 us)'
STAT #35 id=5 cnt=0 pid=2 pos=2 obj=28 op='TABLE ACCESS BY INDEX ROWID CON$ (cr=0 r=0 w=0 time=0 us)'
STAT #35 id=6 cnt=0 pid=5 pos=1 obj=49 op='INDEX UNIQUE SCAN I_CON2 (cr=0 r=0 w=0 time=0 us)'
STAT #35 id=7 cnt=0 pid=1 pos=2 obj=22 op='TABLE ACCESS CLUSTER USER$ (cr=0 r=0 w=0 time=0 us)'
STAT #35 id=8 cnt=0 pid=7 pos=1 obj=11 op='INDEX UNIQUE SCAN I_USER# (cr=0 r=0 w=0 time=0 us)'
=====================
PARSING IN CURSOR #35 len=84 dep=2 uid=0 oct=3 lid=0 tim=1069307285988552 hv=2458412332 ad='5b779008'
select o.name, u.name from obj$ o, user$ u  where o.obj# = :1 and o.owner# = u.user#
END OF STMT
PARSE #35:c=0,e=142,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1069307285988532
BINDS #35:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
   bfp=407975e0 bln=22 avl=04 flg=05
   value=32720
EXEC #35:c=0,e=431,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1069307285989485
FETCH #35:c=0,e=187,p=0,cr=5,cu=0,mis=0,r=1,dep=2,og=4,tim=1069307285989805
STAT #35 id=1 cnt=1 pid=0 pos=1 obj=0 op='NESTED LOOPS  (cr=5 r=0 w=0 time=169 us)'
STAT #35 id=2 cnt=1 pid=1 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 r=0 w=0 time=74 us)'
STAT #35 id=3 cnt=1 pid=2 pos=1 obj=36 op='INDEX UNIQUE SCAN I_OBJ1 (cr=2 r=0 w=0 time=37 us)'
STAT #35 id=4 cnt=1 pid=1 pos=2 obj=22 op='TABLE ACCESS CLUSTER USER$ (cr=2 r=0 w=0 time=75 us)'
STAT #35 id=5 cnt=1 pid=4 pos=1 obj=11 op='INDEX UNIQUE SCAN I_USER# (cr=1 r=0 w=0 time=16 us)'
EXEC #34:c=0,e=8619,p=0,cr=8,cu=10,mis=0,r=0,dep=1,og=4,tim=1069307285991108
ERROR #34:err=1 tim=2122883685
=====================

I looked at page 473 of your book, but I could not find a solution, as I didn't get any error number.

Here is the version:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
PL/SQL Release 9.2.0.4.0 - Production
CORE    9.2.0.3.0       Production
TNS for Linux: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production



I also searched HTML DB discussion forum but I couldn't get any useful information. My page is very simple, it contains one region, one item, one button (click to search).

Would you please advise what the next step should be?

Thanks in advance, 

Tom Kyte
September 12, 2004 - 11:11 am UTC

Why didn't you post the question in the HTML DB forum? That would be the best place for it, don't you think? The guys who WROTE HTML DB monitor that forum, and I happen to know they would be the best resources for answering this.

And besides, I would just send this onto them anyhow.

So, since you know where the forum is -- ask it over there.

trace file contents

Sanjaya Balasuriya, September 12, 2004 - 2:16 am UTC

Hi Tom,

This regards the September 7th review.

The og value for the above statement in the trace file is 4.
That means optimizer_goal was choose.

But optimizer_mode has been set to first_rows.

Since first_rows also comes under the CBO, does this mean the above statements were executed under the RBO?

Another question Tom;

Is it correct to say a database would run under the RBO instead of the CBO because the stats have become outdated?
For example, I gather stats on Saturday morning. The database, which is an OLTP system, is under heavy load throughout the week. When Friday comes, is the database running under the RBO?
Or does a database run under the RBO only when there are no stats at all?


And I think stats for OLTP databases should be gathered daily. But some people argue that this would make performance worse and that stats should be gathered only weekly.
What do you think ?

Thank in advance.

-San


Tom Kyte
September 12, 2004 - 11:18 am UTC

how do I know the optimizer goal was set to first rows?


the "above statement" -- you have two statements, and the trace file says they were optimized using og=4, choose. The first one DEFINITELY used the CBO because of the well-formed hint you have (well formed, but ill advised). The second one would use the RBO if none of the myriad reasons for invoking the CBO applied (eg: a table was analyzed, one of the tables is partitioned, one of the tables has a non-default degree of parallelism, one of the tables is an IOT, .........)


So, i think the optimizer goal was NOT first rows.  It was choose.

ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set optimizer_goal=rule;
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual og_rule;
 
D
-
X
 
ops$tkyte@ORA9IR2> alter session set optimizer_goal=first_rows;
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual og_FR;
 
D
-
X
 
ops$tkyte@ORA9IR2> alter session set optimizer_goal=all_rows;
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual og_AR;
 
D
-
X
 
ops$tkyte@ORA9IR2> alter session set optimizer_goal=choose;
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual og_C;
 
D
-
X
 
=====================
PARSING IN CURSOR #1 len=26 dep=0 uid=190 oct=3 lid=190 tim=1069337079731167 hv=4011106869 ad='5716ee64'
select * from dual og_rule
END OF STMT
PARSE #1:c=0,e=1081,p=0,cr=0,cu=0,mis=1,r=0,dep=0,<b>og=3</b>,tim=1069337079731159
=====================
PARSING IN CURSOR #1 len=24 dep=0 uid=190 oct=3 lid=190 tim=1069337079763349 hv=2521806656 ad='571c81e4'
select * from dual og_FR
END OF STMT
PARSE #1:c=10000,e=1632,p=0,cr=3,cu=0,mis=1,r=0,dep=0,<b>og=2</b>,tim=1069337079763342
=====================
PARSING IN CURSOR #1 len=24 dep=0 uid=190 oct=3 lid=190 tim=1069337079801330 hv=3870449361 ad='5717f3e4'
select * from dual og_AR
END OF STMT
PARSE #1:c=10000,e=9184,p=0,cr=1,cu=0,mis=1,r=0,dep=0,<b>og=1</b>,tim=1069337079801320
=====================
PARSING IN CURSOR #1 len=23 dep=0 uid=190 oct=3 lid=190 tim=1069337079813553 hv=2979642908 ad='57148050'
select * from dual og_C
END OF STMT
PARSE #1:c=0,e=388,p=0,cr=0,cu=0,mis=1,r=0,dep=0,<b>og=4</b>,tim=1069337079813544

 

Optimizer goal

Sanjaya Balasuriya, September 12, 2004 - 10:06 pm UTC

Hi Tom,

The parameter optimizer_mode=first_rows in the init file. (spfile is not used for this database.)

Can optimizer_goal be different from optimizer_mode?

Thanks in advance.

-San

Tom Kyte
September 13, 2004 - 7:34 am UTC

optimizer goal/mode are synonymous.

if first_rows WAS in fact in the pfile, the session must have changed it.

You can verify this behavior on your own system here. just log in, trace, alter session, trace -- you'll see it change on your own system.

QUESTIONS

PRS, September 18, 2004 - 4:34 pm UTC

Tom,
Thanks for helping the oracle community.
I have some questions.
1. We have a front-end application (using .NET) that makes direct searches
on an Oracle (9.2.0.4) database with bind variables in the query. The
front end is passing all bind variables as character strings, even for
number columns. Queries do return results pretty fast. But I was thinking
that Oracle would do a full table scan if you pass a character for a
number value, in spite of having indexes.

Example: select * from ps_rc_case where case_id = :1

where :1 should be a numeric value, as the data type in the table is
numeric. But the front end is passing '12345'. Still, the query comes
back pretty fast. So I am guessing that somewhere :1 is converted into a
number, either in the .NET Oracle connector or somewhere in the Oracle
database. CASE_ID is an indexed column.

Is this guess right?

2. We are currently on RAID (1+0) for the data disks (/data1, /data2, /data3, etc.)
on which we have control files, redo logs, and data files.
But we are getting new hardware with a Storage Area Network (SAN) on RAID 5.
On the SAN all data disks go to one mount point, /oracle/<SID>/oradata.
So we are going to have control files, redo logs, and data files all under
/oracle/<SID>/oradata. Archive is going to be on /oracle/<SID>/arc.

I have asked my system administrator about I/O or disk contention
problems from having RAID 5 and one directory for everything.
But he says not to worry about that; there should not be any I/O or disk
contention problem.

What do you think?

3. We have some queries for which the Oracle cost-based optimizer explain plan
shows a hash join for some of the tables. It does a full table scan on one
of the tables in the query. This particular table has 8 million rows, so the
query takes a long time, 5 minutes, to retrieve ~300 rows.

As soon as I say ALTER SESSION SET HASH_JOIN_ENABLED=FALSE and flush the
shared pool, the query comes back in 20 seconds. In that case it does use an
index on the same table.

So the hash join was the problem.

Should I set the HASH_JOIN_ENABLED parameter to FALSE in the INIT.ORA?


Thanks,
PRS

Tom Kyte
September 18, 2004 - 5:23 pm UTC

what does this have to do with the original question?

1) there will be an implicit cast in this case.  a simple explain plan or tkprof on your part would have shown you one way or the other:


ops$tkyte@ORA9IR2> variable x varchar2(5)
ops$tkyte@ORA9IR2> create table t ( x int primary key, data char(20) );
 
Table created.
 
ops$tkyte@ORA9IR2> insert into t select object_id, 'x' from all_objects;
 
29541 rows created.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> delete from plan_table;
 
2 rows deleted.
 
ops$tkyte@ORA9IR2> explain plan for select * from t where x = :x;
 
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
----------------------------------------------------------------------------
| Id  | Operation                   |  Name        | Rows  | Bytes | Cost  |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |     1 |    25 |     2 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T            |     1 |    25 |     2 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C006656  | 29541 |       |     1 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("T"."X"=TO_NUMBER(:Z))
 
Note: cpu costing is off
 
15 rows selected.


2) you must have a very low use system if they are going to put it all on one raid 5 volume and say "no worries".  you must not write very much.

Ask them "how is Oracle multiplexing of redo going to work out here -- what happens <b>when that mount point fails</b> (when, not if)?"  All of the control files gone, all of the online redo logs gone -- should we not partition?  raid 5 is not failsafe, it will fail.  It is highly available, not bullet proof.

3) no no no no no.  not a chance.  you would look at your other optimizer related parameters instead and ask "why did we change them".
 

Pedantic (nuance?) questions...

Mark A. Williams, September 18, 2004 - 6:52 pm UTC

Tom,

Interesting that in 9i the ":x" in the statement becomes ":z" in the plan whereas in 10g it is only ":x". Any reason for this?

Also, why would the explain plan show 29541 rows for the index unique scan step in your sample? Shouldn't the equality predicate return 1 row?

on my system:

SQL> create table t (x number primary key, data char(20));

Table created.

SQL> insert into t select object_id, 'x' from all_objects;

48092 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);

PL/SQL procedure successfully completed.

SQL> delete from plan_table;

0 rows deleted.

SQL> commit;

Commit complete.

SQL> explain plan set statement_id = 'p1' for
  2  select * from t where x = :x;

Explained.

SQL> commit;

Commit complete.

SQL> select * from table(dbms_xplan.display);

[ time column snipped for formatting ]

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 3325272791

--------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    25 |     2   (0)
|   1 |  TABLE ACCESS BY INDEX ROWID| T           |     1 |    25 |     2   (0)
|*  2 |   INDEX UNIQUE SCAN         | SYS_C006496 |     1 |       |     1   (0)
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"=TO_NUMBER(:X))

14 rows selected.

Cheers,

Mark 

Tom Kyte
September 19, 2004 - 10:15 am UTC

As Alberto points out -- I was in fact using 9204 (gotta get the home machine patched :)

that is the :x vs :z


The rows appears to be an issue in the index selectivity computation getting botched. In the 10053 trace from 9204 and 9205 given the same example, we'd see:

9204 ....
IX_SEL: 1.0000e+00 TB_SEL: 1.0000e+00
Access path: index (eq-unique)
Index: SYS_C001705
......


9205 .....
IX_SEL: 3.5815e-05 TB_SEL: 3.5815e-05
Access path: index (eq-unique)
Index: SYS_C002979
TABLE: T
........


well 1 * cardinality = cardinality
3.5e-5 * cardinality = 1.0 (in this example)


so, it was a miscomputation that was corrected in 9205 (hate it when the answer gives rise to more questions! :)
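The cardinality arithmetic behind that fix can be sketched in a few lines. This is illustrative only; the table cardinality below is a hypothetical figure chosen to be consistent with the 9205 IX_SEL value, not a number from the traces:

```python
# Estimated rows for an access path = index selectivity * table cardinality.
# num_rows is a hypothetical cardinality, roughly 1 / 3.5815e-05.
num_rows = 27922

ix_sel_9204 = 1.0          # the botched 9204 selectivity
ix_sel_9205 = 3.5815e-05   # the corrected 9205 selectivity (~1/num_rows)

bad_estimate = round(ix_sel_9204 * num_rows)   # full cardinality
good_estimate = round(ix_sel_9205 * num_rows)  # 1 row, as a unique scan should be
print(bad_estimate, good_estimate)
```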


Alberto Dell'Era, September 19, 2004 - 8:54 am UTC

I bet that Tom is using 9.2.0.4.

On 9.2.0.5:

dellera@ORACLE9I> variable x varchar2(5)
dellera@ORACLE9I> create table t ( x int primary key, data char(20) );

Table created.

dellera@ORACLE9I> insert into t select object_id, 'x' from all_objects;

30303 rows created.

dellera@ORACLE9I> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true);

PL/SQL procedure successfully completed.

dellera@ORACLE9I> delete from plan_table;

40 rows deleted.

dellera@ORACLE9I> explain plan for select * from t where x = :x;

Explained.

dellera@ORACLE9I> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost  |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    25 |     2 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T           |     1 |    25 |     2 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C004597 |     1 |       |     1 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T"."X"=TO_NUMBER(:Z))

Note: cpu costing is off

15 rows selected.

":Z" is still there anyway.

bye
Alberto

table name in trace file

Michael, October 20, 2004 - 8:53 pm UTC

At one time, the trace files actually contained the name of the table:

STAT #28 id=2 cnt=34056 pid=1 pos=1 obj=222 op='TABLE ACCESS FULL DUAL '

Now, it only contains the object id: STAT #1 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL OBJ#(222) '

I am not sure what caused the changes and hoping that you could provide me with some insight.

Thank you.


Tom Kyte
October 20, 2004 - 9:21 pm UTC

if the application closes the cursor nicely, it should be "there"

eg:

ops$tkyte@ORA9IR2> set echo on
ops$tkyte@ORA9IR2> @gettrace
ops$tkyte@ORA9IR2> column trace new_val TRACE
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  2    from v$process a, v$session b, v$parameter c, v$instance d
  3   where a.addr = b.paddr
  4     and b.audsid = userenv('sessionid')
  5     and c.name = 'user_dump_dest'
  6  /
 
TRACE
-------------------------------------------------------------------------------
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1184.trc
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual;
 
D
-
X
 
ops$tkyte@ORA9IR2> disconnect
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

<b>the cursor was closed, but "abruptly"</b>

ops$tkyte@ORA9IR2> grep FULL /home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1184.trc
 
ops$tkyte@ORA9IR2> !grep FULL /home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1184.trc
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL OBJ#<b>(222)</b> (cr=3 r=0 w=0 time=59 us)'
 
ops$tkyte@ORA9IR2> connect
Enter user-name: /
Connected.
ops$tkyte@ORA9IR2> @gettrace
ops$tkyte@ORA9IR2> column trace new_val TRACE
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  2    from v$process a, v$session b, v$parameter c, v$instance d
  3   where a.addr = b.paddr
  4     and b.audsid = userenv('sessionid')
  5     and c.name = 'user_dump_dest'
  6  /
 
TRACE
-------------------------------------------------------------------------------
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1186.trc
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from dual;
 
D
-
X
 
ops$tkyte@ORA9IR2> select 1 from dual;
 
         1
----------
         1
 
ops$tkyte@ORA9IR2> @disconnet
SP2-0310: unable to open file "disconnet.sql"
ops$tkyte@ORA9IR2> disconnect
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
ops$tkyte@ORA9IR2> !grep FULL /home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1186.trc
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=63 us)'
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL OBJ#(222) (cr=3 r=0 w=0 time=62 us)'

<b>Note one with names, one without</b>

Now, this is not perfectly scientific -- there are other cases probably where this can pop up, but this is one I know of... 
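For anyone sifting a large trace file for this, the STAT lines are easy to post-process. A minimal sketch follows (the regex and the parse_stat helper are mine, not part of any Oracle tool); it flags whether the object name survived or only the OBJ#() placeholder was recorded:

```python
import re

# matches the obj= and op='...' fields of a 10046 STAT line
STAT_RE = re.compile(r"obj=(\d+) op='([^']*)'")

def parse_stat(line):
    """Return (object id, operation text, True if the object name survived)."""
    m = STAT_RE.search(line)
    if not m:
        return None
    obj_id, op = int(m.group(1)), m.group(2).strip()
    return obj_id, op, 'OBJ#(' not in op

# one line of each kind, copied from the grep output above
print(parse_stat("STAT #1 id=1 cnt=1 pid=0 pos=1 obj=222 "
                 "op='TABLE ACCESS FULL OBJ#(222) '"))
```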

very useful thread

Sankara, October 21, 2004 - 2:45 pm UTC

select length
from
fet$ where file#=:1 and block#=:2 and ts#=:3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 55 0.00 0.00 0 0 0 0
Execute 55 0.00 0.00 0 0 0 0
Fetch 55 0.00 0.00 0 110 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 165 0.00 0.00 0 110 0 0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 2)
I have been reading through this thread, and in one of the posts there is a recursive SQL statement whose parse count equals the number of executes. I also see this frequently in some of the trace files I look at on our site.
Why is that?

Tom Kyte
October 21, 2004 - 3:53 pm UTC

because recursive sql like that is not cached -- caching it would tend to use up the session's ability to open its own cursors (imagine if we cached 100 recursive sql's and used up your 100 open cursor slots....)

Reference clarification

Ronald, October 27, 2004 - 7:39 am UTC

Tom,
Since you have referenced,

</code> http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96520/tuningpe.htm <code>

above
wish to clarify,

1) "Convert subqueries to joins."
Is this true, and why?

2) "Use the CREATE TABLE ... AS SELECT statement to break a complex operation into smaller pieces. With a large query referencing five or six tables, it may be difficult to determine which part of the query is taking the most time. You can isolate bottlenecks in the query by breaking it into steps and analyzing each step."

I have read through the whole page but cannot find an example of that. Could you please provide a detailed example of the same?

Tom Kyte
October 27, 2004 - 8:04 am UTC

1) the optimizer does that, looks like "legacy" documentation from the days of the RBO.

2) i don't think I'd use CTAS -- that is dubious. I would perhaps break the query into "built upon pieces" (like I do with inline views when building analytics) to see how it is going "at each step" -- being well aware that view merging and stuff will change the aggregate plan.

Re: Reference clarification

Ronald, October 27, 2004 - 1:49 pm UTC

Re 2)
Still confused,
I have your book Expert One-on-One Oracle. Is there an example there, or somewhere else, which shows how to "break a query"?
How do you "see how it is going at each step"?

Tom Kyte
October 27, 2004 - 4:02 pm UTC

they are talking about doing it a piece at a time.

if you have "effective Oracle by design", i have a "concrete" example. there we have two queries.

Each when run by themselves (broken down) goes fast, returns small result sets

Join them together and it is like setting slow=yes_please

I go on to show how to correct an issue like that. but that is what "break the query down" means.


not sure how useful it is.

tkprof output q

sankar, October 29, 2004 - 3:06 pm UTC

declare p varchar2(32767);
begin p := ACCOUNTSPKG.SECUREEXTERNALUSERS(:sn, :on);
:v1 := substr(p,1,4000);
:v2 := substr(p,4001,4000);
:v3 := substr(p,8001,4000);
:v4 := substr(p,12001,4000);
:v5 := substr(p,16001,4000);
:v6 := substr(p,20001,4000);
:v7 := substr(p,24001,4000);
:v8 := substr(p,28001,4000);
:v9 := substr(p,32001,767);
:v10 := substr(p, 4000, 1);
:v11 := substr(p,8000,1);
:v12 := substr(p, 12000, 1);
:v13 := substr(p,16000,1);
:v14 := substr(p, 20000, 1);
:v15 := substr(p,24000,1);
:v16 := substr(p, 28000, 1);
:v17 := substr(p,32000,1);
end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 9928 8.21 7.79 0 0 0 0
Execute 9928 36.70 37.39 0 0 0 9928
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 19856 44.91 45.19 0 0 0 9928

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 248 (recursive depth: 3)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 44 0.01 0.10



Why is this parsing every time i execute? Is it because of all the substr's ?

SELECT sys_context(:b2,:b1)
from
sys.dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 16978 9.66 10.41 0 0 0 0
Fetch 16978 2.92 3.00 0 50951 0 16978
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 33956 12.58 13.42 0 50951 0 16978

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 248 (recursive depth: 4)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 129 0.15 0.63

Curious why the select from dual uses so much CPU?


Tom Kyte
October 29, 2004 - 5:08 pm UTC

where does that plsql come from?  is that recursive sql from FGAC?

 
ops$tkyte@ORA9IR2> select 12.58/16978 from dual;
 
12.58/16978
-----------
 .000740959

seven ten-thousandths of a second per execution -- hey, I'm impressed - aren't you?  do anything really fast *a lot* and it adds up.

 

following up

sankar, October 29, 2004 - 5:18 pm UTC

yes it is from FGAC.

Tom Kyte
October 29, 2004 - 5:52 pm UTC

recursive sql cannot be cached (you would run out of cursors if we did)

count

APL, December 20, 2004 - 4:29 am UTC

SELECT TBL_WBCT_USER_MASTER.USER_ID, TBL_WBCT_USER_GROUP.GROUP_ID,
TBL_WBCT_USER_MASTER.FIRST_NAME, TBL_WBCT_USER_MASTER.LAST_NAME,
TBL_WBCT_USER_MASTER.EMAIL_ID
FROM
TBL_WBCT_USER_MASTER, TBL_WBCT_USER_GROUP WHERE
TBL_WBCT_USER_MASTER.GM_ID = :1 AND TBL_WBCT_USER_MASTER.GM_ID =
TBL_WBCT_USER_GROUP.GM_ID AND TBL_WBCT_USER_MASTER.IS_DELETED = 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 259 0.14 0.12 0 0 0 0
Execute 259 0.14 0.09 0 0 0 0
Fetch 259 0.06 0.06 0 775 0 260
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 777 0.34 0.28 0 775 0 260

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 212

Rows Row Source Operation
------- ---------------------------------------------------
0 NESTED LOOPS
0 TABLE ACCESS BY INDEX ROWID TBL_WBCT_USER_MASTER
0 INDEX UNIQUE SCAN UNIQUE_GMID (object id 243849)
0 INDEX RANGE SCAN UNIQUE_GROUPID_GMID (object id 243847)

this is a Java application, and my concern is why the parse, execute and fetch counts are equal. Where should I look to find the problem, since our query is using bind variables and an index?


Tom Kyte
December 20, 2004 - 8:22 am UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:30433844824883 <code>


because the programmer programmed it that way.

ORA-30076: invalid extract field for extract source

Pravesh Karthik from India, December 31, 2004 - 11:38 am UTC

Tom,

I need your help. Please let me know why this error appears in the tkprof output.


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.11 0.18 0 0 0 0
Execute 1 0.74 0.85 0 0 0 0
Fetch 1 604.43 3856.77 1364375 3240918 0 23657
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 605.28 3857.81 1364375 3240918 0 23657

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 36 (DW) (recursive depth: 1)
error during execute of EXPLAIN PLAN statement
ORA-30076: invalid extract field for extract source

parse error offset: 7272

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1353560 1.07 3276.78
db file scattered read 1034 0.08 2.18
latch free 241 0.01 0.53
buffer busy waits 22894 0.04 63.22


I ran: tkprof tstorax_ora_18826.trc tstorax_ora_18826.txt explain=dw/dwpwd@tstorax sys=no

If I omit explain=dw/dwpwd@tstorax, I do not get the explain plan. The above tkprof output has explain plans for all the tables that are not partitioned. I mean, if there is a partitioned table in the query, it is not giving the explain plan.
Thanks for your consideration.
Pravesh Karthik

Tom Kyte
December 31, 2004 - 12:29 pm UTC

I said DON'T use explain= -- that is not worthwhile.

You want the plan that is in the trace file -- you need to

a) run report with trace
b) make sure report process has logged off database
c) run tkprof tracefilename outputname

that is all -- the plan will be there, for free - and it'll be the PLAN that was in place at the time reports ran the query.

without explain=, I am not getting any execution plan.

Pravesh Karthik from India, January 01, 2005 - 1:52 am UTC

Tom,

If I do not use explain=, I am not getting any execution plan.

Is there any way I can get the explain plan now? The problem is, I cannot run the trace again. It is a quarterly process and I cannot run it again. Please help me get the explain plan.

Sorry for the ignorance.

Pravesh Karthik

Tom Kyte
January 01, 2005 - 11:37 am UTC

then you have not exited the report process, it has not closed its cursors.

stop_trace_in_session -- does it not take care of making the trc file complete?

Pravesh Karthik from India, January 02, 2005 - 11:52 am UTC

Tom,

I basically did this:
1. Asked my developer to open an Oracle session
2. Noted down his sid, serial#
3. EXECUTE dbms_support.start_trace_in_session(15,49,waits=>TRUE, binds=>TRUE);
4. Requested him to issue a SQL command to verify the trace was getting generated; it was. This was just for confirmation.
5. He ran the report process
6. Once the developer told me that the report process was over, I issued the following command:
EXECUTE dbms_support.stop_trace_in_session(15,49);

Does this not take care of making the trc file complete?

One more thing, i noticed

...
...
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 SORT (AGGREGATE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'TB_COM02111_DIM_TIME'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'UX_COM02111_18'
(NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'TB_SNB04450_LKP_WF_REASON_CD'
..
..
..

the rows column was always 0

I wanted wait events, so I used this dbms_support pkg. Please let me know if I missed anything.

Thanks a lot for all your valuable suggestions
Pravesh Karthik

Tom Kyte
January 02, 2005 - 7:46 pm UTC

that is an explain plan (it says Execution Plan, not Row Source Operation)

it was "not enough", the cursors must be CLOSED to get the information written to the trace file.

start report process
turn on tracing
run report
EXIT report process
analyze trace file.

Exiting the session would close the cursor?

Pravesh Karthik from India, January 02, 2005 - 8:53 pm UTC

Tom,

Just wanted your confirmation; I will try as you suggested in a test environment. Now my question is:

"cursors must be CLOSED" -- would exiting the session close the cursors? Or do I need to close the cursors explicitly in the PL/SQL which is running the report process?

Sorry for the ignorance. Thanks again

Pravesh Karthik

Tom Kyte
January 02, 2005 - 10:20 pm UTC

if you exit the report process, all will be closed -- do not ask to "end trace", just start trace, run that to be traced, EXIT PROCESS WHERE TRACING is taking place.



Need basic help on TKPROF

A reader, January 03, 2005 - 7:15 am UTC

Hi Tom,

Can you refer me to a link where I can find basic information about interpreting TKPROF output?

It would be very helpful, as I see your responses giving facts backed by TKPROF...

Thanks Tom....


Tom Kyte
January 03, 2005 - 8:54 am UTC

tkprof output

A reader, January 04, 2005 - 12:33 pm UTC

Hi Tom,

I was reading the beginning of this article, where you showed how to calculate the number of blocks. My tkprof output looks like this:

INSERT into table1
SELECT a.col1, a.col2, b.col3,
:b1, :b2
FROM table2 a, table3 b
where a.col1 = b.col2
and a.col2 = :b2
and a.col3 = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.25 2 44 0 0
Execute 44 784.67 5883.31 2227856 1815267 110 44
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 45 784.70 5883.56 2227858 1815311 110 44

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44 (recursive depth: 1)
********************************************************************************

This table table1 has only 1 row. What can I do to speed this up? When I execute the query alone, without the insert, it comes back in milliseconds. Please help.

Thanks.

Tom Kyte
January 05, 2005 - 8:50 am UTC

compare the plans -- what's different about the insert plan vs. just the select?

tkprof output interpretation

RMM, January 31, 2005 - 12:58 pm UTC

I am afraid this may be incomplete info - if so, please comment as much as you can. I am getting the following for a query:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      2      0.22       0.21          0          1          0          0
Fetch        1      8.58     282.68      43784      53916          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      8.80     282.90      43784      53917          0          1

.....

Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
----------------------------------------   ------------  ----------  ------------
direct path write temp                             7172        0.00          0.10
db file scattered read                              680        0.00          0.11
direct path read temp                              4601        0.00          0.04
db file sequential read                               1        0.00          0.00
********************************************************************************



Please see the above tkprof. The problem is that the elapsed time is much, much higher than the CPU time. (Bind variables are being used - that is not the issue here.)

I think it may be mainly due to the high number of physical I/Os. I was able to resolve the issue by removing a distinct which turned out to be unnecessary. The distinct was causing a sort which I presume was using the temp sort area. But I still have the following questions:

Why does the "Total Waited" not add up anywhere near to the difference between the elapsed time and the CPU time? Or is it not supposed to?

The system is on 10g. Its setting for pga_aggregate_target was 0 and hash_area_size=1048576. From your book, my guess is that we should
1. use pga_aggregate_target since we are on 10g
2. set pga_aggregate_target to a higher value, say 50MB
3. set workarea_size_policy to AUTO.

Many thanx.

PS: In the raw trace (from which I generated the tkprof) I saw LOTS of the following lines (which also seem to indicate that the temp area is not sized properly - or of course that the query is sorting too big a data set):

---
WAIT #20: nam='direct path write temp' ela= 10 p1=201 p2=559104 p3=7
WAIT #20: nam='direct path write temp' ela= 21 p1=201 p2=802057 p3=7
WAIT #20: nam='direct path write temp' ela= 16 p1=201 p2=802078 p3=7
WAIT #20: nam='direct path write temp' ela= 26 p1=201 p2=802085 p3=7
-----------

Tom Kyte
January 31, 2005 - 1:16 pm UTC

was the machine cpu at 100% during this? when elapsed-sum(waits) is much larger than cpu, it typically indicates you are waiting for CPU the remainder of the time.
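The gap Tom is pointing at can be estimated straight from the tkprof figures above; a rough sketch, with the numbers copied from the report:

```python
# totals from the tkprof report above
cpu = 8.80
elapsed = 282.90
waits = {
    'direct path write temp':  0.10,
    'db file scattered read':  0.11,
    'direct path read temp':   0.04,
    'db file sequential read': 0.00,
}

# elapsed time not explained by CPU or any recorded wait event --
# typically time spent queued in the OS run queue waiting for a CPU
unaccounted = elapsed - cpu - sum(waits.values())
print(round(unaccounted, 2))  # ~273.85 seconds
```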

Thanx!

RMM, January 31, 2005 - 1:25 pm UTC

Yes - it is very much possible, since the machine CPU still seems to be very close to 100%. So this may indicate that the machine is short of CPU anyway. Perhaps a good old-fashioned reboot is called for (this is Linux, btw :))




A reader

A, February 04, 2005 - 4:16 am UTC

select a,b,c from d where x=:1

call     count    cpu  elapsed   disk  query  current   rows
Parse      139   0.03     0.03      1      1        0      0
Execute    139   0.03     0.01      0      0        0      0
Fetch      278   0.18     1.24    335   2089        0   1802

Misses in lib cache during parse 1
goal : choose
user id :52

rows row source Operation
10 Table access by index rowid d
10 Index Range scan sys_c003754(object id 35298)

This comes from a GREAT Java program over which our developers have no control (they say it is a third-party program). Here I want to point out that column c is of data type BLOB. Why is it doing 278 fetches, and why is the elapsed time for the fetches so high? Another thing which is puzzling me is how the elapsed time can be less than the CPU time in the case of execute.
I think they are closing the cursor very soon. I can see many of the same SQL statements with bind values (without the BLOB column) in the trc files.
As the Java guys can't do anything, I was just thinking to set
cursor_sharing=force along with session_cached_cursors=100
As always your help will be appreciated.

Tom Kyte
February 04, 2005 - 11:41 am UTC

cursor sharing would not do anything, the cursor is using binds.

they are parsing for every execute, not much we can do about that -- without FIXING the code. no magic (session_cached_cursors might help if not set)

do NOT set cursor_sharing for this.

A reader

A, February 04, 2005 - 4:35 am UTC

For the question above ,we are using oracle 9.2.0.5 on solaris.

A Reader

A, February 04, 2005 - 4:53 pm UTC

Thanks for your reply !!
1) How can the elapsed time be higher than the CPU time in the execute column? When can this happen?

2) Why is the elapsed time so high in the fetch column (1.24)?

3) Why is the fetch count 278?

Appreciate your help.

Tom Kyte
February 05, 2005 - 5:17 am UTC

1) you wait for stuff.   IO for example.  

2) you were waiting for stuff.  IO maybe.  if you have Expert one on one Oracle, see chapter 10.


if you use

alter session set events '10046 trace name context forever, level 12';


you'll get trace and waits and binds in the trace file, in 9ir2, tkprof will summarize the waits right in the report for you.


3) because they called fetch 278 times.


ops$tkyte@ORA9IR2> select 139*2 from dual;
 
     139*2
----------
       278
 
ops$tkyte@ORA9IR2> select 1802/139 from dual;
 
  1802/139
----------
12.9640288


Using those numbers and the fact that the JDBC default prefetch is 10, I can hypothesize this:

a) when they fetch row 1, jdbc is getting 1..10 really (prefetch).  when they get 11, they get 11..20 really.

b) they get about 13 rows per execution.

c) they called the database to fetch 2 times per execution, first fetch gets 1..10, second fetch gets 11..13 and also notified jdbc that "end of data" was hit
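That hypothesis checks out arithmetically. A small sketch with the counts from the tkprof report (the prefetch size of 10 is the JDBC default; everything else is copied from the report):

```python
import math

executions = 139
total_rows = 1802
prefetch = 10  # default JDBC row-prefetch size

rows_per_exec = total_rows / executions      # ~12.96 rows per execution
# rows 1..10 arrive on the first fetch call, 11..13 (plus end-of-data)
# on the second, so each execution needs ceil(rows/prefetch) round trips
fetch_calls = math.ceil(rows_per_exec / prefetch) * executions
print(fetch_calls)  # 278, matching the Fetch count in the report
```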


 

To "A Reader" above

A reader, February 05, 2005 - 8:01 am UTC

You can look at statement caching in JDBC to remove
excessive soft parsing being done

</code> http://download-west.oracle.com/docs/cd/B10501_01/java.920/a96654/toc.htm <code>


Tom Kyte
February 05, 2005 - 8:56 am UTC

<quote>
This comes from GREAT JAVA program for which our developer has no control(they
say it is third party program)
</quote>


not with that constraint placed on them....

true...

Menon, February 05, 2005 - 2:08 pm UTC

Enabling/disabling is done at the connection level... I was assuming the logic of obtaining the connection is not done by the third-party code? Come to think of it, I was assuming (incorrectly, most likely) that the reader's code is invoking the third party from their own code :)... In either case, it should be a trivial change for the third-party guys to make. (Forgot to sign my name in the last post.)

Missing query plan in tkprof output

Eric Peterson, February 17, 2005 - 4:04 pm UTC

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
<<query>>
ALTER SESSION SET EVENTS '10046 trace name context OFF';

unix> tkprof srpt01_ora_5753.trc a.a sys=no

I get the following report. From your _Expert One-on-One Oracle_ book, I should have a query plan, but as you can see below, I don't. Am I setting up my session wrong?

I also tried "ALTER SESSION SET SQL_TRACE=TRUE|FALSE;" and get the same output.

Thanks much
Eric




call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.11 0.13 1 180 3 0
Execute 1 444.24 12398.47 188484 192266 44 22523
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 444.35 12398.60 188485 192446 47 22523

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
control file sequential read 4 0.00 0.01
rdbms ipc reply 8 0.00 0.00
enqueue 1 0.39 0.39
PX Deq: Join ACK 9 0.00 0.00
PX Deq: Parse Reply 9 0.08 0.12
PX Deq: Execute Reply 7513 2.11 11184.01
db file sequential read 21 0.03 0.24
db file scattered read 47119 0.23 747.50
PX Deq Credit: send blkd 249 1.96 41.22
PX Deq Credit: need buffer 52 0.34 2.97
latch free 15 0.00 0.00
PX Deq: Signal ACK 5 0.00 0.00
direct path write 2 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.53 0.53
********************************************************************************


Tom Kyte
February 17, 2005 - 7:32 pm UTC

sqlplus won't close the cursor until you run another one -- and closing is what gets the STAT records written out.

so, either

a) turn on trace
b) run query
c) select * from dual;
d) exit sqlplus

(that is what i like to do, the select * from dual seems to get the object name bumped out in the tkprof report instead of object#'s for the last query)


or:

a) turn on trace
b) run query
c) run select * from all_users where 1=0 ... or something
d) turn off trace
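Tom's point is that the plan in a tkprof report comes from STAT lines that are only emitted into the raw trace when the cursor closes. If the STAT records did make it into the `.trc` file, you can also read them directly; here is a minimal sketch, assuming the 9i-style STAT line layout (`STAT #n id=.. cnt=.. pid=.. pos=.. obj=.. op='..'` -- field order can differ between releases), with fabricated sample trace text for illustration:

```python
import re

# Pull (row count, operation) pairs for one cursor's STAT records out
# of a raw 10046 trace, in the order they appear (plan order).
STAT_RE = re.compile(
    r"STAT #(?P<cur>\d+) id=(?P<id>\d+) cnt=(?P<cnt>\d+) "
    r"pid=(?P<pid>\d+) pos=\d+ obj=\d+ op='(?P<op>[^']*)'"
)

def plan_lines(trace_text, cursor):
    """Return (row_count, operation) tuples for one cursor."""
    plan = []
    for line in trace_text.splitlines():
        m = STAT_RE.match(line.strip())
        if m and int(m.group("cur")) == cursor:
            plan.append((int(m.group("cnt")), m.group("op").rstrip()))
    return plan

# Fabricated trace lines, mimicking the layout described above:
sample = """\
STAT #1 id=1 cnt=241 pid=0 pos=1 obj=0 op='SORT ORDER BY '
STAT #1 id=2 cnt=241 pid=1 pos=1 obj=0 op='FILTER '
STAT #2 id=1 cnt=10 pid=0 pos=1 obj=0 op='TABLE ACCESS FULL T '
"""

for cnt, op in plan_lines(sample, 1):
    print(cnt, op)
```

If a statement shows no plan in tkprof and no STAT lines appear in the raw file either, the cursor simply never closed while tracing was on, and there is nothing to parse.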



Missing query plan in tkprof output - CTAS

Ken Nelson, March 02, 2005 - 5:09 pm UTC

Tom


I am using Oracle 9.2.0.6.0 - 64bit

connecting into sqlplus and setting:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

..then run a package which creates a number of tables using
'create-table-as-select' ...

when the package completed, I exited sqlplus.

I do not get an explain plan in the tkprof report for any of the CTAS statements.

Is there a different setting/method I can use?

As always, thanks for the invaluable help!



Tom Kyte
March 02, 2005 - 5:40 pm UTC

it looks like the trace file does not contain the stat records for the CREATE's, regardless of being in a package or not.


I do not see a way to get the stat records to come out either, no.


I suppose one approach (if you need to tune the SQL for the CTAS) might be:

a) get the CTAS from the tkprof
b) using explain plan, verify that the plan for the CTAS and just the SELECT are the same
c) trace just the SQL query

as long as the source tables are the same -- the amount of work done by the query should be about the same.

alter session ..

reader, March 14, 2005 - 9:22 am UTC

Why does PL/SQL not support using alter session or alter system commands within a procedure? Any reasons?

Tom Kyte
March 14, 2005 - 9:48 am UTC

because alter is not part of the syntax.

execute immediate 'alter ....';

works, as would dbms_sql.



HTML DB

A reader, April 10, 2005 - 8:57 pm UTC

Tom, could you please do me a favor and look at a thread on the HTML DB forum for me?

http://forums.oracle.com/forums/thread.jsp?nav=false&forum=137&thread=298628

It has to do with pagination and why the same query takes longer in HTMLDB than in sqlplus.

Thanks

Tom Kyte
April 11, 2005 - 8:24 am UTC

they seem to be saying "we didn't do that particular pagination query that way", using my technique.

In looking at the tkprof, I see you are doing more PIO, could it be that your sort area size/pga_aggregate_target is too "small"?



Not showing execution plan

Ashiq Shamsudeen A, April 11, 2005 - 8:34 am UTC

Hi tom,

In my trace file I'm not getting the execution plan for some queries, but in the same file I have execution plans for other queries. I traced the session using the 10046 event. The Oracle version is 9.2.0.5.

The query which has execution plan
********************************************************************************

SELECT TO_NUMBER( O.OBJECT_NAME )
FROM
IBO_OBJECT_NUMBER O
WHERE NOT EXISTS ( SELECT TO_NUMBER( SUBSTR ( S.OBJECT_NAME, 1, INSTR( S.OBJECT_NAME, '_' )-1 ))
FROM IBO_SUBRACK S
WHERE S.PARENT_ID = :B1
AND TO_NUMBER( SUBSTR ( S.OBJECT_NAME, 1, INSTR( S.OBJECT_NAME, '_' )-1 ) )= O.OBJECT_NAME )
ORDER BY TO_NUMBER( O.OBJECT_NAME ) ASC


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.31 0.32 0 7667 0 241
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.31 0.32 0 7667 0 241

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 92 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
241 SORT ORDER BY
241 FILTER
251 NESTED LOOPS
251 TABLE ACCESS BY INDEX ROWID OBJECTS
251 INDEX RANGE SCAN I_OBJECTS_FK (object id 33113)
251 INDEX RANGE SCAN CNS_OBL_UNQ (object id 33116)
10 NESTED LOOPS
2465 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
2465 INDEX RANGE SCAN I_OBJECT_LINKS_CB2 (object id 33117)
10 TABLE ACCESS BY INDEX ROWID OBJECTS
2465 INDEX UNIQUE SCAN CNS_OBJ_PRM (object id 43353)

********************************************************************************

and the query which doesn't have execution plan

********************************************************************************

SELECT UNIQUE_ID
FROM CLASSES
WHERE NAME = :b1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 172 0.07 0.07 0 0 0 0
Fetch 172 0.00 0.01 0 344 0 172
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 345 0.07 0.08 0 344 0 172

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 101 (recursive depth: 1)


Is there any reason for this?

Tom Kyte
April 11, 2005 - 9:09 am UTC

the stat records are written out after the cursor is closed.

so, if you start tracing
open cursor1
open cursor2
close cursor1
stop tracing

the trace file will have the stat (plan) for cursor 1 but not cursor 2.

Not showing execution plan

Ashiq Shamsudeen A, April 12, 2005 - 1:38 am UTC

Hi tom,

In my trace file I'm not getting the execution plan for some queries, but in the same file I have execution plans for other queries. I traced the session using the 10046 event. The Oracle version is 9.2.0.5.

The query which has execution plan
********************************************************************************

SELECT TO_NUMBER( O.OBJECT_NAME )
FROM
IBO_OBJECT_NUMBER O
WHERE NOT EXISTS ( SELECT TO_NUMBER( SUBSTR ( S.OBJECT_NAME, 1, INSTR( S.OBJECT_NAME, '_' )-1 ))
FROM IBO_SUBRACK S
WHERE S.PARENT_ID = :B1
AND TO_NUMBER( SUBSTR ( S.OBJECT_NAME, 1, INSTR( S.OBJECT_NAME, '_' )-1 ) )= O.OBJECT_NAME )
ORDER BY TO_NUMBER( O.OBJECT_NAME ) ASC


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.31 0.32 0 7667 0 241
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.31 0.32 0 7667 0 241

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 92 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
241 SORT ORDER BY
241 FILTER
251 NESTED LOOPS
251 TABLE ACCESS BY INDEX ROWID OBJECTS
251 INDEX RANGE SCAN I_OBJECTS_FK (object id 33113)
251 INDEX RANGE SCAN CNS_OBL_UNQ (object id 33116)
10 NESTED LOOPS
2465 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
2465 INDEX RANGE SCAN I_OBJECT_LINKS_CB2 (object id 33117)
10 TABLE ACCESS BY INDEX ROWID OBJECTS
2465 INDEX UNIQUE SCAN CNS_OBJ_PRM (object id 43353)

********************************************************************************

and the query which doesn't have execution plan

********************************************************************************

SELECT UNIQUE_ID
FROM CLASSES
WHERE NAME = :b1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 172 0.07 0.07 0 0 0 0
Fetch 172 0.00 0.01 0 344 0 172
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 345 0.07 0.08 0 344 0 172

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 101 (recursive depth: 1)


Is there any reason for this?

Tom Kyte
April 12, 2005 - 7:08 am UTC

I answered this exact same question yesterday, right above?

create index

A reader, April 14, 2005 - 1:12 pm UTC

Hi,

I have started a create index statement and it has been going on for more than 2 hrs. How do I see why it is taking so long?

Thanks.

Tom Kyte
April 14, 2005 - 1:28 pm UTC

v$session_longops would be useful
v$session_event would be too
v$sesstat can many times provide interesting statistics.

RE: create index

Vinayak, April 14, 2005 - 10:20 pm UTC

also, it'd be helpful to see v$session_wait

traceanalyzer output (more or less the same as tkprof)

deba, May 10, 2005 - 1:00 pm UTC

I have the following tkprof (Trace Analyzer) output:

CURSOR_ID:2 LENGTH:1097 ADDRESS:b75c6c8c HASH_VALUE:1142491738 OPTIMIZER_GOAL:CHOOSE USER_ID:61

select "FACT_LIVE_BREAK_INVENTORY"."DHR_DIM_HALF_HOUR_ID" ,
"FACT_LIVE_BREAK_INVENTORY"."DDE_DIM_DAY_TYPE" ,
"DIM_MACRO_CHANNEL_SCOPE"."DIM_CHANNEL_SC_ID" ,
"FACT_LIVE_BREAK_INVENTORY"."DAE_DIM_AUDIENCE_ID" ,
( "DIM_CALENDAR_PERIOD"."YEAR_MONTH" || '01') ,
(sum(( "FACT_LIVE_BREAK_INVENTORY"."IMPACTS_AVAILABLE" + "FACT_LIVE_BREAK_INVENTORY"."IMPACTS_SOL
D" )) / 10) , NULL
from "DWADMIN"."DIM_CALENDAR_PERIOD" "DIM_CALENDAR_PERIOD",
"DWADMIN"."DIM_MACRO_CHANNEL_SCOPE" "DIM_MACRO_CHANNEL_SCOPE",
"DWADMIN"."FACT_LIVE_BREAK_INVENTORY" "FACT_LIVE_BREAK_INVENTORY"
where ( "FACT_LIVE_BREAK_INVENTORY"."DCE_DIM_CHANNEL_SC_ID" = "DIM_MACRO_CHANNEL_SCOPE"."BASE_CHAN
NEL_SC_ID" )
and ( "DIM_CALENDAR_PERIOD"."DIM_CALENDAR_ID" = "FACT_LIVE_BREAK_INVENTORY"."DCD_DIM_CALENDAR_ID"
) and ( "DIM_CALENDAR_PERIOD"."YEAR_MONTH" >= 200504)
group by "FACT_LIVE_BREAK_INVENTORY"."DHR_DIM_HALF_HOUR_ID" ,
"DIM_MACRO_CHANNEL_SCOPE"."DIM_CHANNEL_SC_ID" ,
"FACT_LIVE_BREAK_INVENTORY"."DAE_DIM_AUDIENCE_ID" ,
"FACT_LIVE_BREAK_INVENTORY"."DDE_DIM_DAY_TYPE" , "DIM_CALENDAR_PERIOD"."YEAR_MONTH"

call count cpu elapsed disk query current rows misses
------- --------- --------- --------- ------------ ------------ ------------ ------------ ---------
Parse 1 0.03 0.03 0 0 0 0 1
Execute 1 0.00 0.00 0 0 0 0 0
Fetch 21483 131.13 188.72 202921 211071 0 322218 0
------- --------- --------- --------- ------------ ------------ ------------ ------------ ---------
total 21485 131.16 188.75 202921 211071 0 322218 1

| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 322218 SORT GROUP BY
| 15785888 .HASH JOIN
| 37 ..TABLE ACCESS FULL DIM_MACRO_CHANNEL_SCOPE (object id 5280141 )
| 4341312 ..HASH JOIN
| 2101 ...TABLE ACCESS FULL DIM_CALENDAR_PERIOD (object id 35967 )
| 23938716 ...TABLE ACCESS FULL FACT_LIVE_BREAK_INVENTORY (object id 5991274 )

Event Times Count Max. Total Blocks
waited on Waited Zero Time Wait Waited Accessed
----------------------------------------------------------------- --------- --------- --------- --------- ---------
db file scattered read (multiblock full scan).................... 7495 0 0.26 70.85 202719
db file sequential read (single block scan)...................... 202 0 0.05 0.26 202
latch free (098)................................................. 3 0 0.02 0.02
SQL*Net message from client (idle)............................... 21483 0 0.01 14.45
SQL*Net message to client (idle)................................. 21483 1193 0.00 0.02
----------------------------------------------------------------- --------- --------- --------- --------- ---------
total............................................................ 50666 1193 0.26 85.60 202921

non-idle waits................................................... 7700 0 0.26 71.13 202921
idle waits....................................................... 42966 1193 0.01 14.47

| parsing recur
| trc_line depth user
| -------- ----- ----
| 26 0 61

Now my questions :
1) What is the total time for this SQL? Is it cpu time + elapsed time? Or is the elapsed time the total time for this SQL, of which 131.16 is cpu?

2) The total time waited on different events for this SQL is 85.60. How does this time relate to the cpu time / elapsed time above? For example, if I say the elapsed time is the total time, of which 131.16 seconds is cpu time, then adding the total waited time (85.60) to the cpu time (131.16) exceeds the elapsed time (188.75). This is not clear, so I want to know the relationship among cpu time, elapsed time and total event wait time.

3) If possible what is the meaning of following section :

| parsing recur
| trc_line depth user
| -------- ----- ----
| 26 0 61

This is very important because I need to understand this Trace Analyzer (or tkprof) file. Please help me.


Tom Kyte
May 10, 2005 - 1:57 pm UTC

1) total run time in the database is just elapsed time, elapsed is inclusive of cpu.

2) the sqlnet message events typically happen before/after the statement (while it is waiting to be told to do something).  It can also happen in the middle as well (between fetches for example)


watch:

ops$tkyte@ORA10G> set serveroutput off
ops$tkyte@ORA10G> set time on
13:46:03 ops$tkyte@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
13:46:03 ops$tkyte@ORA10G> select * from dual d1;
 
D
-
X
 
13:46:03 ops$tkyte@ORA10G> pause
 
13:46:12 ops$tkyte@ORA10G> select * from dual d2;
 
D
-
X
 
13:46:12 ops$tkyte@ORA10G>

So, I hung out for about 9 seconds...

select * from dual d1
                                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1
                                                                                                                           
                                                                                                                           
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        8.37          8.37
********************************************************************************


recursive depth is the level of the sql -- SQL run in plsql for example is at "depth 1" -- the depth 0 sql is the PLSQL call, then it (that plsql) runs SQL (that is depth 1), that sql in turn may cause other SQL to run (update seq$ for example to update a sequence -- that would be depth 2, which in turn might need space so it does more recursive SQL and so on)

the user is the user_id from all_users of the person parsing/executing the sql.


(hint: tkprofs fit nice on this site) 

execution plan in tkprof report

Jianhui, May 10, 2005 - 5:24 pm UTC

Tom,
It's interesting that some of my tkprof reports do NOT have an execution plan (I don't use the command-line explain option); see the example below. For privacy I replaced the actual names with 'test' and the view name with 'test_VIEW'. Is the view the reason tkprof cannot extract an execution plan from the trace file, or is there no execution plan in the trace file at all when views are involved? I did other tests without views and did see execution plans in the report. Any idea?

<<

UPDATE test SET c = (SELECT c FROM test_VIEW
WHERE
test.ID = test_VIEW.ID)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 3254.74 2467.47 10795290 34190425 5137749 2543268
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 3254.74 2467.48 10795290 34190425 5137749 2543268

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 36 (recursive depth: 1)
********************************************************************************

INSERT INTO test SELECT * FROM test_VIEW


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
>>


Tom Kyte
May 10, 2005 - 11:27 pm UTC

that means the cursor was not closed. the STAT records (with the plans) are written when the cursor is CLOSED, not before.

traceanalyzer output (more or less the same as tkprof)

deba, May 11, 2005 - 12:39 am UTC

Hi,

Tom, many thanks for the quick reply and the excellent example, but my second question is still not clear. You gave the example "SQL*Net message from client"; fine, I think that comes under idle waits, not under non-idle waits like db file scattered read. My second question was: what is the relationship among cpu time, elapsed time and total time on waited events? To make it clear, I am pasting the output once again.

call count cpu elapsed disk query current rows miss
------- ----- ------ ------- ------ ------ ------- ------- ----
Parse 1 0.03 0.03 0 0 0 0 1
Execute 1 0.00 0.00 0 0 0 0 0
Fetch 21483 131.13 188.72 202921 211071 0 322218 0
------- ----- ------ ------- ------ ------ ------- ------- ----
total 21485 131.16 188.75 202921 211071 0 322218 1

Event Times Count Max. Total Blocks
waited on Waited Zero Time Wait Waited Accessed
--------------------------------------------- ------ --------- ---- ------ ---------
db file scattered read (multiblock full scan) 7495 0 0.26 70.85 202719
db file sequential read (single block scan).. 202 0 0.05 0.26 202
latch free (098)............................. 3 0 0.02 0.02
SQL*Net message from client (idle)........... 21483 0 0.01 14.45
SQL*Net message to client (idle)............. 21483 1193 0.00 0.02
--------------------------------------------- ------ --------- ---- ------ ---------
total........................................ 50666 1193 0.26 85.60 202921

non-idle waits............................... 7700 0 0.26 71.13 202921
idle waits................................... 42966 1193 0.01 14.47

My question :

1) You are saying that elapsed time is the total time for the SQL. Fine, so can I say that elapsed time = cpu time + db file scattered read + db file sequential read + latch free? But the sum of those non-idle wait events plus cpu time exceeds the elapsed time. How is this possible?

2) I have noticed another thing in this thread. How can elapsed time be lower than cpu time? What is the reason for such behaviour? (Reference: in this thread, dated Oct 10, 2003, from Sam of San Mateo, CA.)

select count(*)
from
all_objects


call count cpu elapsed disk query current rows
------- ---- ---- ------- ---- ----- ------- ----
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3.52 3.43 0 75729 0 1
------- ---- ---- ------- ---- ----- ------- ----
total 4 3.52 3.43 0 75729 0 1

In this case I cannot say the elapsed time is the total time taken for the SQL.

Please help me; this is urgent. Once again, thanks for the reply to my first question.

Deba

Tom Kyte
May 11, 2005 - 7:24 am UTC

do you have access to Effective Oracle by Design, or Expert one on one Oracle? I covered tkprof reading in great detail.

cpu time = cpu used to perform that operation
elapsed time = amount of wall clock time used to perform that operation

IN GENERAL

cpu+wait=elapsed

but it won't always work that way since you don't "wait" for CPU -- so on a cpu bound system you could take 5 seconds of elapsed time to get 1 second of cpu time -- but there would be no waits


and since things are timed at different levels of granularity, you'll never get them to add up exactly (cpu and elapsed are accounted for by the OS differently)

If you want pages of information -- books: I have two, and Cary Millsap has an excellent one.
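The "won't add up exactly" point can be made concrete with the figures from the tkprof earlier in this exchange. A negative unaccounted-for value, as here, is exactly the granularity effect Tom describes: cpu and wall-clock time are sampled by different clocks at different points, so the components need not sum to elapsed.

```python
# Figures (in seconds) from the Fetch-heavy tkprof above:
elapsed = 188.75
cpu = 131.16
non_idle_waits = 71.13  # db file scattered/sequential read + latch free

# "elapsed ~ cpu + non-idle waits" is only an approximation:
unaccounted = elapsed - (cpu + non_idle_waits)
print(round(unaccounted, 2))  # -13.54: the sum overshoots elapsed
```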


open cursor?

Jianhui, May 11, 2005 - 8:03 am UTC

Tom,
I don't quite understand your answer to my question about the missing execution plan in the tkprof report. You said:
<<
Followup:

that means the cursor was not closed. the STAT records (with the plans) are
written when the cursor is CLOSED, not before.
>>

How do I close the implicit cursor? These UPDATE and INSERT statements are in a PL/SQL stored procedure, exactly as they appear in the tkprof report -- how can I control the cursor (an implicit cursor, I believe)?

The procedure looks like
create procedure foo
is
begin
UPDATE ...
INSERT ...
end;
and my sql*plus trace session looks like

alter session set sql_trace=true
exec foo
alter session set sql_trace=false



Tom Kyte
May 11, 2005 - 8:06 am UTC

you exit the application

instead of turning off sql_trace, just exit sqlplus, that'll really close all cursors. otherwise, they are cached and you won't get the stat records.

tkprof output

deba, May 13, 2005 - 12:25 am UTC

I have seen cpu time greater than elapsed time in tkprof / Trace Analyzer reports, where cpu time + wait time does not match the total elapsed time. I know about the rounding-error issue in the 9i tkprof / Trace Analyzer reports.

But your answer "and since things are timed at different levels of granularity, you'll never get
them to add up exactly (cpu and elapsed are accounted for by the OS differently)" - is not clear.

In our case we have 12 CPUs. Say I execute a SQL statement (not parallel). Oracle will not split this SQL (since I am not using the parallel features), but our OS administrator has told us that even though we are not using Oracle parallel query, the OS is still capable of parallelising tasks -- say into 6 pieces, each executed by one of 6 designated CPUs. So if the SQL takes only 4 seconds to complete, then (per your words "cpu and elapsed are accounted for by the OS differently") the cpu time would be 6 seconds whereas the elapsed time would be 4 seconds. Am I right? If so, then on a multi-processor machine the statspack report will not give us correct statistics, especially the 'top 5 wait events' (which are expressed in terms of elapsed time!!!). We cannot say that total elapsed time = (cpu time from 'top 5 wait events' * 100) / % elapsed time for cpu on any multi-processor machine. Am I right?

I would value your explanation here. How can I capture elapsed time and cpu time correctly on a multi-processor machine?

Is it true that even though I am not using Oracle parallel query, the OS still has the ability to parallelise the process? If so, what is the need for Oracle's parallel features?

Tom Kyte
May 13, 2005 - 9:17 am UTC

the "rounding" issue is in all software, all releases, forever.


cpu is measured say in 1/1000000ths of a second and elapsed time in 1/10000ths perhaps (could be at different granularities)

some OS's appear to return both as 1/1000000ths, but they are at different levels

And Oracle is measuring elapsed time (using gettimeofday() for example) but OS measures CPU -- we time things at different levels -- at different points.

Our call to gettimeofday takes CPU even...


You get what you got -- when the numbers are very close, they are in fact "the same". There will always be a small difference between "reality" and "reported". Unless you have a gross mismatch between the two values (1.0 cpu second, 60 seconds elapsed time), consider them "the same".

this has nothing at all to do with the use of parallel features or not.

some clarification

Deba, May 17, 2005 - 4:50 am UTC

My question is still not answered.

Q: Is it true that even though I am not using Oracle parallel query, the OS still has the ability to parallelise the process? If so, what is the need for Oracle's parallel features? What extra do I get when I use Oracle parallel query?

Deba

Tom Kyte
May 17, 2005 - 8:52 am UTC

if you want a single query to run in parallel, that would be parallel query.

otherwise a single process/thread will do the query and do it serially.


see
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c20paral.htm#365
for "will be getting" information.

Oracle Consultant

Dawar Naqvi, May 17, 2005 - 2:26 pm UTC

Hello tom,

I am trying to run the query below to display segments with less than 10% free blocks, but I am getting the error shown.

select owner, table_name,BLOCKS,EMPTY_BLOCKS
from dba_tables
where EMPTY_BLOCKS /(BLOCKS+EMPTY_BLOCKS) > .1;

where EMPTY_BLOCKS /(BLOCKS+EMPTY_BLOCKS) > .1
*

ERROR at line 3:
ORA-01476: divisor is equal to zero

Regards,
Dawar



Tom Kyte
May 17, 2005 - 2:56 pm UTC

decode( (blocks+empty_blocks), 0, to_number(null), empty_blocks/(blocks...) )
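The logic of that DECODE guard, sketched in Python (an illustration, not Oracle code): map a zero denominator to NULL (`None`) so the division is never attempted.

```python
def pct_empty(blocks, empty_blocks):
    """NULL-safe division: return None (the SQL NULL analogue) when the
    denominator would be zero, instead of raising the equivalent of
    ORA-01476 "divisor is equal to zero"."""
    total = blocks + empty_blocks
    if total == 0:
        return None
    return empty_blocks / total

print(pct_empty(90, 10))  # 0.1
print(pct_empty(0, 0))    # None
```

In SQL, rows where the result is NULL simply fail the `> .1` predicate, which is usually the desired behaviour here.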

how to see what type of lock we were waiting on

A reader, May 18, 2005 - 2:33 am UTC

Here is the trace file from Oracle8i Enterprise Edition Release 8.1.7.3.

WAIT #24: nam='enqueue' ela= 336 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 307 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 346 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 307 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 308 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 307 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 308 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 309 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 308 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 308 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 307 p1=-471400442 p2=1572880 p3=338652
WAIT #24: nam='enqueue' ela= 109 p1=-471400442 p2=1572880 p3=338652
EXEC #24:c=1,e=3562,p=0,cr=135,cu=2,mis=0,r=0,dep=1,og=4,tim=119225571

Tried to use enqueue_decode on page 470 of Expert one-on-one:

APP_USER@U001D> select enqueue_decode(-471400442) from dual;
select enqueue_decode(-471400442) from dual
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at "APP_USER.ENQUEUE_DECODE", line 5


APP_USER@U001D> select text from user_source where name=upper('enqueue_decode') and line between 4 and 6 order by line;

TEXT
-----------------------------------------------------------------------------------------------------------------------------------
begin
select chr(bitand(l_p1, -16777216/16777215))||
chr(bitand(l_p1, 16711680)/65535)||' '||

any idea why enqueue_decode doesn't work?

Thanks!

Tom Kyte
May 18, 2005 - 8:58 am UTC

that p1 isn't in a valid format -- please contact support
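For reference, here is the conventional decoding that `enqueue_decode` implements, sketched in Python: the lock name lives in the top two bytes of p1 and the requested mode in the low 16 bits (the sample 'TX' value below is illustrative, not from this trace). Applied to the value in the trace above, it yields non-ASCII "letters", consistent with the p1 not being in a valid format.

```python
def decode_enqueue_p1(p1):
    """Decode an 'enqueue' wait's p1 into (lock type, requested mode).
    8i traces print p1 as a signed 32-bit value, hence the mask."""
    v = p1 & 0xFFFFFFFF
    lock = chr((v >> 24) & 0xFF) + chr((v >> 16) & 0xFF)
    mode = v & 0xFFFF
    return lock, mode

# A well-formed, illustrative value: 'TX' (row lock), mode 6 (exclusive)
print(decode_enqueue_p1(0x54580006))  # ('TX', 6)

# The value from the trace above does not decode to ASCII letters:
lock, mode = decode_enqueue_p1(-471400442)
print(lock.isascii())  # False
```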

bigger difference between c and e

A reader, May 18, 2005 - 3:09 pm UTC

Here is the trace file from Oracle8i Enterprise Edition Release 8.1.7.3.

PARSE #13:c=1,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=0,tim=119192676
BINDS #13:
EXEC #13:c=1,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=119192677
FETCH #13:c=16,e=28,p=0,cr=659,cu=0,mis=0,r=0,dep=1,og=4,tim=119192705
BINDS #13:
EXEC #13:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=119194172
BINDS #13:
EXEC #13:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=119243131
FETCH #13:c=16,e=20,p=0,cr=659,cu=0,mis=0,r=0,dep=1,og=4,tim=119243151
BINDS #13:
EXEC #13:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=119243802
FETCH #13:c=16,e=89,p=0,cr=659,cu=0,mis=0,r=0,dep=1,og=4,tim=119243891

There weren't any wait events emitted before the FETCH calls. Does that mean we ran out of CPU capacity?

Thanks!

Tom Kyte
May 18, 2005 - 4:44 pm UTC

well, assuming you had wait events turned on, it could yes

Thanks a lot, Tom!!!

A reader, May 18, 2005 - 5:36 pm UTC

I used the following statement to enable the trace.

alter session set events '10046 trace name context forever, level 12', so all the wait events in 8.1.7.3 should be dumped ?

This big gap will contribute to a big unaccounted-for figure in the resource profile -- in Cary's terms.

Thank you so much!

about Fetch Count

archana, June 06, 2005 - 2:09 am UTC

Hi Tom,

1.) I am sorry, but I am not able to understand why the Fetch count is more than one even though there is only one statement to execute. Why does it fetch more than once?

2.) Where does the Non-Recursive statement and Recursive statement output come from?

My test case is:

create table test as select * from emp;

insert into test select * from emp;

TKProf output is:

select *
from
test


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.05 1 8 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 28
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.01 0.05 1 17 0 28

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 57



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.05 1 8 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 28
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.01 0.05 1 17 0 28

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0

1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
********************************************************************************
Trace file: test.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
33 lines in trace file.




Tom Kyte
June 06, 2005 - 7:19 am UTC

because client applications ask for N rows at a time. Not ALL rows, N rows.

so, the first fetch does NOT get all 28 records, just some; the client processes them and then asks for the next N rows, and so on.
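The fetch count of 3 for 28 rows is consistent with a SQL*Plus-style pattern: one single-row first fetch, then batches of `arraysize` rows (default 15). That first-row-alone behaviour is an assumption about this particular client; other tools batch differently.

```python
import math

def fetch_calls(total_rows, arraysize=15):
    """Model a SQL*Plus-style fetch pattern: one single-row first
    fetch, then batches of `arraysize` rows."""
    if total_rows <= 0:
        return 1
    return 1 + math.ceil((total_rows - 1) / arraysize)

print(fetch_calls(28))  # 3 -- matching the Fetch count for 28 rows
```

Raising arraysize reduces the number of round trips (and consistent gets) for large result sets.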

thanks tom

archana, June 07, 2005 - 6:24 am UTC


what can cause High execute elapsed time

A reader, October 05, 2005 - 8:17 am UTC

Hi

I have a query which runs very fast; however, it has a very high execute elapsed time. What can it be? (This table has FGAC implemented, but very simple stuff.)

SELECT NVL(MAX(PRCOM),0)
FROM
TART E WHERE E.CODART = :B5 AND E.PROVEE = :B4 AND E.EXPLOT = SUBSTR(:B3 ,
1,5)||:B2 AND E.FECIVL <= :B1 AND FECIVL = (SELECT MAX(FECIVL) FROM TART
T WHERE T.CODART = :B5 AND T.PROVEE = :B4 AND T.EXPLOT =SUBSTR(:B3 ,1,5)
||:B2 AND T.FECIVL <= :B1 ) AND ROWNUM = 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.11 0 3 0 0
Execute 13446 149.70 1009.99 5829 53784 0 0
Fetch 13446 1.26 37.07 2150 66665 0 13446
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 26893 150.98 1047.17 7979 120452 0 13446

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 177 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 7979 1.91 181.23




Tom Kyte
October 05, 2005 - 11:03 am UTC



The FGAC is part of the parse - the execute is where the query is really doing the work.

Each execution of the query took 0.075 seconds (wall clock) and 0.01 cpu seconds.


Now, I look at that query and I pray you are not doing something like:

loop
run that query, get A SINGLE ROW
if return value was zero, then .... we are done, exit
process that row...
end loop


cause if it is, you've done it slow by slow by slow by slow - and that is a bad thing
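If the application really is doing that loop, one hedged way out is to fold the single-row lookup into the driving query itself and process the result set in one statement. A sketch only - DRIVING_TAB and its ID column are invented for illustration:

```sql
-- hypothetical set-based rewrite; DRIVING_TAB and its columns are assumptions
select d.id,
       nvl(max(e.prcom), 0) prcom
  from driving_tab d, tart e
 where e.codart = d.codart
   and e.provee = d.provee
   and e.fecivl = (select max(t.fecivl)
                     from tart t
                    where t.codart = d.codart
                      and t.provee = d.provee
                      and t.fecivl <= d.fecivl)
 group by d.id
```

An outer join would be needed to reproduce the NVL(...,0) result for driving rows with no match in TART - the point is simply one statement instead of 13,446 executions.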


PARSING IN CURSOR #1

V, October 11, 2005 - 5:23 pm UTC

I did a level 12 trace & in the trace file I see all of my SELECTS with the label "PARSING IN CURSOR #1". They aren't incremented. The only cursors that are incremented are the ORACLE system level statements. What could be the cause of this? Also, my STATSPACK 'execute to parse' is very low...(.29).



Tom Kyte
October 12, 2005 - 6:46 am UTC

if you

loop
parse
bind
execute
close
end loop

you would expect to see the same cursor number be used over and over and over.


Only if your application "cached" a cursor would it be different.


The "cause" of this is that your program parses and executes the statement one time before closing it.

Cursor

V, October 12, 2005 - 8:59 am UTC

So if I have an APP that has multiple statements, and after each statement a close is issued, I would expect to see 'PARSING IN CURSOR #1' every time, correct?


Tom Kyte
October 12, 2005 - 1:52 pm UTC

or parsing in cursor #N - yes, some number over and over. it would not be unexpected.

No timed stats in TKPROF output

A reader, December 07, 2005 - 6:18 pm UTC

I used to see the timed statistics in my TKPROF output all the time. An example is:

Rows Row Source Operation
------- ---------------------------------------------------
51 COUNT STOPKEY (cr=25329 r=0 w=0 time=4645700 us)

Suddenly I stopped getting that info in my output. How can I see the timed statistics? (I removed the actual table names from the output.)

My Oracle server had timed_statistics set to true. I tried the query and got the same output. Then I added the following statements to my session before running the query:

Set timing on
Set autotrace traceonly
alter session set timed_statistics=true;
alter session set sql_trace=true;

---------------

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.10 0.12 1 426 0 0
Execute 1 0.02 0.12 8 70 0 0
Fetch 69 71.04 78.18 536 1181284 0 1008
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 71 71.16 78.43 545 1181780 0 1008

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 47

Rows Row Source Operation
------- ---------------------------------------------------
1008 COUNT STOPKEY
1008 HASH JOIN
1694 NESTED LOOPS
1694 NESTED LOOPS
1011 NESTED LOOPS
413 TABLE ACCESS BY INDEX ROWID S_S
418 DOMAIN INDEX S__C
1011 TABLE ACCESS BY INDEX ROWID S_S_R
1011 INDEX RANGE SCAN SE____S_S_I (object id 37104)
1694 TABLE ACCESS BY INDEX ROWID S_C
1694 INDEX RANGE SCAN S____O_ I_X (object id 37192)
1694 INDEX FAST FULL SCAN P_P_PK (object id 36822)
1023 VIEW
1023 SORT UNIQUE
1063 UNION-ALL
40 NESTED LOOPS
40 HASH JOIN
41 INDEX RANGE SCAN T___I_X (object id 37286)
275880 TABLE ACCESS FULL T_T_O
40 TABLE ACCESS BY INDEX ROWID P_P
40 INDEX UNIQUE SCAN P_P_PK (object id 36822)
1023 TABLE ACCESS BY INDEX ROWID P_P
1089 INDEX RANGE SCAN IX_P_P_O _ID (object id 36826)


Tom Kyte
December 08, 2005 - 1:54 am UTC

what is your statistics_level set to?

Here is the level info

A reader, December 08, 2005 - 2:03 pm UTC

Here is the output from V$STATISTICS_LEVEL

STATISTICS_NAME SESSION_STATUS SYSTEM_STATUS ACTIVATION_LEVEL
Buffer Cache Advice ENABLED ENABLED TYPICAL
MTTR Advice ENABLED ENABLED TYPICAL
Timed Statistics ENABLED ENABLED TYPICAL
Timed OS Statistics DISABLED DISABLED ALL
Segment Level Statistics ENABLED ENABLED TYPICAL
PGA Advice ENABLED ENABLED TYPICAL
Plan Execution Statistics DISABLED DISABLED ALL
Shared Pool Advice ENABLED ENABLED TYPICAL


Tom Kyte
December 09, 2005 - 1:16 am UTC

no, the session/system parameter statistics_level:


ops$tkyte@ORA10GR2> show parameter statistics_level

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
statistics_level                     string      TYPICAL


 

More details

A reader, December 08, 2005 - 7:58 pm UTC

I forgot to mention one thing, the target database is a 2 node RAC 9.2.0.4 version.

Session Level

A reader, December 09, 2005 - 3:05 pm UTC

SQL> show parameter statistics_level

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
statistics_level                     string      TYPICAL
SQL>
 

Tom Kyte
December 10, 2005 - 5:01 am UTC

in 9205 I had to set it to ALL to get this (but not in 10gR2, where TYPICAL was sufficient). It was a new 9204 feature - this cr= pw= information in the trace - as I recall.

tkprof output

VA, December 22, 2005 - 4:24 pm UTC

Regarding the following tkprof output, this SELECT is in a PL/SQL function that is embedded in a view. The following tkprof is from a select .. from the view.

SELECT TRUNC(:B3 *1/EXCHANGE_RATE_TO_USD)
FROM
HISTORY_CURRENCY WHERE AS_OF_DT=:B2 AND CURRENCY_CD=:B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 9216 5.28 5.09 0 0 0 0
Fetch 9216 0.17 0.07 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18433 5.45 5.17 0 0 0 0

1. Why is the CPU time more than the elapsed time?
2. Why are there no rows returned, it definitely returns rows, I can see that in the output.
3. Why are the executes the same as fetches? Shouldn't it execute once and fetch many times?
4. Why is it taking so long? There is a composite index on the 2 columns references in the WHERE clause above.

Thanks

Tom Kyte
December 22, 2005 - 5:26 pm UTC

1) because when you measure lots of small things (as we have here), each can have an "incremental error" - if the clock is measuring things at a granularity of 1/1000th of a second, each measurement can be off by +/- 1/1000th (if I take 5/1000 + 999/1,000,000 of a second to do something, I might report that I took 5/1000ths or 6/1000ths of a second).

The problem is further "enlarged" by the fact that in current releases - cpu time is measured typically using one scale (thousandths) while elapsed is using another (millionths) on many platforms - hence one number allows for more "errors" than the other.

For all intents and purposes - I would consider those two numbers above "the same"

Tkprof output

reader, January 04, 2006 - 7:23 am UTC

I was trying to understand the tkprof output --
I have picked the following from the performance tuning Guide (chapter 20-26 ,page 440)
cr is consistent reads, r is records read, w is records written (am I correct?)

At each row source operation there is a cr, r and w with a number?

Do these numbers (cr, r, w) relate to the final logical IO? If yes, how?

Can you help me understand this?
Or is there a place in your books where I can get some more details?

Thanks

-------------------------
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 29.60 60.68 266984 43776 131172 28144
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 29.60 60.68 266984 43776 131172 28144
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 22
Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE (cr=43141 r=266947 w=25854 time=60235565 us)
28144 HASH JOIN ANTI (cr=43057 r=262332 w=25854 time=48830056 us)
51427 TABLE ACCESS FULL STATS$SQLTEXT (cr=3465 r=3463 w=0 time=865083 us)
647529 INDEX FAST FULL SCAN STATS$SQL_SUMMARY_PK
(cr=39592 r=39325 w=0 time=10522877 us) (object id 7409)

Tom Kyte
January 04, 2006 - 10:34 am UTC

cr's are the only thing that relate to logical IO, the other two are physical IO.

and the cr relates "directly" to the query mode gets.


select * from t

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.00 0 5 0 14

Rows Row Source Operation
------- ---------------------------------------------------
14 TABLE ACCESS FULL T (cr=4 pr=0 pw=0 time=63 us)

there we did cr=4 in the row source operation and that maps to the 4 query mode gets during the Fetch phase of the query

********************************************************************************
select * from t for update

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 1 0 0
Execute 1 0.00 0.00 0 3 16 0
Fetch 2 0.00 0.00 0 4 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.00 0 8 16 14

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 86

Rows Row Source Operation
------- ---------------------------------------------------
14 FOR UPDATE (cr=7 pr=0 pw=0 time=386 us)
28 TABLE ACCESS FULL T (cr=7 pr=0 pw=0 time=502 us)

Here we did cr=7 in the row source, which maps directly to the 3+4 during the execute/fetch phase of the query. The parse doesn't count, that was logical IO to do the parse itself


cpu vs elapsed,

A reader, February 08, 2006 - 5:30 pm UTC

I have always seen (well most of the time) the CPU time is less than the elapsed time in the tkprof output.

In one situation, I have a tkprof report that is the other way round.

Is this common?

The tkprof report is pasted below:

UPDATE dm_resource_dim res
SET res.level3 =
(SELECT sss.description
FROM PDB_sss sss
WHERE sss.structure_code =res.level3_code)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 544.10 74.08 0 45852657 2133 16548
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 544.10 74.08 0 45852657 2133 16548

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 733 (DMPVPROD)

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE DM_RESOURCE_DIM (cr=45852657 pr=0 pw=0 time=74082116 us)
16548 TABLE ACCESS FULL DM_RESOURCE_DIM (cr=1887 pr=0 pw=0 time=49685 us)
9831 TABLE ACCESS FULL SSS (cr=45850770 pr=0 pw=0 time=543693622 us)


Rows Execution Plan
------- ---------------------------------------------------
0 UPDATE STATEMENT MODE: ALL_ROWS
0 UPDATE OF 'DM_RESOURCE_DIM'
16548 TABLE ACCESS MODE: ANALYZED (FULL) OF 'DM_RESOURCE_DIM'
(TABLE)
9831 TABLE ACCESS MODE: ANALYZED (FULL) OF 'SSS' (TABLE)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************



Tom Kyte
February 10, 2006 - 8:59 am UTC

one would need to inspect the raw trace file to determine why this is - but looking at the consistent reads, I would say it is not worth our while to do so, since this query obviously needs some help. That is way too much work to update that small number of rows.

and something funny is going on - I would expect current to be much higher (was tracing on WELL BEFORE the update started, I wonder?).


45 million plus logical IOs seems "excessive", doesn't it?
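The correlated subquery re-probes PDB_sss once per DM_RESOURCE_DIM row; with no usable index on structure_code that can mean a full scan per row, which is consistent with the 45 million plus consistent gets above. One hedged alternative is a MERGE, assuming structure_code is unique (or declared unique) in PDB_sss:

```sql
-- sketch only; assumes structure_code is unique in PDB_sss
merge into dm_resource_dim res
using pdb_sss sss
   on (res.level3_code = sss.structure_code)
 when matched then
   update set res.level3 = sss.description;
```

One behavior difference to note: the original UPDATE sets level3 to NULL for rows with no match, while this MERGE leaves them untouched. Simply indexing structure_code might also be enough.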

confused about the tkprof output

A reader, February 14, 2006 - 10:33 am UTC

why is it saying that SYS ran it when I was the one who ran it??

select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare2
from
obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null
and linkname is null and subname is null


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6 0.00 0.00 0 0 0 0
Execute 21 0.00 0.03 0 0 0 0
Fetch 21 0.01 0.08 17 84 0 21
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 48 0.01 0.11 17 84 0 21

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 17 0.01 0.07
********************************************************************************


Tom Kyte
February 14, 2006 - 12:58 pm UTC

because you didn't run it, sys ran it. That is "our sql", recursive SQL, sql we run for you in order to run your sql.

we ran it, not you.
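A hedged way to watch recursive SQL happen: trace the very first parse of a statement against a brand-new object - the dictionary lookups (obj$ and friends) run as SYS at recursive depth 1. DEMO_T is an invented name:

```sql
create table demo_t ( x number );      -- demo_t is an invented demo table
alter session set sql_trace = true;
select * from demo_t;                  -- first hard parse triggers the obj$ lookups
```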

more confusion..

A reader, February 14, 2006 - 10:46 am UTC

Tom,

I just was going through your Expert book reading about sequential reads, but what is not clear is how I can deal with something like this in my tkprof report. Am I reading this right - that it took 33 seconds to do a simple insert?? Please advise.

INSERT INTO CLASS_MESSAGES
VALUES
(:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.03 0.33 36 93 60 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.03 0.33 36 93 60 2

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 18 0.00 0.04
direct path write 6 0.00 0.00
********************************************************************************


Tom Kyte
February 14, 2006 - 1:00 pm UTC

that it took 33/100ths of a second would be more like it.

33.00 - that would be 33 seconds
0.33 - that is 33/100ths of a second.

Execution Plan difference between 8i and 10g

DanielD, February 15, 2006 - 2:48 pm UTC

Tom,

Why is there such difference in execution plans between Oracle8i and Oracle10g?

Involved objects: (10 - oracle10gR2, 8 - Oracle8i 8.1.7.4)

                                            10g     8i
select count(*) from PW_PROJECT_PHASE;     3241   3222
select count(*) from PW_PROJECT_ACTION;    4322   4303
select count(*) from PW_PROJECT;           1001    999
select count(*) from PW_PROJECT_STEP;      5252   5257
select count(*) from PW_L_User_Proj_Perm;  2091   2089
select count(*) from PW_L_Org_Proj_Perm;    120    120

Statistics have been computed on both instances with:
execute dbms_stats.gather_schema_stats(ownname=>'PWADMIN', cascade=>true);

Oracle8i execution plan:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.03 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 8 0.02 0.02 0 1982 0 105
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.05 0.05 0 1982 0 105

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 82 (PWADMIN)

Rows Row Source Operation
------- ---------------------------------------------------
105 NESTED LOOPS
106 NESTED LOOPS
106 NESTED LOOPS
609 VIEW
609 SORT GROUP BY
1810 VIEW
1810 UNION-ALL
2 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE
3 INDEX RANGE SCAN (object id 60548)
1205 TABLE ACCESS BY INDEX ROWID PROJECT_STEP
1206 INDEX RANGE SCAN (object id 61351)
603 NESTED LOOPS
125 VIEW VW_NSO_1
125 SORT UNIQUE
124 UNION-ALL
124 TABLE ACCESS BY INDEX ROWID L_USER_PROJ_PERM
136 INDEX RANGE SCAN (object id 47378)
0 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID L_USER_ORG
2 INDEX RANGE SCAN (object id 47432)
0 TABLE ACCESS BY INDEX ROWID L_ORG_PROJ_PERM
2 INDEX RANGE SCAN (object id 65827)
603 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE
727 INDEX RANGE SCAN (object id 60546)
713 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE
1216 INDEX UNIQUE SCAN (object id 60545)
210 TABLE ACCESS BY INDEX ROWID PROJECT
210 INDEX UNIQUE SCAN (object id 47271)
105 TABLE ACCESS BY INDEX ROWID PROJECT_ACTION
305 INDEX RANGE SCAN (object id 78509)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: FIRST_ROWS
105 NESTED LOOPS
106 NESTED LOOPS
106 NESTED LOOPS
609 VIEW
609 SORT (GROUP BY)
1810 VIEW
1810 UNION-ALL
2 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE'
3 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF3PROJECT_PHASE' (NON-UNIQUE)
1205 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROJECT_STEP'
1206 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF4PROJECT_STEP' (NON-UNIQUE)
603 NESTED LOOPS
125 VIEW OF 'VW_NSO_1'
125 SORT (UNIQUE)
124 UNION-ALL
124 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'L_USER_PROJ_PERM'
136 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF2L_USER_PROJ_PERM' (NON-UNIQUE)
0 NESTED LOOPS
2 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'L_USER_ORG'
2 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF1L_USER_ORG' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'L_ORG_PROJ_PERM'
2 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF2L_ORG_PROJ_PERM' (NON-UNIQUE)
603 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'PROJECT_PHASE'
727 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'XIF1PROJECT_PHASE' (NON-UNIQUE)
713 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE'
1216 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'XPKPROJECT_PHASE' (UNIQUE)
210 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'PROJECT'
210 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'XPKPROJECT'
(UNIQUE)
105 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROJECT_ACTION'
305 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'XIF1PROJECT_ACTION'
(NON-UNIQUE)

************************************************************

Oracle10g execution plan:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.17 0.18 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 8 0.07 0.06 0 206 0 105
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.24 0.24 0 206 0 105

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 114 (PWADMIN)

Rows Row Source Operation
------- ---------------------------------------------------
105 HASH JOIN (cr=206 pr=0 pw=0 time=58089 us)
1001 TABLE ACCESS FULL PROJECT (cr=23 pr=0 pw=0 time=233 us)
105 MERGE JOIN (cr=183 pr=0 pw=0 time=50167 us)
105 MERGE JOIN (cr=152 pr=0 pw=0 time=64390 us)
607 SORT JOIN (cr=99 pr=0 pw=0 time=38769 us)
607 VIEW (cr=99 pr=0 pw=0 time=37211 us)
607 HASH GROUP BY (cr=99 pr=0 pw=0 time=36602 us)
1808 VIEW (cr=99 pr=0 pw=0 time=14567 us)
1808 UNION-ALL (cr=99 pr=0 pw=0 time=12757 us)
2 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE (cr=4 pr=0 pw=0 time=121 us)
2 INDEX RANGE SCAN XIF3PROJECT_PHASE (cr=2 pr=0 pw=0 time=74 us)(object id 57716)
1203 VIEW index$_join$_007 (cr=22 pr=0 pw=0 time=14886 us)
1203 HASH JOIN (cr=22 pr=0 pw=0 time=13678 us)
1203 INDEX RANGE SCAN XIF4PROJECT_STEP (cr=5 pr=0 pw=0 time=7644 us)(object id 57728)
5252 INDEX FAST FULL SCAN XIF8PROJECT_STEP (cr=17 pr=0 pw=0 time=92 us)(object id 57731)
603 HASH JOIN (cr=73 pr=0 pw=0 time=8720 us)
124 VIEW VW_NSO_1 (cr=20 pr=0 pw=0 time=2397 us)
124 HASH UNIQUE (cr=20 pr=0 pw=0 time=2392 us)
124 UNION-ALL (cr=20 pr=0 pw=0 time=6485 us)
124 TABLE ACCESS FULL L_USER_PROJ_PERM (cr=16 pr=0 pw=0 time=6234 us)
0 TABLE ACCESS BY INDEX ROWID L_ORG_PROJ_PERM (cr=4 pr=0 pw=0 time=157 us)
3 NESTED LOOPS (cr=3 pr=0 pw=0 time=178 us)
1 TABLE ACCESS BY INDEX ROWID L_USER_ORG (cr=2 pr=0 pw=0 time=78 us)
1 INDEX RANGE SCAN XIF1L_USER_ORG (cr=1 pr=0 pw=0 time=44 us)(object id 57563)
1 INDEX RANGE SCAN XIF2L_ORG_PROJ_PERM (cr=1 pr=0 pw=0 time=21 us)(object id 57509)
3241 TABLE ACCESS FULL PROJECT_PHASE (cr=53 pr=0 pw=0 time=83 us)
105 SORT JOIN (cr=53 pr=0 pw=0 time=9259 us)
362 TABLE ACCESS FULL PROJECT_PHASE (cr=53 pr=0 pw=0 time=4098 us)
105 SORT JOIN (cr=31 pr=0 pw=0 time=7614 us)
2973 TABLE ACCESS FULL PROJECT_ACTION (cr=31 pr=0 pw=0 time=3048 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: FIRST_ROWS
105 HASH (GROUP BY)
1001 HASH JOIN
105 HASH JOIN
105 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT' (TABLE)
607 HASH JOIN
607 VIEW
607 UNION-ALL
1808 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE' (TABLE)
1808 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF3PROJECT_PHASE' (INDEX)
2 VIEW OF 'index$_join$_007' (VIEW)
2 HASH JOIN
1203 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF4PROJECT_STEP' (INDEX)
1203 INDEX MODE: ANALYZED (FAST FULL SCAN) OF
'XIF8PROJECT_STEP' (INDEX)
1203 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE' (TABLE)
5252 NESTED LOOPS
603 VIEW OF 'VW_NSO_1' (VIEW)
124 HASH (UNIQUE)
124 UNION-ALL
124 TABLE ACCESS MODE: ANALYZED (BY INDEX
ROWID) OF 'L_USER_PROJ_PERM' (TABLE)
124 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF2L_USER_PROJ_PERM' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX
ROWID) OF 'L_ORG_PROJ_PERM' (TABLE)
3 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY
INDEX ROWID) OF 'L_USER_ORG' (TABLE)
1 INDEX MODE: ANALYZED (RANGE SCAN)
OF 'XIF1L_USER_ORG' (INDEX)
1 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF2L_ORG_PROJ_PERM' (INDEX)
3241 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF1PROJECT_PHASE' (INDEX)
105 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT_PHASE'
(TABLE)
362 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT_ACTION'
(TABLE)

************************************************************

The query it self:

SELECT phase.*,
PhaseView.perm,
project.project_name,
action.status_id
FROM PW_PROJECT_PHASE phase,
PW_PROJECT_ACTION action,
PW_PROJECT project,
(SELECT PhaseIds.project_phase_id, MAX(PhaseIds.perm) AS perm FROM
(
(SELECT p.project_phase_id, 2 AS perm FROM PW_PROJECT_PHASE p WHERE p.assigned_to_user_id =51705)
UNION ALL
(SELECT step.project_phase_id, 1 AS perm FROM PW_PROJECT_STEP step WHERE step.assigned_to_user_id =51705)
UNION ALL
(SELECT P.project_phase_id, 3 AS perm FROM PW_PROJECT_PHASE P WHERE P.project_id IN
(

(SELECT project_id FROM PW_L_User_Proj_Perm WHERE pw_user_id=51705 AND delete_flag= '0' AND permission='A')
UNION ALL
(SELECT project_id FROM PW_L_Org_Proj_Perm Lopp, PW_L_User_Org Luo where Lopp.org_unit_id=Luo.org_unit_id AND Luo.delete_flag= '0' AND Lopp.delete_flag= '0' AND Luo.pw_user_id=51705 AND Lopp.permission='A') )

)



) PhaseIds

GROUP BY PhaseIds.project_phase_id) PhaseView
WHERE phase.project_phase_id=PhaseView.project_phase_id
AND phase.project_phase_id=action.project_phase_id
AND phase.project_id=project.project_id
AND (action.delete_flag = '0' AND active_flag = '1' AND action.status_id<>10002)
AND phase.actual_start_datetime <= sysdate AND ((phase.actual_end_datetime is null) OR (phase.actual_end_datetime>=(sysdate-10)))

And my questions:

It takes more cpu time (0.17) in 10g than in 8i (0.03) to parse the query. Is it because the box running 10g is busier?

I see that 8i is not utilizing XIF8PROJECT_STEP index. I've dropped that index from 10g and re-executed the query:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.23 0.24 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 8 0.06 0.05 0 385 0 105
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.29 0.29 0 385 0 105

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 114 (PWADMIN)

Rows Row Source Operation
------- ---------------------------------------------------
105 HASH GROUP BY (cr=385 pr=0 pw=0 time=48340 us)
311 HASH JOIN (cr=385 pr=0 pw=0 time=172014 us)
311 HASH JOIN (cr=354 pr=0 pw=0 time=112046 us)
1002 TABLE ACCESS FULL PROJECT (cr=23 pr=0 pw=0 time=257 us)
311 HASH JOIN (cr=331 pr=0 pw=0 time=103485 us)
1808 VIEW (cr=278 pr=0 pw=0 time=16385 us)
1808 UNION-ALL (cr=278 pr=0 pw=0 time=14573 us)
2 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE (cr=4 pr=0 pw=0 time=126 us)
2 INDEX RANGE SCAN XIF3PROJECT_PHASE (cr=2 pr=0 pw=0 time=76 us)(object id 57716)
1203 TABLE ACCESS BY INDEX ROWID PROJECT_STEP (cr=41 pr=0 pw=0 time=8476 us)
1203 INDEX RANGE SCAN XIF4PROJECT_STEP (cr=5 pr=0 pw=0 time=4851 us)(object id 57728)
603 TABLE ACCESS BY INDEX ROWID PROJECT_PHASE (cr=233 pr=0 pw=0 time=8940 us)
728 NESTED LOOPS (cr=140 pr=0 pw=0 time=756351 us)
124 VIEW VW_NSO_1 (cr=12 pr=0 pw=0 time=6815 us)
124 HASH UNIQUE (cr=12 pr=0 pw=0 time=6564 us)
124 UNION-ALL (cr=12 pr=0 pw=0 time=3531 us)
124 TABLE ACCESS BY INDEX ROWID L_USER_PROJ_PERM (cr=8 pr=0 pw=0 time=3154 us)
135 INDEX RANGE SCAN XIF2L_USER_PROJ_PERM (cr=2 pr=0 pw=0 time=183 us)(object id 57577)
0 TABLE ACCESS BY INDEX ROWID L_ORG_PROJ_PERM (cr=4 pr=0 pw=0 time=151 us)
3 NESTED LOOPS (cr=3 pr=0 pw=0 time=173 us)
1 TABLE ACCESS BY INDEX ROWID L_USER_ORG (cr=2 pr=0 pw=0 time=71 us)
1 INDEX RANGE SCAN XIF1L_USER_ORG (cr=1 pr=0 pw=0 time=36 us)(object id 57563)
1 INDEX RANGE SCAN XIF2L_ORG_PROJ_PERM (cr=1 pr=0 pw=0 time=24 us)(object id 57509)
603 INDEX RANGE SCAN XIF1PROJECT_PHASE (cr=128 pr=0 pw=0 time=1793 us)(object id 57714)
362 TABLE ACCESS FULL PROJECT_PHASE (cr=53 pr=0 pw=0 time=4491 us)
2973 TABLE ACCESS FULL PROJECT_ACTION (cr=31 pr=0 pw=0 time=3059 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: FIRST_ROWS
105 HASH (GROUP BY)
311 HASH JOIN
311 HASH JOIN
1002 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT' (TABLE)
311 HASH JOIN
1808 VIEW
1808 UNION-ALL
2 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE' (TABLE)
2 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF3PROJECT_PHASE' (INDEX)
1203 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'PROJECT_STEP' (TABLE)
1203 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF4PROJECT_STEP' (INDEX)
603 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'PROJECT_PHASE' (TABLE)
728 NESTED LOOPS
124 VIEW OF 'VW_NSO_1' (VIEW)
124 HASH (UNIQUE)
124 UNION-ALL
124 TABLE ACCESS MODE: ANALYZED (BY INDEX
ROWID) OF 'L_USER_PROJ_PERM' (TABLE)
135 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF2L_USER_PROJ_PERM' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX
ROWID) OF 'L_ORG_PROJ_PERM' (TABLE)
3 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY
INDEX ROWID) OF 'L_USER_ORG' (TABLE)
1 INDEX MODE: ANALYZED (RANGE SCAN)
OF 'XIF1L_USER_ORG' (INDEX)
1 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF2L_ORG_PROJ_PERM' (INDEX)
603 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIF1PROJECT_PHASE' (INDEX)
362 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT_PHASE'
(TABLE)
2973 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PROJECT_ACTION'
(TABLE)

************************************************************

What should I do to get the same (or similar) execution plan in 8i and 10g?

8i is running on Sun 280R, 10g is on V440. The 10g instance has larger SGA and it is running in archivelog mode. 8i is in noarchivelog mode.

Thank you very much for your time.

??? NO parse/exec vs. multiple fetches ???

max, March 11, 2006 - 5:36 am UTC

taken from a real life case:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 60 0.02 0.00 0 0 0 59
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 60 0.02 0.00 0 0 0 59

what could have happened here?

Tom Kyte
March 11, 2006 - 3:47 pm UTC

you 

a) parsed the sql
b) opened the sql
c) then enabled trace
d) then fetched from it


ops$tkyte@ORA10GR2> create table t as select * from all_objects where rownum <= 60;

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace package my_pkg
  2  as
  3          cursor c is select * from t;
  4          procedure p1;
  5          procedure p2;
  6  end;
  7  /

Package created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace package body my_pkg
  2  as
  3
  4  procedure p1
  5  is
  6          l_rec c%rowtype;
  7  begin
  8          open c;
  9          fetch c into l_rec;
 10  end;
 11
 12  procedure p2
 13  is
 14          l_rec c%rowtype;
 15  begin
 16          loop
 17                  fetch c into l_rec;
 18                  exit when c%notfound;
 19          end loop;
 20          close c;
 21  end;
 22
 23  end;
 24  /

Package body created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec my_pkg.p1

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA10GR2> exec my_pkg.p2

PL/SQL procedure successfully completed.

<b>tkprof says:

SELECT *
FROM
 T


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch       60      0.00       0.00          0         59          0          59
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       60      0.00       0.00          0         59          0          59


</b> 

Plan Execution statistics

Mathew Butler, March 13, 2006 - 9:35 am UTC

Oracle 9.2.0.5 on RAC.

Above you note that row source operations such as
"(cr=25329 r=0 w=0 time=4645700 us)" can be switched on using

ALTER SYSTEM SET statistics_level=all
/

I understand that the default value for statistics_level is "typical"

As far as I can tell, the "(cr=25329 r=0 w=0 time=4645700 us)" information can be gathered in a trace file by switching on "Plan Execution Statistics". Which according to v$statistics_level may be set at the session level. However I can't find the corresponding parameter that controls this in the server reference.

Can you please let me know how I may control the "(cr=25329 r=0 w=0 time=4645700 us)" output at the session level without switching on timed_os_statistics?

( my workaround is to put timed_os_statistics=0 in the spfile and then set statistics_level to all, but I want to know if there is an easier way to do this )

What is the performance overhead of "Plan Execution Statistics"?

Regards,

Tom Kyte
March 13, 2006 - 10:27 am UTC

alter session set statistics_level=all

(in 10g - this comes on by default at the session level which is nice)

Followup questions

Mathew Butler, March 13, 2006 - 12:21 pm UTC

So, it is not possible to switch on "Plan Execution Statistics" without also switching on timed OS statistics with an alter session?

Have you encountered any measurable overhead in switching on "Plan Execution Statistics"?


Tom Kyte
March 13, 2006 - 9:30 pm UTC

No, never said that - I have not enabled timed_os_statistics at all.

Is there overhead to statistics level = all ? yes.

what could have happened here?

max, March 15, 2006 - 3:01 am UTC

thank you for answering my question about "no parse but exec/fetch"
(i really shouldn't have asked it ...)

but i 've two other cases regarding that certain trace file's content:

1st: number of fetched rows is only half the number of fetches?

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.42 0.43 0 0 0 0
Execute 235 3.48 3.36 0 0 0 0
Fetch 470 64.74 4.56 6 46652 0 235
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 706 68.64 8.36 6 46652 0 235

2nd: multiple parses regardless of bind variables?

SELECT <some_columns>
FROM <complex_view>
WHERE (<column1> = :1 AND <column2> = :2 AND <column3> = :3 AND
((<column4> = :4) OR ((<column4> IS NULL) AND (:5 IS NULL))))
ORDER BY <column5>

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        7      0.06       0.05          0          0          0           0
Execute      7      0.01       0.00          0          0          0           0
Fetch       10     26.02      28.85        360      21227          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       24     26.09      28.92        360      21227          0           3

could you please help me to understand that (too)?

Tom Kyte
March 15, 2006 - 5:02 pm UTC

case 1:

drop table t;
create table t ( x int );
insert into t values ( 1 );
commit;
alter session set sql_trace=true;
declare
    cursor c is select * from t;
    l_rec t%rowtype;
begin
    for i in 1 .. 235
    loop
        open c;
        loop
            fetch c into l_rec;
            exit when c%notfound;
        end loop;
        close c;
    end loop;
end;
/



SELECT *
FROM
T


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute    235      0.00       0.00          0          0          0           0
Fetch      470      0.01       0.01          0       1645          0         235
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      706      0.02       0.02          0       1646          0         235


You fetch to get a row, you fetch again to find "no more rows", hence two fetches.

program should employ ARRAY FETCHING - programmers need to fix this.
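The fetch arithmetic above can be sketched out: single-row fetching pays one extra fetch per execute just to discover "no more rows", while array fetching returns many rows per call. A minimal sketch, where the arraysize of 100 is a hypothetical illustration, not something from the trace:

```python
# Sketch: why single-row fetching doubles the fetch count for a
# one-row result, and how array fetching reduces fetch calls.
import math

def single_row_fetches(executes, rows_per_execute):
    # one fetch per row, plus a final fetch that hits "no more rows"
    return executes * (rows_per_execute + 1)

def array_fetches(executes, rows_per_execute, arraysize):
    # roughly ceil(rows/arraysize) calls per execute; a short final
    # batch signals end-of-data in the same call
    return executes * max(1, math.ceil(rows_per_execute / arraysize))

print(single_row_fetches(235, 1))    # -> 470, matching the tkprof above
print(array_fetches(235, 1, 100))    # -> 235, half the fetch calls
```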


case 2:

programmers again - we parse every time they ask us to. They asked us to parse 7 times and we obliged them. Only the coders can fix this.

CLARIFICATION

Alex, March 17, 2006 - 11:59 am UTC

Robert Ware wrote: “Apparently tkprof does not distinguish between user and system recursive sql”.
Can you give more information about how I can get "user and system recursive sql" from a RAW trace file?
Thanks.


Tom Kyte
March 17, 2006 - 5:54 pm UTC

by looking at the parsing user id perhaps.  Oracle recursive sql will be parsed as SYS.

run something like this:

drop sequence s;

create sequence s NOCACHE;

create or replace procedure p
as
    l_n number;
begin
    for i in 1 .. 10
    loop
        select s.nextval into l_n from dual;
    end loop;
end;
/

alter system set sql_trace=true;
exec p


the select s.nextval is "user recursive sql";
the fact that s is nocache will cause an update to seq$ every time - that'll be SYS recursive SQL.

tkprof (which can actually filter out SYS sql) would show:

SELECT S.NEXTVAL FROM DUAL

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     10      0.00       0.00          0          0          0           0
Fetch       10      0.00       0.00          0          0         10          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       21      0.00       0.01          0          0         10          10

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 146  (OPS$TKYTE)   (recursive depth: 1)
********************************************************************************
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.00       0.00          0          0          0           0
Execute     10      0.00       0.00          0         10         20          10
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       20      0.00       0.00          0         10         20          10

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)
 

User and system recursive sql

Alex, March 19, 2006 - 8:54 am UTC

Thanks a lot for your answer,

1. So from RAW trace file I look at dep>0 and:
system recursive sql: uid=0
user recursive sql: uid>0

Am I right?

2. Can I get this kind of information at the DB level (not the session level) this way:

system recursive sql:
select sum(gets) from v$rowcache;

user recursive sql:
select value - sr.gets
  from v$sysstat,
       (select sum(gets) gets from v$rowcache) sr
 where name in ('recursive calls');

Thanks a lot.


Tom Kyte
March 19, 2006 - 2:10 pm UTC

1) yes.

2) why v$rowcache?
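The dep/uid classification Alex describes can be sketched as a small parser over raw 10046 trace "PARSING IN CURSOR" lines (the sample lines below are abbreviated, hypothetical examples of the trace format):

```python
# Sketch: classify statements in a raw trace -- dep=0 is top-level SQL,
# dep>0 with uid>0 is "user recursive" (e.g. SQL inside PL/SQL), and
# dep>0 with uid=0 is SYS recursive SQL.
import re

LINE = re.compile(r'dep=(\d+).*?uid=(\d+)')

def classify(parsing_in_cursor_line):
    m = LINE.search(parsing_in_cursor_line)
    dep, uid = int(m.group(1)), int(m.group(2))
    if dep == 0:
        return 'top-level'
    return 'sys recursive' if uid == 0 else 'user recursive'

print(classify('PARSING IN CURSOR #1 len=26 dep=1 uid=146 oct=3'))   # user recursive
print(classify('PARSING IN CURSOR #2 len=120 dep=2 uid=0 oct=6'))    # sys recursive
```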

User and system recursive sql: why v$rowcache?

Alex, March 23, 2006 - 8:15 am UTC

Sorry, I can't make a test case to prove or disprove my suggestion about v$rowcache - it was my mistake to bring it up.

Errata Update

Gerald Asp, March 25, 2006 - 6:16 pm UTC

Sorry, but I could not find a web site to post an "errata" for Effective Oracle By Design.

On page 89 there is an explain plan for
select * from t t2 where object_id > 32000

Lower on the page is an example for populating the plan table with a query from the V$sql_plan table. The example continues on page 90, but with the select statement
select * from t t1 where object_id > 32000

I looked at the errata page from Oracle press, but didn't see this listed. Was this a mistake?

thanks!

Tom Kyte
March 25, 2006 - 8:05 pm UTC

oraclepress.com would be the place to submit errata...

correct - that should be T2 in both places.

recursive statement in tkprof,

A reader, March 29, 2006 - 5:48 pm UTC

What is the difference between RECURSIVE STATEMENTS and NON-RECURSIVE STATEMENTS in the tkprof output?

Which one should we focus on to reduce the elapsed time?


Tom Kyte
March 29, 2006 - 7:37 pm UTC

recursive sql is sql executed to execute some other SQL.


some recursive sql will be "SYS recursive sql" - sql Oracle does in order to process your sql - like security checks, quotas, parsing and such.

some recursive sql will be YOUR sql - like sql executed by plsql in a stored procedure, that is recursive as well.


since you wait for both recursive and non-recursive to complete before getting back from your sql, I would concentrate on both

tkprof output and "time" element

EBer, April 07, 2006 - 12:55 pm UTC

Hi Tom.

I noticed that in the real plan, shown in trace files, the "time=" field of a single step is sometimes higher than the total elapsed time of the statement.

For instance, the elapsed time of my statement is about 2.5 seconds, but there is an INDEX RANGE SCAN with time=12787442,
and the elapsed time next to the "Fetch" count is > 6 seconds.

Is that possible?
Can you explain why?

I paste below the text for a more precise comparison.
Thank you in advance.
EBer
--------

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        4      3.65       4.37          0     724976          0          38
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      3.67       4.38          0     724976          0          38

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 315

Rows Row Source Operation
------- ---------------------------------------------------
38 SORT UNIQUE (cr=724976 pr=0 pw=0 time=4370438 us)
38 CONCATENATION (cr=724976 pr=0 pw=0 time=6623428 us)
38 INDEX RANGE SCAN IDX_WLI_LOCKUSERGUID_WLIGUID (cr=724973 pr=0 pw=0 time=6623272 us)(object id 155786)
76 NESTED LOOPS (cr=1449780 pr=0 pw=0 time=8572688 us)
222846 TABLE ACCESS FULL WLGT (cr=8262 pr=0 pw=0 time=545444 us)
76 INLIST ITERATOR (cr=1441518 pr=0 pw=0 time=7858086 us)
76 INDEX RANGE SCAN XPK_WORKLIST (cr=1441518 pr=0 pw=0 time=6087442 us)(object id 151539)
0 INDEX UNIQUE SCAN XPK_EXCLUSION (cr=76 pr=0 pw=0 time=2172 us)(object id 151499)
0 FILTER (cr=3 pr=0 pw=0 time=34 us)
0 INDEX RANGE SCAN IDX_WLI_LOCKUSERGUID_WLIGUID (cr=3 pr=0 pw=0 time=29 us)(object id 155786)
76 NESTED LOOPS (cr=1449780 pr=0 pw=0 time=8572688 us)
222846 TABLE ACCESS FULL WLGT (cr=8262 pr=0 pw=0 time=545444 us)
76 INLIST ITERATOR (cr=1441518 pr=0 pw=0 time=7858086 us)
76 INDEX RANGE SCAN XPK_WORKLIST (cr=1441518 pr=0 pw=0 time=6087442 us)(object id 151539)
0 INDEX UNIQUE SCAN XPK_EXCLUSION (cr=76 pr=0 pw=0 time=2172 us)(object id 151499)






Tom Kyte
April 08, 2006 - 9:13 am UTC

... For instance, the elapsed time of my statement is about 2.5 seconds, but there is an
INDEX RANGE SCAN with time=12787442
and the elapsed time next to the "Fetch" count is > 6 seconds. ...

does not compute. How could the fetch time (part of the elapsed time of a statement) be more than 6 seconds but the total elapsed time is only 2.5 seconds???


Your example above shows time=4370438 us = 4370438 / 1000000 seconds ~ 4.37 seconds

and that is what the tkprof shows - so your example is not demonstrating any problem that I can see?
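The unit conversion Tom does above can be sketched as a small helper that pulls the statistics out of a row source line and converts the microsecond time to seconds:

```python
# Sketch: extract cr/pr/pw/time from a row source operation line and
# convert time= (microseconds) to seconds, as done above.
import re

STATS = re.compile(r'cr=(\d+) pr=(\d+) pw=(\d+) time=(\d+) us')

def row_source_stats(line):
    cr, pr, pw, time_us = map(int, STATS.search(line).groups())
    return {'cr': cr, 'pr': pr, 'pw': pw, 'seconds': time_us / 1_000_000}

line = '38 SORT UNIQUE (cr=724976 pr=0 pw=0 time=4370438 us)'
print(row_source_stats(line)['seconds'])  # -> 4.370438, ~ the 4.37s Fetch elapsed
```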

Trace and tkprof output

EBer, April 10, 2006 - 4:28 am UTC

It's probably a fault of mine...
I computed the average execution time from v$sql and it was about 2 seconds, so I thought that time was wrong (too high)... but it's surely right.

Could you please explain why some partial time is bigger than the total elapsed? i.e. the "Concatenation" time and the "Index Range Scan" time.

Thank you ! I appreciate your work and your advice.
EBer

Tom Kyte
April 10, 2006 - 5:57 am UTC

that would appear to be incremental timing errors creeping in. Counting lots of little tiny events allows for an error to be observed on each event. Counting a single event does not have that margin of error.


outer_t1 := gettime;
for i in 1 .. 1000000
loop
    inner_t1 := gettime;
    do_something;
    inner_total := inner_total + (gettime - inner_t1);
end loop;
outer_total := gettime - outer_t1;


inner_total and outer_total may diverge by a bit
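Tom's pseudocode can be turned into a toy model: give the clock a finite tick, and make reading the clock itself cost a little time, and the sum of the tiny per-event deltas drifts away from the single outer measurement. All numbers here are made up (microsecond units, 1 ms tick):

```python
# Toy model of incremental timing error: many coarse per-event readings
# accumulate error that one outer reading does not.
def simulate(durations_us, tick_us, read_cost_us):
    t = 0                       # the "true" time, invisible to the program
    def read_clock():
        nonlocal t
        t += read_cost_us       # taking a clock reading is not free
        return (t // tick_us) * tick_us   # coarse, truncated reading

    outer_start = read_clock()
    inner_total = 0
    for d in durations_us:
        start = read_clock()
        t += d                  # the event itself
        inner_total += read_clock() - start
    outer_total = read_clock() - outer_start
    return inner_total, outer_total

inner, outer = simulate([500] * 10, tick_us=1000, read_cost_us=300)
print(inner, outer)  # -> 8000 11000: the totals disagree though they timed the same work
```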



tkprof output and "time" element

EBer, April 10, 2006 - 9:01 am UTC

Thank you.
Diverge a bit?
It's 12 seconds versus 4 seconds!!!
:)

Bye Tom

Tom Kyte
April 11, 2006 - 10:06 am UTC

where do you see 12? I saw 6ish?

tkprof output

EBer, April 24, 2006 - 6:58 am UTC

Hi Tom.
I know I'm late. I've been away for a while...

In my previous message I was referencing another execution of the same statement (12 seconds were in the trace file) sorry...it was my fault ;)

My doubt is still there: why 6 seconds?
Total elapsed time is about 4 seconds, right?
I can't figure it out.

Thank you for your patience!
EBer


why is it getting re-parsed ?

SVS, May 19, 2006 - 1:49 pm UTC

I am using Forms 9i and Oracle 10g and I have a form which populates the block based on the POST_QUERY trigger below.

POST_QUERY
----------
Declare
t_sum number;
begin
da.dbk_jc_time_phased_budget.p_jctime_phased_budget
(:budg_b2.jcat_comp_code , -- p_comp_code IN VARCHAR2,
:budg_b2.jcat_job_code , -- p_job_code IN VARCHAR2,
:budg_b2.jcat_phs_code , -- p_phs_code IN VARCHAR2,
:budg_b2.jcat_code , -- p_cat_code IN VARCHAR2,
:budg_b2.jcat_budmeth_code, -- p_method_code in varchar2,
t_sum , -- p_sum in out number,
:budg_b2.c_has_time_phased_budget_row, --p_row in out varchar2,
:budg_b2.c_has_time_phased_budget -- p_has in out varchar2
);



This calls a DB procedure which is as below


dbk_jc_time_phased_budget.p_jctime_phased_budget
------------------------------------------------
PROCEDURE p_jctime_phased_budget (p_comp_code IN VARCHAR2,
p_job_code IN VARCHAR2,
p_phs_code IN VARCHAR2,
p_cat_code IN VARCHAR2,
p_method_code in VARCHAR2,
p_sum in out number,
p_row in out VARCHAR2,
p_has in out VARCHAR2
)
IS
BEGIN

p_sum:=0;
p_row:='N';
p_has:='N';

for i in
(SELECT Sum(decode(jctpd.jctpd_is_budgeted_flag,k_yes
, Decode( p_method_code,k_budget_method_amount,jctpd.jctpd_budg_amt,jctpd.jctpd_budg_unit)
)
) v_sum,
sum(1) v_row,
sum(decode(jctpd.jctpd_is_budgeted_flag,k_yes,1)) v_has
FROM da.jctime_phased_budget jctpd
WHERE jctpd.jctpd_comp_code = p_comp_code
AND jctpd.jctpd_job_code = p_job_code
AND jctpd.jctpd_phs_code = p_phs_code
AND jctpd.jctpd_cat_code = p_cat_code
) loop
p_sum:=nvl(i.v_sum,0);
if nvl(i.v_row,0)>0 then p_row:='Y'; else p_row:='N'; end if;
if nvl(i.v_has,0)>0 then p_has:='Y'; else p_has:='N'; end if;
-- exit loop
exit;
end loop;
END p_jctime_phased_budget;



This is the TKPROF output that I got from my DBA, which clearly indicates that re-parsing is taking place.

TKPROF output
-------------

********************************************************************************
SELECT NVL(COUNT(*), 0)
FROM
DA.JCTIME_PHASED_BUDGET JCTPD WHERE JCTPD.JCTPD_COMP_CODE = :B5 AND
JCTPD.JCTPD_JOB_CODE = :B4 AND JCTPD.JCTPD_PHS_CODE = :B3 AND
JCTPD.JCTPD_CAT_CODE = :B2 AND JCTPD.JCTPD_IS_BUDGETED_FLAG = :B1 AND
ROWNUM <= 1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      932      0.09       0.08          0          0          0           0
Execute    932      0.11       0.11          0          0          0           0
Fetch      932      0.03       0.04          0        932          0         932
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2796      0.23       0.24          0        932          0         932

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=1 pr=0 pw=0 time=76 us)
0 COUNT STOPKEY (cr=1 pr=0 pw=0 time=49 us)
0 TABLE ACCESS BY INDEX ROWID JCTIME_PHASED_BUDGET (cr=1 pr=0 pw=0 time=42 us)
0 INDEX RANGE SCAN JCTPD_COMP_JOB_PHS_CAT_END_UK (cr=1 pr=0 pw=0 time=36 us)(object id 70703)

********************************************************************************

SELECT NVL(COUNT(*), 0)
FROM
DA.JCTIME_PHASED_BUDGET JCTPD WHERE JCTPD.JCTPD_COMP_CODE = :B4 AND
JCTPD.JCTPD_JOB_CODE = :B3 AND JCTPD.JCTPD_PHS_CODE = :B2 AND
JCTPD.JCTPD_CAT_CODE = :B1 AND ROWNUM <= 1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      932      0.02       0.07          0          0          0           0
Execute    932      0.11       0.08          0          0          0           0
Fetch      932      0.09       0.04          0        932          0         932
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2796      0.22       0.21          0        932          0         932

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=1 pr=0 pw=0 time=68 us)
0 COUNT STOPKEY (cr=1 pr=0 pw=0 time=46 us)
0 INDEX RANGE SCAN JCTPD_COMP_JOB_PHS_CAT_END_UK (cr=1 pr=0 pw=0 time=40 us)(object id 70703)

********************************************************************************

I know it's not the right thing, but I cannot understand why re-parsing took place in spite of Oracle Forms using bind variables by default; also, I could not figure out anything wrong with the way the code has been written.

I would appreciate it if I could get your feedback, with more insights and whether I am missing anything.

Thanks

Tom Kyte
May 20, 2006 - 4:25 pm UTC

funny - i don't see either of those queries in the example???

for example, the first time we see rownum is in the tkprof.

So, what was the meaning of the lead-in to the tkprof portion?? They appear entirely unrelated.

cpu vs ela

Prasad, May 24, 2006 - 12:46 pm UTC

Hello Tom,

Thanks for answering my previous questions. In the previous tkprof output, cpu took 0.22 seconds and elapsed is 0.21 seconds. I think ela is the total time taken, right? If that is correct, how come ela took less time than cpu? cpu should be less than ela, right? Please throw some light on this.

With Regards
Prasad.


Tom Kyte
May 25, 2006 - 1:05 pm UTC

<quote src=Effective Oracle by Design>
A question that frequently comes up with regards to TKPROF and the report is this – how could output such as the following be produced:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  14755     12.77      12.60          4      29511     856828       14755
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    14756     12.77      12.60          4      29511     856828       14755

How can CPU time be larger than elapsed time? This discrepancy is due to the way timings are collected – and attempts to time very fast operations or lots of operations. For example – suppose you were using a stopwatch that measured only down to the second. You timed 50 events. Each event seemed to have taken 2 seconds according to the stop watch. That means you had 100 seconds of time elapsed during these events, right? Well, probably not. Suppose each event really took 2.99 seconds – you really had almost 150 seconds of time there.
Taking it a step further – suppose the stopwatch was continuously running. So, when the event started, you would look at the stopwatch and then when the event finished, you would look again and subtract the two numbers. This is closer to what happens with timing on a computer – you look at the system “watch”, do something and look again – the delta represents the timing of the event. Now, we’ll perform the same timing test as above. Again, each event appears to have taken 2 seconds – but they may have taken just 1.01! How so? Well, when the event started, the clock was really at time 2.99 – but you only saw “2” (the granularity of the stopwatch). When the event completed, the stopwatch reported 4 (and the real time was 4.00). The delta as you see it – 2, the “real” delta – 1.01.

Now, over time these discrepancies can accumulate in the aggregate total. The rule of averages would have them effectively cancel each other out more or less but over time, a small error can creep in. That is the cause of the discrepancy above – where the elapsed time is less than the CPU time. At the lowest level, Oracle is gathering statistics for timing at either the millisecond or microsecond level. And, further, it may time some events using one clock and other events using a different one – this is unavoidable as the timing information is gathered from the operating system, using its API’s. In this example, we executed a statement 14,755 times – meaning the average CPU time to execute that statement was 0.00865469 seconds. If we were to run this test over and over – we would find that the timings for CPU and elapsed are more or less “the same”. In general, this error is not so large as to send you looking down the wrong path, but it can be confusing the first time you see it.
</quote>
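As an illustrative model only (this is not Oracle's actual accounting), suppose each of the 14,755 executes were charged its CPU rounded up to the timer's tick; the summed CPU then overshoots a precisely measured total, just as the quote's stopwatch does. The 0.85 ms per-call figure is hypothetical, chosen to be near the example's average:

```python
# Toy model: per-call CPU rounded up to the timer tick accumulates
# into a reported CPU total larger than the true total.
import math

def summed_cpu_seconds(calls, true_cpu_us_per_call, tick_us):
    # each call is charged a whole number of ticks, rounded up
    per_call = math.ceil(true_cpu_us_per_call / tick_us) * tick_us
    return calls * per_call / 1_000_000

true_total = 14755 * 850 / 1_000_000            # ~12.54 s of real CPU
reported = summed_cpu_seconds(14755, 850, tick_us=1000)
print(round(true_total, 2), round(reported, 2))  # 12.54 vs 14.76
```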

tkprof output

Reader, June 07, 2006 - 1:09 pm UTC

I can't thank you enough for all your help. Would you please throw some light on the following output? Which area needs to be looked at in order to address the following issue:
(1)
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00      44.20          0          0          0           0
Execute     12      0.00      32.92          0          0          0           0
Fetch       25  92800.00  125081.01       8720     285744          0          13
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       38  92800.00  125158.13       8720     285744          0          13

(2)

SELECT Name_id
FROM my_table
WHERE item_flag = '0'
AND type_id = :1
AND lang = :2
AND type_ref = :3

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      118    100.00     110.73          0          0          0           0
Execute    118    100.00      82.94          0          0          0           0
Fetch      118    300.00     176.31          7        491          0          15
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      354    500.00     369.98          7        491          0          15

The Execute to Parse ratio is 1:1, which I know from your book is not good. Please advise.

Thanks.


Tom Kyte
June 07, 2006 - 3:31 pm UTC

likely you need to use the 9i or 10g tkprof against your 9i or 10g trace files

because it looks a lot like you used the 8i version, which is expecting the numbers to be of a different magnitude than 9i/10g produce :)
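A sketch of why the magnitudes come out so strangely, assuming the usual explanation that pre-9i traces recorded times in centiseconds while 9i/10g record microseconds: an 8i tkprof run over a 9i trace inflates every timing by a factor of 10,000, which would turn the odd numbers above into plausible ones.

```python
# Sketch: rescale timings reported by an 8i tkprof that misread a
# 9i/10g (microsecond) trace as centiseconds -- a 10,000x error.
def rescale(seconds_reported_by_8i_tkprof):
    return seconds_reported_by_8i_tkprof / 10_000

print(rescale(92800.00))  # -> 9.28, a believable Fetch cpu
print(rescale(500.00))    # -> 0.05, a believable total cpu for the second query
```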



What can we infer from this report?

Srinivas Narashimalu, June 27, 2006 - 11:34 am UTC

Tom,

The following are the details of the environment -
Type- Warehousing
Version - 9.2.0.7
O.S - AIX.

Following is the tkprof output -

SELECT pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state AS location,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location, -- to be added to dealer_dim
'',
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
-- ordr.order_type,
-- pri.priority_code,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0) ,0) ,0), 0) AS pdc_stock,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_stock_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit_prev
FROM priority_dim pri,
product_group_dim pgrp,
order_type_dim ordr,
organization_dim org,
dealer_dim dlr,
region_district_dim regdist,
price_vendor_dim pvdr,
part_dim pdim,
supplier_dim supp,
part_order_line_fact pfact
WHERE regdist.type(+) = 'PARTS'
-- AND dlr.dealer_number = 'A300'
AND regdist.region_district_key IN
(SELECT MIN(region_district_key)
FROM region_district_dim
WHERE type = 'PARTS'
AND dealer_key = dlr.dealer_key)
AND (pfact.invoice_date_key BETWEEN TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyy')- 1||'0101')
AND TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyymmdd')))
AND (pfact.invoice_date_key NOT BETWEEN TO_NUMBER(TO_CHAR(ADD_MONTHS(SYSDATE-1, -12), 'yyyymmdd'))
AND TO_CHAR(TO_CHAR(SYSDATE-1, 'YYYY')- 1)||'1231')
--w/ Canada dealers
-- AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y'))
--w/o Canada dealers
AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y','N'))
AND NVL(gl_code, 'MISC') NOT LIKE 'DDSS%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMX%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMU%'
AND pdim.part_ind = 'Y'
AND dlr.dealer_key = regdist.dealer_key(+)
AND dlr.dealer_key = pfact.dealer_key
-- AND regdist.region_district_key (+) = pfact.region_district_key -- for region/district history only
AND org.organization_key = pfact.organization_key
AND pgrp.product_group = pvdr.product_group
-- AND cr.credit_reason_key = pfact.credit_reason_key -- used for Volvo sales data only
-- AND dship.direct_ship_program_key(+) = pfact.direct_ship_program_key
AND ordr.order_type_key = pfact.order_type_key
AND pri.priority_key = pfact.priority_key
AND pdim.part_key = pfact.shipped_part_key
AND pdim.organization_key = pfact.organization_key
AND pvdr.price_vendor_key = pfact.price_vendor_key
AND supp.supplier_key = pfact.supplier_key
GROUP BY pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location,
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
ordr.order_type,
pri.priority_code

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.46       0.46          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    15647    371.92     743.32     178205   27696589         11      234681
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    15649    372.38     743.79     178205   27696589         11      234681

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
234681 SORT GROUP BY
2110388 FILTER
2110388 FILTER
2110388 NESTED LOOPS OUTER
2110388 NESTED LOOPS
2110400 NESTED LOOPS
2110400 NESTED LOOPS
2110412 NESTED LOOPS
2110412 NESTED LOOPS
2113800 NESTED LOOPS
2113800 NESTED LOOPS
2113815 NESTED LOOPS
5646 TABLE ACCESS FULL OBJ#(8207)
2113815 TABLE ACCESS BY GLOBAL INDEX ROWID OBJ#(8079) PARTITION: ROW LOCATION ROW LOCATION
4075238 INDEX RANGE SCAN OBJ#(9914) (object id 9914)
2113800 TABLE ACCESS BY INDEX ROWID OBJ#(8257)
2113800 INDEX UNIQUE SCAN OBJ#(11384) (object id 11384)
2113800 TABLE ACCESS BY INDEX ROWID OBJ#(8263)
2113800 INDEX UNIQUE SCAN OBJ#(11385) (object id 11385)
2110412 TABLE ACCESS BY INDEX ROWID OBJ#(8279)
2110412 INDEX UNIQUE SCAN OBJ#(11390) (object id 11390)
2110412 TABLE ACCESS BY INDEX ROWID OBJ#(8233)
2110412 INDEX UNIQUE SCAN OBJ#(11174) (object id 11174)
2110400 TABLE ACCESS BY INDEX ROWID OBJ#(8266)
2110400 INDEX RANGE SCAN OBJ#(1749830) (object id 1749830)
2110400 TABLE ACCESS BY INDEX ROWID OBJ#(8245)
2110400 INDEX UNIQUE SCAN OBJ#(11176) (object id 11176)
2110388 TABLE ACCESS BY INDEX ROWID OBJ#(8249)
2110400 INDEX UNIQUE SCAN OBJ#(11357) (object id 11357)
2110388 TABLE ACCESS BY INDEX ROWID OBJ#(8270)
2110388 INDEX RANGE SCAN OBJ#(8274) (object id 8274)
326 SORT AGGREGATE
326 TABLE ACCESS BY INDEX ROWID OBJ#(8270)
326 INDEX RANGE SCAN OBJ#(8274) (object id 8274)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.46       0.46          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch    15647    371.92     743.32     178205   27696589         11      234681
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    15650    372.38     743.79     178205   27696589         11      234681

Misses in library cache during parse: 1
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 1

2 user SQL statements in session.
1 internal SQL statements in session.
3 SQL statements in session.
********************************************************************************
Trace file: dvwhseud_ora_4972626.trc
Trace file compatibility: 9.02.00
Sort options: default

1 session in tracefile.
2 user SQL statements in trace file.
1 internal SQL statements in trace file.
3 SQL statements in trace file.
3 unique SQL statements in trace file.
15871 lines in trace file.

Based on the ratio of distinct values to total rows, I created a few bitmap indexes (which were previously btree) and the following is the output I got (not including the sql as it's the same) -

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    16083    306.51     359.02     114731   30975535         14      241216
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    16085    306.51     359.05     114731   30975535         14      241216

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
241216 SORT GROUP BY
2257492 FILTER
2257492 FILTER
2257492 HASH JOIN OUTER
2257492 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263650 NESTED LOOPS
2263676 NESTED LOOPS
2563004 NESTED LOOPS
2563050 PARTITION RANGE ALL PARTITION: 1 10
2563050 PARTITION LIST ALL PARTITION: 1 1
2563050 TABLE ACCESS BY LOCAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: 1 10
2730512 BITMAP CONVERSION TO ROWIDS
175 BITMAP INDEX RANGE SCAN INVDATEKEY_BMNDX PARTITION: 1 10 (object id 1775932)
2563004 TABLE ACCESS BY INDEX ROWID PART_DIM
2563050 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
2263676 TABLE ACCESS BY INDEX ROWID DEALER_DIM
2563004 INDEX UNIQUE SCAN DEALER_DIM_PK (object id 1758999)
2263650 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
2263650 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
2263638 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
2263638 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
2263638 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
2263638 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
2263638 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
2263638 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
2263638 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
2263638 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
2257492 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
2257492 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
3709 TABLE ACCESS FULL REGION_DISTRICT_DIM
340 SORT AGGREGATE
340 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
340 BITMAP CONVERSION TO ROWIDS
340 BITMAP INDEX SINGLE VALUE REGION_DIST_DIM_IDX_TYPEDLR (object id 1776017)

********************************************************************************

I changed "in" to "=" in the 3rd condition in the where clause and this is what I got -

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.73       0.73          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    16083    309.87     350.49     118347   30977381         18      241216
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    16085    310.60     351.22     118347   30977381         18      241216

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

The explain plan is the same.

1. Can you tell me which one of the three is efficient (and also why)? After creating the bitmap indexes the disk reads are lower
by 60,000, but the logical reads are higher by 3M, so what would be a good trade-off factor (here it's almost 50 times) to be sure of? And could this be written better anyway?

2. This select statement feeds an insert - shall I do it in sql (as per your advice to innumerable people), or would it be better to do a bulk insert using parallelism?

3. I have also read Jonathan Lewis' article (in DB Journal) about bitmap indexes not providing any better performance than a btree index unless used in the most appropriate scenario. Oracle recommends using a bitmap index if there are 10,000 distinct values in a million rows - what is your recommendation on using bitmap indexes?

4. Would you recommend setting the optimizer to all_rows at the session level (in a warehouse) or at the instance level, since throughput is more important in such applications?
And is there any other parameter you recommend for warehouses?

5. And it would be great if you could explain star_transformation_enabled (I could not fully understand it from Jonathan's example).

We would be glad if you could please write a book on warehouse tuning, as OLTP is what everyone primarily concentrates on.

Thank you much!
Srinivas


Tom Kyte
June 27, 2006 - 2:31 pm UTC

you got lucky and the second two didn't do as much physical IO. reintroduce the physical IO's likely done by the first and you'd probably find the first is "most efficient"

why do we have different row counts if the sql is the same?

fact table is the largest with 4 million records

Srinivas Narashimalu, June 27, 2006 - 12:01 pm UTC

Tom,

I forgot to add that the fact table is the largest, with 4 million records. This is our dev environment; production has 40 million records in the fact table.

Thanks,
Srinivas

Can you please explain

Srinivas Narashimalu, June 27, 2006 - 3:04 pm UTC

Tom,

Can you please elaborate on this -

reintroduce the physical IO's likely done by the first and you'd probably find the first is "most efficient".

So, you are telling me the first one is the best here, and the first report was taken while the indexes were b-tree. Also the elapsed time is 743.79 for the first one while it is around 351.22 for the other two (using bitmap indexes).

I am just printing how the values are distributed in some of the columns -

SQL> select count(*),count(distinct(DEALER_KEY)) from PART_ORDER_LINE_FACT;

COUNT(*) COUNT(DISTINCT(DEALER_KEY))
---------- ---------------------------
   4878461                         479

Created a bitmap index for this, it was btree before.
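A sketch of what such a swap would look like; the index names here are hypothetical (the thread does not give the real ones), and LOCAL is assumed because the fact table is partitioned:

```sql
-- Hypothetical index names - the thread does not give the real ones.
DROP INDEX polf_dealer_idx;

CREATE BITMAP INDEX polf_dealer_bmx
    ON part_order_line_fact (dealer_key)
    LOCAL               -- bitmap indexes on partitioned tables must be local
    COMPUTE STATISTICS;
```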

SQL> select count(*),count(distinct(INVOICE_DATE_KEY)) from PART_ORDER_LINE_FACT;

  COUNT(*) COUNT(DISTINCT(INVOICE_DATE_KEY))
---------- ---------------------------------
   4878461                               312

SQL> select count(*),count(distinct(ORGANIZATION_KEY)) from PART_ORDER_LINE_FACT;

  COUNT(*) COUNT(DISTINCT(ORGANIZATION_KEY))
---------- ---------------------------------
   4878461                                 2

Same here.

SQL> select count(*),count(distinct(REGION_CODE)) from REGION_DISTRICT_DIM;

  COUNT(*) COUNT(DISTINCT(REGION_CODE))
---------- ----------------------------
     12875                           63

Same here.

And I have dropped the b-tree index and created a bitmap index like this for around 23 other indexes.

Can you please tell me why you are saying the first one is better in spite of the differences in elapsed time and physical reads?

Also, why am I not getting these details at each step in the explain plan even though I am using 9.2.0.7?

And is this the best performance (around 6 mins) we can get for just 4 million records in the fact table and a 200,000-record report?

The rows returned are different because I ran the first query a week ago, and some new data has come in in the meantime.

Thanks much!
Srinivas 

Tom Kyte
June 27, 2006 - 3:14 pm UTC

the first query did lots of physical IO that the second two did not. that is what I meant.

If they all did the same physical IO....

Need your insight!

Srinivas Narashimalu, June 27, 2006 - 3:24 pm UTC

I hope I'm not bothering you...

But can you please tell me whether my basis for dropping the b-tree indexes and creating bitmaps is correct, given the column values?

And if the second query is not doing as much physical IO, then the 2nd/3rd must obviously be better (I mean more efficient) than the first one, right? (Here all 3 queries are the same - only the first one ran with b-tree indexes and the 2nd and 3rd ran with bitmaps.)

And isn't 6 mins too long to generate a report of 200,000 rows? Can I try replacing DECODE with a CASE statement?

Is there any init.ora parameter you would recommend specifically for a warehouse?

Any ideas on writing a book on data warehouse tuning?

Thanks,
Srinivas
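On the DECODE-vs-CASE question above: the nested DECODE for pdc_stock in the posted query can be written as a searched CASE. A sketch, using the same columns; the outer NVL becomes unnecessary because the ELSE branch already supplies the 0:

```sql
-- Sketch: CASE equivalent of the nested DECODE used for pdc_stock.
CASE
    WHEN invoice_year      = TO_CHAR(SYSDATE - 1, 'YYYY')
     AND ordr.order_type   = 'PDC'
     AND pri.priority_code = 'S'
    THEN SUM(pfact.local_ext_sales_price)
    ELSE 0
END AS pdc_stock
```

One caveat: DECODE treats two NULLs as equal while the CASE comparison with = does not, so the two forms can differ if invoice_year is ever NULL. Either way, CASE is unlikely to change the run time here - the cost is in the joins, not the expression evaluation.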

Tom Kyte
June 27, 2006 - 4:52 pm UTC

or the second query benefited from the fact the first query did all of the heavy lifting.


6 minutes might be

o very long
o extremely FAST
o just ok

for 200,000 rows. "it depends"


In looking at this, I'm wondering why indexes are being used at all. Not knowing the true sizes of underlying objects and such - it is not really possible to say.
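One way to test that hunch would be a hinted run that full scans the fact table and hash joins it. This is a sketch only - FULL and USE_HASH are standard Oracle hints, the aliases are from the posted query, and whether it wins depends on the real object sizes:

```sql
-- Sketch: same query as posted, but forcing a full scan of the
-- 4M-row fact table and a hash join to the dealer dimension.
SELECT /*+ FULL(pfact) USE_HASH(dlr) */
       pfact.invoice_year
       -- ... remainder of the select list, FROM, WHERE and
       -- GROUP BY exactly as in the posted query ...
```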

The queries were run on different days

Srinivas Narashimalu, June 27, 2006 - 4:25 pm UTC

Tom,

The 1st query (the query with b-tree indexes) was run a week before the 2nd and 3rd queries, so I think the probability of the second query finding data in memory, and thereby lowering its physical reads count, is very, very remote.

Thanks,


Tom Kyte
June 27, 2006 - 4:55 pm UTC

then you need to run all three again don't you? data is totally different, circumstances different.

Srinivas, June 27, 2006 - 4:29 pm UTC

Tom,

The 1st query (the query with b-tree indexes) was run a week before the 2nd and 3rd queries, so I think the probability of the second query finding data in memory, and thereby lowering its physical reads count, is very, very remote.

And is my basis for replacing the b-tree indexes with bitmap indexes correct?

Thanks,
Srinivas

Thanks for the response!

Srinivas Narashimalu, June 27, 2006 - 4:59 pm UTC


Reran the queries one after another, flushing the shared pool

Srinivas Narashimalu, June 28, 2006 - 4:32 pm UTC

Hi Tom,

I reran the queries today one after another; I flushed the shared pool after the 1st run.

Following are the query and the output -

1st - Btree indexes -

SELECT pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state AS location,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location, -- to be added to dealer_dim
'',
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
-- ordr.order_type,
-- pri.priority_code,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0) ,0) ,0), 0) AS pdc_stock,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_stock_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit_prev
FROM priority_dim pri,
product_group_dim pgrp,
order_type_dim ordr,
organization_dim org,
dealer_dim dlr,
region_district_dim regdist,
price_vendor_dim pvdr,
part_dim pdim,
supplier_dim supp,
part_order_line_fact pfact
WHERE regdist.type(+) = 'PARTS'
-- AND dlr.dealer_number = 'A300'
AND regdist.region_district_key IN
(SELECT MIN(region_district_key)
FROM region_district_dim
WHERE type = 'PARTS'
AND dealer_key = dlr.dealer_key)
AND (pfact.invoice_date_key BETWEEN TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyy')- 1||'0101')
AND TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyymmdd')))
AND (pfact.invoice_date_key NOT BETWEEN TO_NUMBER(TO_CHAR(ADD_MONTHS(SYSDATE-1, -12), 'yyyymmdd'))
AND TO_CHAR(TO_CHAR(SYSDATE-1, 'YYYY')- 1)||'1231')
--w/ Canada dealers
-- AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y'))
--w/o Canada dealers
AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y','N'))
AND NVL(gl_code, 'MISC') NOT LIKE 'DDSS%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMX%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMU%'
AND pdim.part_ind = 'Y'
AND dlr.dealer_key = regdist.dealer_key(+)
AND dlr.dealer_key = pfact.dealer_key
-- AND regdist.region_district_key (+) = pfact.region_district_key -- for region/district history only
AND org.organization_key = pfact.organization_key
AND pgrp.product_group = pvdr.product_group
-- AND cr.credit_reason_key = pfact.credit_reason_key -- used for Volvo sales data only
-- AND dship.direct_ship_program_key(+) = pfact.direct_ship_program_key
AND ordr.order_type_key = pfact.order_type_key
AND pri.priority_key = pfact.priority_key
AND pdim.part_key = pfact.shipped_part_key
AND pdim.organization_key = pfact.organization_key
AND pvdr.price_vendor_key = pfact.price_vendor_key
AND supp.supplier_key = pfact.supplier_key
GROUP BY pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location,
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
ordr.order_type,
pri.priority_code

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.13 0.13 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16083 362.25 375.69 154607 29611828 11 241222
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16085 362.38 375.82 154607 29611828 11 241222

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
241222 SORT GROUP BY
2257523 FILTER
2257523 FILTER
2257523 NESTED LOOPS OUTER
2257523 NESTED LOOPS
2257535 NESTED LOOPS
2257535 NESTED LOOPS
2257547 NESTED LOOPS
2257547 NESTED LOOPS
2263693 NESTED LOOPS
2263693 NESTED LOOPS
2263719 NESTED LOOPS
5646 TABLE ACCESS FULL DEALER_DIM
2263719 TABLE ACCESS BY GLOBAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: ROW LOCATION ROW LOCATION
4075238 INDEX RANGE SCAN POLF_DLR_BMI (object id 1776193)
2263693 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
2263693 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
2263693 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
2263693 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
2257547 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
2257547 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
2257547 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
2257547 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
2257535 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
2257535 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
2257535 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
2257535 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
2257523 TABLE ACCESS BY INDEX ROWID PART_DIM
2257535 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
2257523 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
2257523 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1776197)
328 SORT AGGREGATE
328 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
328 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1776197)

********************************************************************************


2nd - Bitmap Indexes (only output as it's the same sql)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.82 0.88 0 163 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16083 308.08 386.94 117155 30977381 18 241216
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16085 308.90 387.83 117155 30977544 18 241222

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
241216 SORT GROUP BY
2257492 FILTER
2257492 FILTER
2257492 HASH JOIN OUTER
2257492 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263638 NESTED LOOPS
2263650 NESTED LOOPS
2263676 NESTED LOOPS
2563004 NESTED LOOPS
2563050 PARTITION RANGE ALL PARTITION: 1 10
2563050 PARTITION LIST ALL PARTITION: 1 1
2563050 TABLE ACCESS BY LOCAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: 1 10
2730512 BITMAP CONVERSION TO ROWIDS
175 BITMAP INDEX RANGE SCAN INVDATEKEY_BMNDX PARTITION: 1 10 (object id 1775932)
2563004 TABLE ACCESS BY INDEX ROWID PART_DIM
2563050 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
2263676 TABLE ACCESS BY INDEX ROWID DEALER_DIM
2563004 INDEX UNIQUE SCAN DEALER_DIM_PK (object id 1758999)
2263650 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
2263650 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
2263638 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
2263638 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
2263638 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
2263638 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
2263638 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
2263638 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
2263638 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
2263638 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
2257492 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
2257492 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
3709 TABLE ACCESS FULL REGION_DISTRICT_DIM
1263 SORT AGGREGATE
1263 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
1263 BITMAP CONVERSION TO ROWIDS
1263 BITMAP INDEX SINGLE VALUE REGION_DIST_DIM_IDX_TYPEDLR (object id 1776017)

********************************************************************************

Now can you tell which index is better? Also, you said in your review that you wonder why an index is used at all... can you please elaborate on that?

If bitmap is better because of the physical reads, can you tell me whether this query can be written in any better way?

Shall I feed this output to an insert, or shall I try to use bulk operations?

Thanks much for all the help!

Srinivas



Tom Kyte
June 28, 2006 - 5:03 pm UTC

why would you flush the shared pool?

anyway...


looking at the number of rows retrieved, I'm thinking the indexes could be "not the right approach at all"

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.13 0.13 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16083 362.25 375.69 154607 29611828 11 241222
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16085 362.38 375.82 154607 29611828 11 241222



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.82 0.88 0 163 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16083 308.08 386.94 117155 30977381 18 241216
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16085 308.90 387.83 117155 30977544 18 241222

still don't know why they return different numbers of rows...

It looks like a virtual "tie", doesn't it?

Subquery effects

Jonathan Lewis, June 28, 2006 - 7:21 pm UTC

Coming into this one a bit late - and without looking very closely at the SQL - but I think you have a correlated subquery that is not being unnested and is therefore being used as a filter.

Since filter subqueries always operate late in the query execution you have a load of joins of 2.5M rows across many tables - then finally filter down to 250,000 on the subquery. It looks like you need to use the PUSH_SUBQ hint in the main select to make that subquery operate at the earliest possible moment to minimise the subsequent cost of joins.

The btree/bitmap discussion seems to be irrelevant - the benefit of bitmaps comes from combining several, and you seem to be using just one to get into a table.
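A sketch of Jonathan's suggestion (in this release, PUSH_SUBQ placed in the main query block applies to the non-merged subqueries; everything else in the query stays as posted):

```sql
SELECT /*+ PUSH_SUBQ */
       pfact.invoice_year
       -- ... select list as posted ...
  FROM priority_dim pri
       -- ... remaining tables as posted ...
 WHERE regdist.type(+) = 'PARTS'
   AND regdist.region_district_key IN
       (SELECT MIN(region_district_key)     -- this filter subquery should
          FROM region_district_dim          -- now be evaluated before the
         WHERE type = 'PARTS'               -- joins rather than after them
           AND dealer_key = dlr.dealer_key)
   -- ... remaining predicates and GROUP BY as posted ...
```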



Reset HWM

Jdam, June 30, 2006 - 8:45 am UTC

Tom,

If I have an execution plan with no full scan, and then I alter table t move, rebuild the index, and gather the statistics, could this
change my plan to a full scan?

Tom Kyte
June 30, 2006 - 12:42 pm UTC

absolutely.


ops$tkyte@ORA10GR2> create table t
  2  as
  3  select *
  4    from big_table.big_table;
 
Table created.
 
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> delete from t where owner <> 'SCOTT';
 
999880 rows deleted.
 
ops$tkyte@ORA10GR2> create index t_idx on t(owner);
 
Index created.
 
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from t where owner = 'SCOTT';
 
Execution Plan
----------------------------------------------------------
Plan hash value: 470836197
 
-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |   120 | 12600 |    32   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |   120 | 12600 |    32   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |   120 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("OWNER"='SCOTT')
 
ops$tkyte@ORA10GR2> set autotrace off
ops$tkyte@ORA10GR2> alter table t move;
 
Table altered.
 
ops$tkyte@ORA10GR2> alter index t_idx rebuild;
 
Index altered.
 
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from t where owner = 'SCOTT';
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   120 | 12600 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |   120 | 12600 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("OWNER"='SCOTT')
 
ops$tkyte@ORA10GR2> set autotrace off
 

Thanks Jonathan and Tom!

Srinivas Narashimalu, June 30, 2006 - 2:25 pm UTC

Thanks very much for your input Jonathan!

I will try it out next week and will update the result here. By the way, I tried searching for warehouse tuning articles by you/Tom, but I could only find a few (3 to be precise) articles by you in Database Journal. Do you have any anywhere else? If yes, can you give the link please?

Thanks for the example, Tom, but how do I make the index be used after moving the table, without any hints?

Thanks,
Srinivas

Tom Kyte
June 30, 2006 - 4:23 pm UTC

The index would only be used if the index would be USEFUL.

In this example I cooked up, anything other than a full scan would be *wrong*.

indexes <> (fast = true)

full scans <> (evil)



Thanks!

Srinivas Narashimalu, June 30, 2006 - 4:30 pm UTC

Thanks much!

-Srinivas

Latest tkprof reports

Srinivas Narashimalu, July 10, 2006 - 1:55 pm UTC

Hi Tom,

I have run the following query 4 times, bouncing the database each time.

The 1st time it was run with b-tree indexes and no hint, and the 3rd time with bitmap indexes and no hint.
The 2nd time it was run with b-tree indexes and the PUSH_SUBQ hint, as per Jonathan Lewis's suggestion.
The 4th time it was run with bitmap indexes and the PUSH_SUBQ hint, as per Jonathan Lewis's suggestion.

You guys were CORRECT as USUAL - b-tree vs. bitmap didn't make much difference. But I have 10 tables involved in this query with many "AND" conditions, and the columns have low cardinality (relative to the total number of rows in the table), so why don't you (and even Jonathan) recommend bitmaps here? As per Oracle, if a column has 10,000 distinct values in a table of 1 million rows then it is suitable for a bitmap index; my idea of converting the b-trees to bitmaps was based on that.

The reason I want this to go as fast as it can is that there are around 40 million records in production, and this report is taking around 45 mins now. Each day it takes longer and longer as the records increase. If this is the best this query can do, then is there any other solution by which I can get the report in a shorter time, or any way I can keep the report generation time constant irrespective of the number of records?

SELECT pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state AS location,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location, -- to be added to dealer_dim
'',
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
-- ordr.order_type,
-- pri.priority_code,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0) ,0) ,0), 0) AS pdc_stock,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY'),
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'S',
SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_stock_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'PDC',
DECODE(pri.priority_code,'E',

SUM(pfact.local_ext_sales_price), 0), 0), 0), 0) AS pdc_vor_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'DSP',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS vdsp_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
DECODE(ordr.order_type,'SD',
SUM(pfact.local_ext_sales_price), 0), 0), 0) AS ship_direct_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_ext_sales_price), 0), 0) AS gross_sales_prev,
NVL(DECODE(invoice_year, TO_CHAR(SYSDATE - 1,'YYYY') - 1,
SUM(pfact.local_gross_profit), 0), 0) AS gross_profit_prev
FROM priority_dim pri,
product_group_dim pgrp,
order_type_dim ordr,
organization_dim org,
dealer_dim dlr,
region_district_dim regdist,
price_vendor_dim pvdr,
part_dim pdim,
supplier_dim supp,
part_order_line_fact pfact
WHERE regdist.type(+) = 'PARTS'
-- AND dlr.dealer_number = 'A300'
AND regdist.region_district_key IN
(SELECT MIN(region_district_key)
FROM region_district_dim
WHERE type = 'PARTS'
AND dealer_key = dlr.dealer_key)
AND (pfact.invoice_date_key BETWEEN TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyy')- 1||'0101')
AND TO_NUMBER(TO_CHAR(SYSDATE-1, 'yyyymmdd')))
AND (pfact.invoice_date_key NOT BETWEEN TO_NUMBER(TO_CHAR(ADD_MONTHS(SYSDATE-1, -12), 'yyyymmdd'))
AND TO_CHAR(TO_CHAR(SYSDATE-1, 'YYYY')- 1)||'1231')
--w/ Canada dealers
-- AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y'))
--w/o Canada dealers
AND (SUBSTR(dlr.dealer_number, 1, 1) NOT IN ('X','Y','N'))
AND NVL(gl_code, 'MISC') NOT LIKE 'DDSS%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMX%'
AND NVL(gl_code, 'MISC') NOT LIKE 'SSMU%'
AND pdim.part_ind = 'Y'
AND dlr.dealer_key = regdist.dealer_key(+)
AND dlr.dealer_key = pfact.dealer_key
-- AND regdist.region_district_key (+) = pfact.region_district_key -- for region/district history only
AND org.organization_key = pfact.organization_key
AND pgrp.product_group = pvdr.product_group
-- AND cr.credit_reason_key = pfact.credit_reason_key
-- AND dship.direct_ship_program_key(+) = pfact.direct_ship_program_key
AND ordr.order_type_key = pfact.order_type_key
AND pri.priority_key = pfact.priority_key
AND pdim.part_key = pfact.shipped_part_key
AND pdim.organization_key = pfact.organization_key
AND pvdr.price_vendor_key = pfact.price_vendor_key
AND supp.supplier_key = pfact.supplier_key
GROUP BY pfact.invoice_year,
dlr.dealer_key,
dlr.dealer_number,
dlr.name,
dlr.city||', '||dlr.state,
dlr.country,
dlr.classification,
dlr.start_date_active,
dlr.end_date_active,
dlr.old_dealer_number,
regdist.region_code,
regdist.region_desc,
regdist.regional_manager,
regdist.district_code,
regdist.district_desc,
regdist.district_manager,
dlr.facing_dealer,
dlr.parts_parent_dealer,
-- dlr.parent_location,
dlr.financial_report_dealer,
dlr.consolidation_dealer,
dlr.parts_management_group,
org.organization_code,
org.division_code,
supp.supplier_code,
supp.name,
pgrp.product_group_desc,
-- pvdr.parent_code,
-- pvdr.part_classification,
-- pvdr.proprietary_ind,
-- dship.direct_ship_program,
ordr.order_type,
pri.priority_code

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.94 0.91 45 1114 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16460 383.89 383.32 91733 37799840 44 246878
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16462 384.83 384.24 91778 37800954 44 246878

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
246878 SORT GROUP BY
2364550 CONCATENATION
0 FILTER
0 FILTER
0 NESTED LOOPS OUTER
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 PARTITION RANGE ALL PARTITION: 1 10
0 PARTITION LIST ALL PARTITION: 1 1
0 TABLE ACCESS BY LOCAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: 1 10

0 INDEX RANGE SCAN INVDATEKEY_BMNDX PARTITION: 1 10 (object id 1778698)
0 TABLE ACCESS BY INDEX ROWID PART_DIM
0 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
0 TABLE ACCESS BY INDEX ROWID DEALER_DIM
0 INDEX UNIQUE SCAN DEALER_DIM_PK (object id 1758999)
0 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
0 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
0 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
0 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
0 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
0 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
0 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
0 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
0 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
0 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
0 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
0 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
0 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
0 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1778746)
41201 SORT AGGREGATE
41201 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
41201 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1778746)
2364550 FILTER
2364550 FILTER
2364550 NESTED LOOPS OUTER
2364550 NESTED LOOPS
2364571 NESTED LOOPS
2364609 NESTED LOOPS
2428551 NESTED LOOPS
2428551 NESTED LOOPS
2428551 NESTED LOOPS
2428551 NESTED LOOPS
2749557 NESTED LOOPS
2749606 PARTITION RANGE ALL PARTITION: 1 10
2749606 PARTITION LIST ALL PARTITION: 1 1
2749606 TABLE ACCESS BY LOCAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: 1 10
2928781 INDEX RANGE SCAN INVDATEKEY_BMNDX PARTITION: 1 10 (object id 1778698)
2749557 TABLE ACCESS BY INDEX ROWID PART_DIM
2749606 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
2428551 TABLE ACCESS BY INDEX ROWID DEALER_DIM
2749557 INDEX UNIQUE SCAN DEALER_DIM_PK (object id 1758999)
2428551 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
2428551 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
2428551 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
2428551 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
2428551 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
2428551 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
2364609 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
2364609 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
2364571 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
2364571 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
2364550 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
2364550 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
2364550 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
2364550 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1778746)
41201 SORT AGGREGATE
41201 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
41201 INDEX RANGE SCAN REGION_DIST_DIM_IDX_TYPEDLR (object id 1778746)
********************************************************************************



2.

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.94 0.94 33 1103 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16460 382.68 382.32 91801 37799840 44 246878
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16462 383.62 383.26 91834 37800943 44 246878

3.

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.83 0.95 44 1252 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16460 320.80 320.76 134971 32791923 18 246878
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16462 321.63 321.72 135015 32793175 18 246878

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows Row Source Operation
------- ---------------------------------------------------
246878 SORT GROUP BY
2364550 FILTER
2364550 FILTER
2364550 HASH JOIN OUTER
2364550 NESTED LOOPS
2364550 NESTED LOOPS
2364550 NESTED LOOPS
2364550 NESTED LOOPS
2364571 NESTED LOOPS
2364609 NESTED LOOPS
2428551 NESTED LOOPS
2749557 NESTED LOOPS
2749606 PARTITION RANGE ALL PARTITION: 1 10
2749606 PARTITION LIST ALL PARTITION: 1 1
2749606 TABLE ACCESS BY LOCAL INDEX ROWID PART_ORDER_LINE_FACT PARTITION: 1 10
2928781 BITMAP CONVERSION TO ROWIDS
188 BITMAP INDEX RANGE SCAN PART_ORDER_LINE_IDX_INVDT PARTITION: 1 10 (object id 1778448)
2749557 TABLE ACCESS BY INDEX ROWID PART_DIM
2749606 INDEX UNIQUE SCAN PART_DIM_PK (object id 11357)
2428551 TABLE ACCESS BY INDEX ROWID DEALER_DIM
2749557 INDEX UNIQUE SCAN DEALER_DIM_PK (object id 1758999)
2364609 TABLE ACCESS BY INDEX ROWID SUPPLIER_DIM
2364609 INDEX UNIQUE SCAN SUPPLIER_DIM_PK (object id 11390)
2364571 TABLE ACCESS BY INDEX ROWID PRICE_VENDOR_DIM
2364571 INDEX UNIQUE SCAN PRICE_VENDOR_DIM_PK (object id 11384)
2364550 TABLE ACCESS BY INDEX ROWID PRODUCT_GROUP_DIM
2364550 INDEX RANGE SCAN PRODUCT_GROUP_INDX (object id 1775107)
2364550 TABLE ACCESS BY INDEX ROWID ORGANIZATION_DIM
2364550 INDEX UNIQUE SCAN ORGAN_DIM_PK (object id 11176)
2364550 TABLE ACCESS BY INDEX ROWID ORDER_TYPE_DIM
2364550 INDEX UNIQUE SCAN ORDER_TYPE_DIM_PK (object id 11174)
2364550 TABLE ACCESS BY INDEX ROWID PRIORITY_DIM
2364550 INDEX UNIQUE SCAN PRIORITY_DIM_PK (object id 11385)
3709 TABLE ACCESS FULL REGION_DISTRICT_DIM
1320 SORT AGGREGATE
1320 TABLE ACCESS BY INDEX ROWID REGION_DISTRICT_DIM
1320 BITMAP CONVERSION TO ROWIDS
1320 BITMAP INDEX SINGLE VALUE REGION_DIST_DIM_IDX_TYPEDLR (object id 1778630)

********************************************************************************

4.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.91 0.86 44 1252 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 16460 318.15 319.06 134907 32791923 18 246878
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16462 319.06 319.92 134951 32793175 18 246878


P.S. - There is no change in the execution plan with the PUSH_SUBQ hint.

And even though the number of rows returned is 246,878, the total number of rows in the fact table is around 4 million - so why do you say the *index* should not be used, when the result is only around 5% of the total rows?

Please suggest a solution by which I can reduce the time it's taking in production.

Thanks,
Srinivas

Tom Kyte
July 11, 2006 - 7:20 pm UTC

does the query return MORE data every day?

I said there is a good chance the index should not be used - not really knowing how much data you really have factors into that comment.

disk > query?

Phil Miesle, July 12, 2006 - 11:38 am UTC

Tom,

From what I can find, Logical IO = query + current. Fine, except I have a number of cases where physical IO (disk) is greater than logical IO (10gR2). In reality, this problem is very much compounded by the fact that I'm doing scattered reads, so each physical IO will be reading in db_file_multiblock_read_count blocks.

My tuning goal is to minimize Logical IO (and consequently latching and CPU and all that jazz). Short of revising to say that "logical IO = disk+query+current", how do I reconcile this?

Here's an example:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 6910 165.20 226.77 171411 28983 0 34540738
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6912 165.20 226.77 171411 28983 0 34540738

Rows Row Source Operation
------- ---------------------------------------------------
34540738 HASH GROUP BY (cr=28983 pr=171411 pw=144894 time=161517397 us)
34860108 VIEW (cr=28983 pr=26517 pw=0 time=104581041 us)
34860108 UNION-ALL (cr=28983 pr=26517 pw=0 time=69720928 us)
31928466 TABLE ACCESS FULL ONEC (cr=26538 pr=26517 pw=0 time=698 us)
2931642 TABLE ACCESS FULL TWOC (cr=2445 pr=0 pw=0 time=134 us)


Tom Kyte
July 12, 2006 - 5:07 pm UTC

no one ever said physical IO < logical IO.

direct path reads (no logical IO)
reads from temp (no logical IO)

yours looks like a case of writes to temp: the pw (physical writes) were writes to temp, and the pr's increased reading that data back from temp.

is your hash area/pga_aggregate_target set appropriately.

This is a 'batch' query - we have to think about them a little differently in most cases. You are not having hundreds of concurrent users hitting the database trying to run this query at the same time (oltp) - it is "different"

disk > query?

Phil Miesle, July 12, 2006 - 11:42 am UTC

Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 6910 0.00 0.00
db file scattered read 845 0.01 2.06
db file sequential read 1 0.00 0.00
direct path write temp 4674 0.35 32.03
SQL*Net message from client 6910 0.04 131.37
SQL*Net more data to client 368945 0.04 3.81
direct path read temp 4674 0.00 0.17


Following on (a bit), much of the physical IO happens as part of the Hash Join (which isn't logical IO, at least not in the SGA sense). I think this is perhaps the answer...

And "logical IO" (query+current) is the SGA logical IO whilst physical IO (disk) is a combination of SGA physical IO (getting data into the SGA) and also PGA physical IO (like hashes that overflow the hash_area_size).
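One way to see this split directly is in the session statistics, where buffer cache IO and direct temp IO are reported separately. A sketch - statistic names as they appear in 10g, so check v$statname on your release:

```sql
-- Buffer-cache IO vs. direct temp IO for the current session.
-- 'disk' in tkprof includes the direct temp reads, which never
-- pass through the buffer cache and so have no matching logical IO.
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ( 'session logical reads',
                    'physical reads',
                    'physical reads direct temporary tablespace',
                    'physical writes direct temporary tablespace' )
 order by sn.name;
```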

Tom Kyte
July 12, 2006 - 5:07 pm UTC

direct path reads (parallel query, non-cached lobs, reads from temp)....

physical IO without any corresponding logical IO

yes, data increases

A reader, July 12, 2006 - 1:08 pm UTC

Tom,

Yes, the data increases each day in the result set.

Thanks,


Tom Kyte
July 12, 2006 - 3:55 pm UTC

so, why don't you expect it would take longer over time?

Just looking for another 'efficient' way if there is one..

Srinivas Narashimalu, July 12, 2006 - 4:54 pm UTC

Tom,

Yes, I expect it to take longer as the data is getting larger. But I want to confirm from the EXPERT that the SQL and execution plan are efficient and that nothing more could be improved with respect to the SQL.

And to check whether the job this "select" does could be done in any other way - for example, I could restrict the date to a single day and insert those values into a table (or create a materialized view and refresh it every day with one day's values).

Thanks for your response!

Srinivas

Tom Kyte
July 12, 2006 - 5:47 pm UTC

sorry, it is a BIG query and all - I'm not going to parse it and break it apart in my head.

I'd make sure it was using all_rows optimization as a starter.

I'd also run the query in trace mode with statistics level set to all and look at the row source operation which will then tell you how much IO each step is doing (looking for steps that perform TONS of LIO but result in few output rows)

mixed-mode environment

Phil Miesle, July 13, 2006 - 6:09 am UTC

<quote>

is your hash area/pga_aggregate_target set appropriately.

This is a 'batch' query - we have to think about them a little differently in most cases. You are not having hundreds of concurrent users hitting the database trying to run this query at the same time (oltp) - it is "different"

</quote>

Yep, this is a big batch query. However, I can't get the auto PGA memory manager (APMM) to differentiate very well between 'batch' and 'oltp' users (this is a mixed-use database).

processes=50 (for OLTP daytime use) and pga_aggregate_target=4GB (for nighttime batch ops). Daytime is fine, it is the nightly batch that concerns me. In this case, I have 4 concurrent batch processes running...so *conceptually* the APMM should allow (roughly) 1 GB per process.

In reality I don't observe this. Looking at v$sql_workarea_active during my query, I see expected size=400M, actual mem used=350M, and tempseg size=450M. It believes this to be an optimal execution (number of passes=0) yet it has overflowed to temp space.

This indicates to me that APMM is 'holding back' some memory for the 'extra' 46 processes (which are not connected to the instance). I don't really understand the mechanics of how this works and how to influence it (Concepts and Perf Tuning guides provide limited info here).

I tried to disable APMM (pga_aggregate_target=0) and then use the *_area_size parameters (configured in the spfile for OLTP use and then overridden by session during batch use), but this apparently disables my new favourite feature (hash group by).

I suppose my question is this: where can I learn more about how to configure and optimize the APMM? This is a moderately complex area to try to learn experimentally; without having a comprehensive set of testcases (and a decent volume of data for the batch cases) most conclusions will have dubious validity.

Tom Kyte
July 13, 2006 - 8:03 am UTC

automatic pga memory management does not work like that - it would not give 1gig out when set to 4gig with 4 users (there are formulas, they change from release to release, but think more like 5%)

for batch, you can always

alter session set workarea_size_policy=manual;
alter session set hash_area_size = xxxxxxxxxxxxxx;

do you have access to "Expert Oracle Database Architecture"? I cover how automatic pga memory works/what it does.

drop table t;

create table t as select * from all_objects;

alter session set sql_trace=true;
alter session set workarea_size_policy = auto;
select owner, count(*)
from t workarea_auto
group by owner;
alter session set workarea_size_policy = manual;
select owner, count(*)
from t workarea_manual
group by owner;




select owner, count(*)
from t workarea_auto
group by owner

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 0.04 0.04 350 694 0 37
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.04 0.04 350 695 0 37

Rows Row Source Operation
------- ---------------------------------------------------
37 HASH GROUP BY (cr=694 pr=350 pw=0 time=41860 us)
50112 TABLE ACCESS FULL T (cr=694 pr=350 pw=0 time=100285 us)
********************************************************************************
select owner, count(*)
from t workarea_manual
group by owner

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 0.03 0.03 0 694 0 37
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.03 0.03 0 695 0 37

Rows Row Source Operation
------- ---------------------------------------------------
37 HASH GROUP BY (cr=694 pr=0 pw=0 time=32455 us)
50112 TABLE ACCESS FULL T (cr=694 pr=0 pw=0 time=50169 us)


does not seem to have disabled it?

Thanks!

Srinivas Narashimalu, July 13, 2006 - 9:27 am UTC

Thanks for the response Tom!

I will try that, and in case I don't find anything I will opt for some other way of doing this (like refreshing just one day's data).

Thanks,
Srinivas

Direct IO

praveen, July 14, 2006 - 2:32 am UTC

Hi Tom,

I am interested in looking into the Direct IO capabilities of filesystems
(specifically HP and maybe Solaris) and what gains in performance this could
have.

I have read documents, and tried to search HP/Oracle and various google
searches but have come up with very little.

How do I

1) Investigate what the filesystem options are with regard to direct IO, and for which types of filesystems they can be applied (e.g. VxFS)?
2) Build a test case/test harness to measure the pros/cons - i.e., how do I "prove" its effect?

Thanks,
Praveen


Tom Kyte
July 14, 2006 - 8:31 am UTC

I'd ask the hardware/driver vendors myself (since I don't really do this bit too much).

things not always as they seem

Phil Miesle, July 14, 2006 - 12:35 pm UTC

The 'disk' and 'pr=' stats in tkprof aren't quite what they seem on the surface of it, at least not where multi-block index I/O is concerned. My wait events (db file scattered read) indicate that I went to disk 3119 times, yet the tkprof stats for the query think I went to disk 96831 times (one for each block on each pass).

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 6910 115.98 146.72 96831 103384 0 34540738
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6912 115.99 146.74 96831 103384 0 34540738


Rows Row Source Operation
------- ---------------------------------------------------
34540738 UNION-ALL (cr=103384 pr=96831 pw=0 time=143217056 us)
31928466 HASH JOIN RIGHT OUTER (cr=54884 pr=48438 pw=0 time=100839474 us)
2931642 INDEX FAST FULL SCAN PK_TWOD (cr=4073 pr=4054 pw=0 time=1147 us)(object id 91809)
31928466 INDEX FAST FULL SCAN PK_ONED (cr=50811 pr=44384 pw=0 time=31930530 us)(object id 91807)
2612272 HASH JOIN ANTI (cr=48500 pr=48393 pw=0 time=35999825 us)
2931642 INDEX FAST FULL SCAN PK_TWOD (cr=4073 pr=4053 pw=0 time=535 us)(object id 91809)
31928466 INDEX FAST FULL SCAN PK_ONED (cr=44427 pr=44340 pw=0 time=928 us)(object id 91807)


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 6910 0.00 0.00
db file sequential read 15 0.00 0.00
db file scattered read 3119 0.00 9.44
SQL*Net message from client 6910 0.05 99.95
SQL*Net more data to client 357521 0.02 3.75


Also, following up on my findings with pga aggregate target. Setting the parameter (in the spfile) disables the HASH GROUP BY, but disabling per session (with workarea_size_policy) doesn't disable it.

HOWEVER, I have continued to notice 'inefficient' behaviour when combining APMM with workarea_size_policy=manual. This setting (with hash_area_size=1000000000) went to temp space with the same execution plan as with wsp=auto (which also went to temp, but not as badly). These two things don't co-exist very well together, at least not for the big 'batch-like' stuff I tend to run.

It was an interesting set of experiments regardless, I discovered that my best execution plan happened in manual mode (without the hash join) and with USE_HASH and HASH_AJ hints I was able to get the optimizer to match the best plan...ended up with the fewest logical IOs, lowest CPU consumption, and zero writes to temp.

Tom Kyte
July 14, 2006 - 1:08 pm UTC

one measures "how many times", the other "how many blocks". They are what they seem?


I don't know what you mean by the 'inefficient' behaviour? I would expect the plans to be the same - you just gave it more hash area to use (and hence the run time would be a bit better?)




Puzzling Tkprof output.

reader, August 23, 2006 - 8:43 am UTC

Hi Tom,

I was told in a tuning session that the elapsed time will always be greater than CPU time, but I got the following trace output for a query and I am confused.

Kindly let me know in which scenarios CPU time may be greater than elapsed time.

Expecting detailed explantion .

Thanks in advance.

------------------------------------------

SELECT *
FROM
table_name


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 508848 8.42 6.91 0 0 0 0
Fetch 37145904 321.51 193.77 17 37654752 0 36637056
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 37654753 329.93 200.69 17 37654752 0 36637056

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 18 (FCCPOL) (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
36637056 TABLE ACCESS FULL MSTM_SWIFT_CHAR



Thanks
Reader


Tom Kyte
August 27, 2006 - 3:39 pm UTC

... "Expecting detailed explantion"... (that must be a cultural thing...)


<quote src=Effective Oracle by Design>
A question that frequently comes up with regards to TKPROF and the report is this – how could output such as the following be produced:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----- -------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 14755 12.77 12.60 4 29511 856828 14755
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ----- -------- ---------- ----------
total 14756 12.77 12.60 4 29511 856828 14755

How can CPU time be larger than elapsed time? This discrepancy is due to the way timings are collected – and attempts to time very fast operations or lots of operations. For example – suppose you were using a stopwatch that measured only down to the second. You timed 50 events. Each event seemed to have taken 2 seconds according to the stop watch. That means you had 100 seconds of time elapsed during these events, right? Well, probably not. Suppose each event really took 2.99 seconds – you really had almost 150 seconds of time there.
Taking it a step further – suppose the stopwatch was continuously running. So, when the event started, you would look at the stopwatch and then when the event finished, you would look again and subtract the two numbers. This is closer to what happens with timing on a computer – you look at the system “watch”, do something and look again – the delta represents the timing of the event. Now, we’ll perform the same timing test as above. Again, each event appears to have taken 2 seconds – but they may have taken just 1.01! How so? Well, when the event started, the clock was really at time 2.99 – but you only saw “2” (the granularity of the stopwatch). When the event completed, the stopwatch reported 4 (and the real time was 4.00). The delta as you see it – 2, the “real” delta – 1.01.

Now, over time these discrepancies can accumulate in the aggregate total. The rule of averages would have them effectively cancel each other out more or less but over time, a small error can creep in. That is the cause of the discrepancy above – where the elapsed time is less than the CPU time. At the lowest level, Oracle is gathering statistics for timing at either the millisecond or microsecond level. And, further, it may time some events using one clock and other events using a different one – this is unavoidable as the timing information is gathered from the operating system, using its APIs. In this example, we executed a statement 14,755 times – meaning the average CPU time to execute that statement was 0.00865469 seconds. If we were to run this test over and over – we would find that the timings for CPU and elapsed are more or less “the same”. In general, this error is not so large as to send you looking down the wrong path, but it can be confusing the first time you see it.
</quote>


And in current releases - whereby CPU time is measured in 1/100ths of a second whereas elapsed time is in 1/1000000ths of a second - the problem can be "worse" when you measure millions of discrete events - as you have here.

executing a full scan of the same table 508,848 times - to retrieve 36,637,056 rows - I'd be much less worried about the cpu/elapsed time issue than "WHY THE HECK ARE WE DOING THAT, what went wrong in our design, we need to fix that algorithm big time"
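The usual cure for that pattern is to make the lookup set-based, so the table is scanned once rather than once per driving row. A hedged sketch - the driving table and join column below are hypothetical stand-ins, since the original loop isn't shown:

```sql
-- Instead of a PL/SQL loop that runs "select * from mstm_swift_char"
-- once per driving row (508,848 executions), join once and let the
-- database do a single pass.  DRIVING_TABLE and SOME_KEY are made-up
-- names for illustration only.
select d.some_key, s.*
  from driving_table d, mstm_swift_char s
 where s.some_key = d.some_key;
```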



help on tkprof

Ritesh, August 29, 2006 - 8:54 am UTC

Tom,

When I try to save the data through forms it is taking around 3 minutes for each save.

I am using Oracle 8.1.7.3 and forms 6i on Suse Linux 7.1

I enabled trace for the session where I was saving data through forms. In the *.trf file I can see one section with some pointers to possible performance problems. That section from the trf file is pasted below:


********************************************************************************


SELECT A.FPL_ID,A.PZUG_ID,A.PZV_KEY,A.EINBRUCHZEIT,A.STRECKE_ID,
A.BST_REIHENFOLGE,COUNT(*),B.ZN
FROM
PZV_BSTKETTE A,PFPL_ZUG B WHERE (A.PZUG_ID = B.PZUG_ID ) GROUP BY A.FPL_ID, A.PZUG_ID,A.PZV_KEY,A.EINBRUCHZEIT,A.STRECKE_ID,A.BST_REIHENFOLGE,B.ZN HAVING COUNT(*) > 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 20 0.00 0.00 0 0 0 0
Fetch 20 4.93 89.37 12388 157899 1400 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 41 4.93 89.37 12388 157899 1400 0


Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 40 (recursive depth: 1)
********************************************************************************


Then I tried to execute the above mentioned query manually and enabled the trace for this session. That section (from *.trf file) corresponding to that query is pasted below:


********************************************************************************


SELECT A.FPL_ID,A.PZUG_ID,A.PZV_KEY,A.EINBRUCHZEIT,A.STRECKE_ID,
A.BST_REIHENFOLGE,COUNT(*),B.ZN
FROM
PZV_BSTKETTE A,PFPL_ZUG B WHERE (A.PZUG_ID = B.PZUG_ID ) GROUP BY A.FPL_ID,
A.PZUG_ID,A.PZV_KEY,A.EINBRUCHZEIT,A.STRECKE_ID,A.BST_REIHENFOLGE,B.ZN HAVING COUNT(*) > 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.32 4.57 620 7895 70 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.32 4.57 620 7895 70 0


Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 24


Rows Row Source Operation
------- ---------------------------------------------------
0 FILTER
44625 SORT GROUP BY
44624 NESTED LOOPS
3665 TABLE ACCESS FULL PFPL_ZUG
44624 TABLE ACCESS BY INDEX ROWID PZV_BSTKETTE
48288 INDEX RANGE SCAN (object id 6452)


********************************************************************************


Please let me know your comments on why this query takes more time when run through Forms.

Thanks, Ritesh

Tom Kyte
August 29, 2006 - 3:26 pm UTC

more time than what?

That's what you get for using the RBO... No way it should use an index on anything.

the cbo would do two full scans and a nice big juicy hash join.

but question for you - since there are no inputs into this query, one wonders why your form executes it 20 times for a single save??

another question for you - this query isn't there to do "duplicate prevention" is it????!?!?

why do you run this query - do you know?
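To see the plan Tom describes without moving the whole application off the RBO, this one statement could be steered to the cost-based optimizer - assuming the tables have statistics. A sketch (the hints here are illustrative, not a recommendation):

```sql
-- ALL_ROWS makes the CBO cost this statement; USE_HASH asks for the
-- hash join Tom expects instead of the RBO's indexed nested loops.
-- Both tables must be analyzed for this to make sense.
SELECT /*+ ALL_ROWS USE_HASH(A B) */
       A.FPL_ID, A.PZUG_ID, A.PZV_KEY, A.EINBRUCHZEIT,
       A.STRECKE_ID, A.BST_REIHENFOLGE, COUNT(*), B.ZN
  FROM PZV_BSTKETTE A, PFPL_ZUG B
 WHERE A.PZUG_ID = B.PZUG_ID
 GROUP BY A.FPL_ID, A.PZUG_ID, A.PZV_KEY, A.EINBRUCHZEIT,
          A.STRECKE_ID, A.BST_REIHENFOLGE, B.ZN
HAVING COUNT(*) > 1;
```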

Some questions regarding tkprof output

A reader, September 18, 2006 - 12:17 pm UTC

Hi,

Here is the SQL and tkprof output of 10046 trace


SELECT listing_type.listing_type_desc, tag_lvl_cust.parent_lvl_cust_name,
tag_lvl_cust.parent_lvl_cust_id,
tag_lvl_market.region_lvl_market_name, tag_lvl_market.market_name,
eq_product_listing.product_id, date_hierarchy.YEAR,
date_hierarchy.month_name,
SUM ( eq_trade_fact.retained_comm
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.soft_rebate
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.funded_payout
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.csc_payaway
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.mutual_fund_transfer
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.retail_payaway
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.gross_comm
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.cap_edge
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.cap_mkt_spread
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.local_broker_comm
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.market_expense
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.guaranteed_write_off
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.risk_premium
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( (eq_trade_fact.fac + eq_trade_fact.risk_premium)
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.other_costs
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
SUM ( eq_trade_fact.edge
* monthly_exchange_rate.annual_exchange_rate
/ monthly_exchange_rate.monthly_exchange_rate
),
tag_lvl_prod.booking_lvl_prod_name
FROM eq_trade_fact,
eq_prod_phys_hier,
eq_product_listing,
listing_type,
tag_lvl_cust,
tag_lvl_market,
date_hierarchy,
monthly_exchange_rate,
tag_lvl_prod
WHERE (eq_prod_phys_hier.child_product_id = eq_trade_fact.prod_level_id)
AND (eq_product_listing.product_id = eq_prod_phys_hier.parent_product_id
)
AND (date_hierarchy.project_date = eq_trade_fact.trade_date)
AND (eq_trade_fact.date_id = monthly_exchange_rate.month_id)
AND (monthly_exchange_rate.currency_id = eq_trade_fact.currency_id)
AND (listing_type.listing_type_id = eq_trade_fact.listing_type_id)
AND (eq_trade_fact.cust_level_id = tag_lvl_cust.cust_level_id)
AND (tag_lvl_market.market_id = eq_trade_fact.market_id)
AND (tag_lvl_prod.prod_level_id = eq_trade_fact.prod_level_id)
AND eq_trade_fact.system_id IN (SELECT system_id
FROM eq_host_system)
AND ( eq_product_listing.product_id IN
(1943,
400032,
400056,
9000010,
600810,
703154,
702964,
702755,
9000015,
600063,
387,
611,
600147,
500040,
500046,
600135,
9000823,
702339,
702651
)
AND ( (date_hierarchy.prev_year = 'Y')
OR (date_hierarchy.ytd_last_month = 'Y')
)
)
GROUP BY listing_type.listing_type_desc,
tag_lvl_cust.parent_lvl_cust_name,
tag_lvl_cust.parent_lvl_cust_id,
tag_lvl_market.region_lvl_market_name,
tag_lvl_market.market_name,
eq_product_listing.product_id,
date_hierarchy.YEAR,
date_hierarchy.month_name,
tag_lvl_prod.booking_lvl_prod_name

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.60 0.67 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 42548 5473.71 7631.82 2720367 537354957 130 638191 ------>1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 42550 5474.31 7632.49 2720367 537354957 130 638191

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 115

Rows Row Source Operation
------- ---------------------------------------------------
638191 SORT GROUP BY (cr=537354957 r=2720367 w=652017 time=449054971 us) ------> 2
28148818 NESTED LOOPS (cr=537354957 r=2693359 w=597992 time=6728638273 us) ----> 3
28148818 HASH JOIN (cr=2527415 r=2693359 w=597992 time=3326314502 us)
15516 TABLE ACCESS FULL MONTHLY_EXCHANGE_RATE (cr=157 r=0 w=0 time=11686 us)
28148818 NESTED LOOPS (cr=2527258 r=2693359 w=597992 time=3235354098 us)
28213700 HASH JOIN (cr=2527256 r=2693359 w=597992 time=3105170153 us)
11 TABLE ACCESS FULL LISTING_TYPE (cr=3 r=0 w=0 time=150 us)
28213700 HASH JOIN (cr=2527253 r=2693359 w=597992 time=3012811542 us)
28247167 HASH JOIN (cr=2438849 r=2057967 w=0 time=2363772889 us)
114 TABLE ACCESS FULL TAG_LVL_MARKET (cr=5 r=0 w=0 time=329 us)
28253128 HASH JOIN (cr=2438844 r=2057967 w=0 time=2314443858 us)
608 TABLE ACCESS FULL DATE_HIERARCHY (cr=332 r=0 w=0 time=14860 us)
28253128 HASH JOIN (cr=2438512 r=2057967 w=0 time=2260660658 us)
8458 TABLE ACCESS FULL TAG_LVL_PROD (cr=675 r=0 w=0 time=13793 us)
28253875 HASH JOIN (cr=2437837 r=2057967 w=0 time=2211438745 us)
4884 INDEX FAST FULL SCAN EQ_PROD_PHYS_HIER_I0 (cr=97 r=0 w=0 time=86314 us)(object id 2045869)
28267189 PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=2437408 r=2057967 w=0 time=2158593748 us)
28267189 TABLE ACCESS FULL EQ_TRADE_FACT PARTITION: KEY KEY (cr=2437408 r=2057967 w=0 time=2150050157 us)
758599 TABLE ACCESS FULL TAG_LVL_CUST (cr=88404 r=37400 w=0 time=59042660 us)
28148818 INDEX UNIQUE SCAN EQ_HOST_SYSTEM_I0 (cr=2 r=0 w=0 time=59837237 us)(object id 2046603)
28148818 INLIST ITERATOR (cr=534827542 r=0 w=0 time=3335122428 us)
28148818 INDEX UNIQUE SCAN EQ_PRODUCT_LISTING_I0 (cr=534827542 r=0 w=0 time=2095532378 us)(object id 2045874)


638191 SORT GROUP BY (cr=537354957 r=2720367 w=652017 time=449054971 us) ------> 2
28148818 NESTED LOOPS (cr=537354957 r=2693359 w=597992 time=6728638273 us) ----> 3

a) Why is the time in 2 less than in 3, even though 2 occurred after 3?
b) The majority of the time is spent by the query in step 3 (the nested loop without any table, immediately after the hash join). Is there any way we can avoid this step?
c) Since the output is so huge, even after the group by,
Fetch 42548 5473.71 7631.82 2720367 537354957 130 638191 ------>1

it has to do 42,548 fetches, depending on the arraysize. Is this huge number of fetches contributing to the high time of the nested loop (step 3)?

thanks
Apraim
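On (c): 638,191 rows in 42,548 fetches works out to about 15 rows per fetch - SQL*Plus's default arraysize. If the client permits it, a larger array size cuts the round trips considerably; for example, in SQL*Plus:

```sql
-- Fetch 500 rows per round trip instead of the default 15; with
-- 638,191 rows that is roughly 1,300 fetch calls instead of ~42,500.
set arraysize 500
```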



execute > rows for insert.

Steve, November 08, 2006 - 9:56 am UTC

I know I've seen this explained somewhere, but I can't locate it.
What could cause the execute count to be greater than the number of rows inserted?

INSERT INTO LIST_PROD (LP_CD, LP_NUM, LP_FL_NUM, LP_TYPE)
VALUES (:B4 , :B3 , :B2 , :B1 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2720 3.14 3.05 4 6 19027 470
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2721 3.14 3.05 4 6 19027 470

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 1804 (recursive depth: 1)

Thanks.

Tom Kyte
November 08, 2006 - 7:47 pm UTC

ops$tkyte%ORA10GR2> create table t ( x int primary key );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> begin
  2      for i in 1 .. 10
  3      loop
  4          insert into t values ( i );
  5          begin
  6              insert into t values ( i );
  7          exception
  8              when dup_val_on_index then null;
  9          end;
 10      end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.



INSERT INTO T
VALUES
 ( :B1 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute     20      0.01       0.01          0          2        122          10
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       22      0.01       0.01          0          2        122          10




Lots of failures - I'd look into that, raising those errors over and over and over and over is really expensive. 
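One way to stop paying for those exceptions is to filter the duplicates inside the statement itself. A sketch against the original insert - this assumes LP_CD is the column with the unique constraint, which the excerpt doesn't actually show:

```sql
-- Insert only when the key is not already present, instead of raising
-- and catching DUP_VAL_ON_INDEX ~2,250 times.  LP_CD as the unique
-- key is an assumption here.
INSERT INTO LIST_PROD (LP_CD, LP_NUM, LP_FL_NUM, LP_TYPE)
SELECT :B4, :B3, :B2, :B1
  FROM DUAL
 WHERE NOT EXISTS (SELECT NULL
                     FROM LIST_PROD
                    WHERE LP_CD = :B4);
```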

bingo!

Steve, November 09, 2006 - 3:57 pm UTC

I crawled through the plsql code and that's exactly what it's doing.

Thanks,
Steve

Rows column

Karmit, November 17, 2006 - 1:34 pm UTC

Hi Tom,

I have a table called KOPL which has around 475K records.
It's also got loads of indexes on it.
A particular query we traced gave an output like this (tkprof).

-----

Rows Row Source Operation
------- ---------------------------------------------------
0 SORT UNIQUE
0 TABLE ACCESS BY INDEX ROWID TABLE_XYZ
1 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
987 TABLE ACCESS FULL QTABLE_ABC
0 TABLE ACCESS BY INDEX ROWID KOPL
0 BITMAP CONVERSION TO ROWIDS
0 BITMAP AND
987 BITMAP CONVERSION FROM ROWIDS
987 INDEX RANGE SCAN PK_KOPL (object id 123991)
974 BITMAP CONVERSION FROM ROWIDS
4830378 INDEX RANGE SCAN IDX5_KOPL (object id 123996)
974 BITMAP CONVERSION FROM ROWIDS
156789924 INDEX RANGE SCAN IDX10_KOPL (object id 124001)
974 BITMAP CONVERSION FROM ROWIDS
296029742 INDEX RANGE SCAN IDX4_KOPL (object id 123995)
974 BITMAP OR
974 BITMAP CONVERSION FROM ROWIDS
50786696 INDEX RANGE SCAN IDX7_KOPL (object id 123998)
974 BITMAP CONVERSION FROM ROWIDS
29884268 INDEX RANGE SCAN IDX7_KOPL (object id 123998)
974 BITMAP CONVERSION FROM ROWIDS
6011528 INDEX RANGE SCAN IDX7_KOPL (object id 123998)
0 BITMAP CONVERSION FROM ROWIDS
0 INDEX RANGE SCAN IDX7_KOPL (object id 123998)
0 TABLE ACCESS BY INDEX ROWID LPSQTY


-----

Looking at the Rows column, I understand this:

a) QTABLE_ABC has 987 rows in it, and that's the number
of records scanned in this part of the query.

b) 156789924 - this is NOT the number of records in table
KOPL (we know it's definitely not, from doing a count(*)!)
but merely the number of records scanned using the
index IDX10_KOPL?

c) Similarly, 296029742 is the number of records scanned
in this part of the plan while using the index IDX4_KOPL.

and so on...

The problem is: if the table KOPL has only ~475K records,
why does the Rows column for the indexes show such high values?

Will appreciate clarification.

Regards,
Karmit
P.S: and of course this query is quite slow!!

Tom Kyte
November 20, 2006 - 2:17 am UTC

a) qtable_abc has at least 987 records, could have billions more, but after full scanning it, only 987 records satisfied some filter on it.

b) you are doing a nested loops join here - that index work was done 987 times, once per outer row.
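To make the arithmetic behind (b) concrete, here is a quick sketch (plain Python, using the figures from the trace above; the per-probe average is a back-of-the-envelope calculation of my own, not something tkprof reports): when an index range scan sits under a nested loops join, its Rows figure is cumulative over all iterations, so it can dwarf the number of rows in the table.

```python
# Figures copied from the Row Source Operation listing above.
outer_rows = 987              # rows produced by the full scan of QTABLE_ABC
idx10_total_rows = 156789924  # cumulative Rows reported for IDX10_KOPL

# Because the inner index scan ran once per outer row, the cumulative
# Rows figure divided by the outer row count gives the average number
# of index entries scanned per nested-loop probe.
per_probe = idx10_total_rows // outer_rows
print(per_probe)
```

So even though KOPL has only ~475K rows, scanning roughly 159K index entries on each of 987 probes adds up to the 156-million Rows figure.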

tkprof output

Karmit, November 20, 2006 - 10:24 am UTC

Hi Tom,

Regarding the above query, I observe that we have a difference in the run plan for the same query between the Live and Test environment (luckily the test is performing badly for the moment!).

I noticed the following different plans for the exact same query:

LIVE system
===========

----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 127 | 66 (0)|
| 1 | SORT UNIQUE | | 1 | 127 | 66 (0)|
|* 2 | TABLE ACCESS BY INDEX ROWID | CALCOPSQ | 1 | 18 | 2 (50)|
| 3 | NESTED LOOPS | | 1 | 127 | 55 (0)|
| 4 | NESTED LOOPS | | 1 | 109 | 54 (0)|
| 5 | NESTED LOOPS | | 1 | 101 | 53 (0)|
| 6 | NESTED LOOPS | | 1 | 93 | 52 (0)|
| 7 | TABLE ACCESS BY INDEX ROWID| OPEXADJREF | 997 | 49850 | 2 (0)|
|* 8 | INDEX RANGE SCAN | IDX2_OPEXADJREF | 997 | | 7 (0)|
|* 9 | TABLE ACCESS BY INDEX ROWID| KOPLTRX | 1 | 43 | 2 (50)|
|* 10 | INDEX UNIQUE SCAN | PK_KOPLTRX | 484K| | |
| 11 | TABLE ACCESS BY INDEX ROWID | OPSQTY | 1 | 8 | 2 (50)|
|* 12 | INDEX RANGE SCAN | IDX11_OPSQTY | 1 | | 1 (0)|
| 13 | TABLE ACCESS BY INDEX ROWID | LOKPCLASSIF | 2 | 16 | 2 (50)|
|* 14 | INDEX RANGE SCAN | IDX8_LOKPCLASSIF | 2 | | 1 (0)|
|* 15 | INDEX RANGE SCAN | IDX5_CALCOPSQ | 1 | | |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("T2"."BEGDATE"<:Z AND "T2"."ENDDATE">:Z)
8 - access("T0"."OPSADJXREFOBJECTTYPE"=TO_NUMBER(:Z))
9 - filter(("T1"."OPSTRXTYPE"=TO_NUMBER(:Z) OR "T1"."OPSTRXTYPE"=TO_NUMBER(:Z) OR
"T1"."OPSTRXTYPE"=TO_NUMBER(:Z) OR "T1"."OPSTRXTYPE"=TO_NUMBER(:Z)) AND "T1"."MINEFFSTDT"<:Z AND
"T1"."MINEFFENDT">:Z AND "T1"."ACTGCY"=TO_NUMBER(:Z) AND
"T1"."FXCH_ID"=TO_NUMBER(:Z) AND "T1"."SRCPNOT"=TO_NUMBER(:Z) AND
"T1"."RCPNOID"=TO_NUMBER(:Z) AND "T1"."OPSTRXCLASSIF"=TO_NUMBER(:Z))
10 - access("T0"."OPSADJXREFOBJECTID"="T1"."KOPLTRX_ID")
12 - access("T4"."QO_ID"="T1"."KOPLTRX_ID")
14 - access("T3"."QUANTITY_ID"="T4"."QUANTITY_ID")
15 - access("T2"."QTYCLASSIFICATION_ID"="T3"."QTYCLASSIFICATION_ID")



Test System
===========

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 127 | 61 |
| 1 | SORT UNIQUE | | 1 | 127 | 61 |
|* 2 | TABLE ACCESS BY INDEX ROWID | CALCOPSQ | 1 | 18 | 1 |
| 3 | NESTED LOOPS | | 1 | 127 | 50 |
| 4 | NESTED LOOPS | | 1 | 109 | 49 |
| 5 | NESTED LOOPS | | 1 | 101 | 48 |
| 6 | NESTED LOOPS | | 1 | 93 | 46 |
| 7 | TABLE ACCESS BY INDEX ROWID | OPEXADJREF | 987 | 49350 | 2 |
|* 8 | INDEX RANGE SCAN | IDX2_OPEXADJREF | 987 | | 5 |
|* 9 | TABLE ACCESS BY INDEX ROWID | KOPLTRX | 1 | 43 | 46 |
| 10 | BITMAP CONVERSION TO ROWIDS | | | | |
| 11 | BITMAP AND | | | | |
| 12 | BITMAP CONVERSION FROM ROWIDS | | | | |
|* 13 | INDEX RANGE SCAN | PK_KOPLTRX | 1 | | |
| 14 | BITMAP CONVERSION FROM ROWIDS | | | | |
|* 15 | INDEX RANGE SCAN | IDX5_KOPLTRX | 1 | | |
| 16 | BITMAP CONVERSION FROM ROWIDS | | | | |
|* 17 | INDEX RANGE SCAN | IDX10_KOPLTRX | 1 | | |
| 18 | BITMAP CONVERSION FROM ROWIDS | | | | |
|* 19 | INDEX RANGE SCAN | IDX4_KOPLTRX | 1 | | |
| 20 | BITMAP OR | | | | |
| 21 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 22 | INDEX RANGE SCAN | IDX7_KOPLTRX | 1 | | |
| 23 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 24 | INDEX RANGE SCAN | IDX7_KOPLTRX | 1 | | |
| 25 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 26 | INDEX RANGE SCAN | IDX7_KOPLTRX | 1 | | |
| 27 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 28 | INDEX RANGE SCAN | IDX7_KOPLTRX | 1 | | |
| 29 | TABLE ACCESS BY INDEX ROWID | OPSQTY | 1 | 8 | 1 |
|* 30 | INDEX RANGE SCAN | IDX11_OPSQTY | 1 | | 1 |
| 31 | TABLE ACCESS BY INDEX ROWID | LOKPCLASSIF | 2 | 16 | 1 |
|* 32 | INDEX RANGE SCAN | IDX8_LOKPCLASSIF | 2 | | 1 |
|* 33 | INDEX RANGE SCAN | IDX5_CALCOPSQ | 1 | | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("T2"."BEGDATE"<:Z AND "T2"."ENDDATE">:Z)
8 - access("T0"."OPSADJXREFOBJECTTYPE"=TO_NUMBER(:Z))
9 - filter("T1"."MINEFFSTDT"<:Z AND "T1"."MINEFFENDT">:Z AND
"T1"."RCPNOID"=TO_NUMBER(:Z) AND "T1"."OPSTRXCLASSIF"=TO_NUMBER(:Z))
13 - access("T0"."OPSADJXREFOBJECTID"="T1"."KOPLTRX_ID")
15 - access("T1"."FXCH_ID"=TO_NUMBER(:Z))
17 - access("T1"."ACTGCY"=TO_NUMBER(:Z))
19 - access("T1"."SRCPNOT"=TO_NUMBER(:Z))
22 - access("T1"."OPSTRXTYPE"=TO_NUMBER(:Z))
24 - access("T1"."OPSTRXTYPE"=TO_NUMBER(:Z))
26 - access("T1"."OPSTRXTYPE"=TO_NUMBER(:Z))
28 - access("T1"."OPSTRXTYPE"=TO_NUMBER(:Z))
30 - access("T4"."QO_ID"="T1"."KOPLTRX_ID")
32 - access("T3"."QUANTITY_ID"="T4"."QUANTITY_ID")
33 - access("T2"."QTYCLASSIFICATION_ID"="T3"."QTYCLASSIFICATION_ID")

Question is:
In the Live system the optimizer accesses the table KOPLTRX via the primary key index and then filters rows based on the WHERE condition. In the Test system it uses BITMAP CONVERSION FROM ROWIDS instead, accessing rowids for each WHERE clause condition and then combining them with bitmap AND/OR operations to get the result set.

Both systems have exactly the same indexes in place, and the number of records in the two systems is also similar.

Why would the plan be so drastically different? (and elapsed time is also ~30 times slower on Test).

Any idea what could be the issue and pointers on
resolution?

Thanks,
Karmit

Use of statistics

John, November 21, 2006 - 5:35 pm UTC

Tom,

Are statistics used by the optimizer to generate the execution plan only? Do they have any further use after the plan is generated? Do they benefit the 'execution' phase of the SQL?


John

Tom Kyte
November 22, 2006 - 4:13 pm UTC

they are used to optimize a query plan, not during the execution.

What is USER ROWID?

john, November 22, 2006 - 7:02 pm UTC

Thanks, Tom.
Two questions about this execution plan.
1. What is USER ROWID (in step 10)?
2. So step 7 HASH joins the 2 tables. Can you tell me what the optimizer is doing in step 6 by nest-looping to the same BILLBOARD table again?

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 260K| | |
| 1 | SORT AGGREGATE | | 1 | 2 | | | |
| 2 | VIEW | | 1 | 2 | 260K| | |
|* 3 | FILTER | | | | | | |
| 4 | SORT GROUP BY | | 1 | 572 | 260K| | |
|* 5 | HASH JOIN | | 1 | 572 | 260K| | |
| 6 | NESTED LOOPS | | 1 | 502 | 260K| | |
|* 7 | HASH JOIN | | 1 | 186 | 260K| | |
|* 8 | TABLE ACCESS FULL | BILLBOARD | 8 | 976 | 260K| 2623 | 2623 |
|* 9 | TABLE ACCESS FULL | BUCKET | 87 | 5568 | 24 | | |
|* 10 | TABLE ACCESS BY USER ROWID| BILLBOARD | 1 | 316 | 1 | 2623 | 2623 |
| 11 | INDEX FAST FULL SCAN | PK_BUCKET | 23462 | 1603K| 10 | | |
----------------------------------------------------------------------------------------------------


Tom Kyte
November 24, 2006 - 6:11 pm UTC

ops$tkyte%ORA10GR2> select * from dual where rowid in (select rowid from dual);

Execution Plan
----------------------------------------------------------
Plan hash value: 1709275517

------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)
------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |     4 |     3   (0)
|   1 |  NESTED LOOPS               |      |     1 |     4 |     3   (0)
|   2 |   TABLE ACCESS FULL         | DUAL |     1 |     2 |     2   (0)
|   3 |   TABLE ACCESS BY USER ROWID| DUAL |     1 |     2 |     1   (0)
------------------------------------------------------------------------


user rowid means that you - the user - are producing the rowids, not an index range scan or something.


2) I don't have the query, and I hate to GUESS. Look at your query, it should be rather "obvious" probably. 

tkprof output November 20, 2006

Karmit, November 25, 2006 - 6:24 am UTC

Hi Tom,

Looks like my question got missed (November 20, 2006)

Would appreciate comments on that query.

Thanks,
Karmit

Tom Kyte
November 25, 2006 - 9:22 am UTC

no, I've said over and over and over again - I don't look at nor respond to all "reviews"

there was just too much there and it was hard to read, had to skip it.

one would need lots of supporting information - tables, indexes, data volumes, lots of stuff I don't want here. Sorry.

Off the top of your head.

A reader, November 29, 2006 - 2:52 pm UTC

Off the top of your head, taking a look at these wait events, do you know why this is happening?

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.08       0.08          0         24          0           0
Execute      1    154.34    1940.85     181736   10807359     182011           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    154.42    1940.93     181736   10807383     182011           1

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 127

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                     11        0.00          0.01
  log file sync                                3638        0.99       1456.86
  SQL*Net message to dblink                    2651        0.00          0.01
  SQL*Net message from dblink                  2651        0.02          2.29
  log file switch completion                      1        0.27          0.27
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

===========

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     5030      0.86       2.11         53        870          7           0
Execute  51685     70.71     434.60      36641    2700969     178265       68105
Fetch   133859     96.24     183.52     180240   10771683          2      128411
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   190574    167.81     620.24     216934   13473522     178274      196516

Misses in library cache during parse: 71

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    128857        0.11        137.17
  latch free                                    621        0.01          1.34
  rdbms ipc reply                                10        1.16          3.34
  local write wait                                8        0.98          3.37
  db file scattered read                      23618        0.03          5.99
  single-task message                             1        0.09          0.09
  SQL*Net message to dblink                   15637        0.00          0.02
  SQL*Net message from dblink                 15637        0.22         10.06
  log file sync                                   8        0.98          7.80
  buffer busy waits                               1        0.00          0.00
  log buffer space                              341        0.98        291.05
  log file switch completion                      4        0.27          0.43
  SQL*Net more data from dblink                  26        0.01          0.02

40 user SQL statements in session.
4993 internal SQL statements in session.
5033 SQL statements in session.
********************************************************************************

Tom Kyte
November 30, 2006 - 9:24 am UTC

got a sequence in your insert as select there?

looks like a really inefficient query.

I hate guessing - it would have been nice to show us the SQL, you know.
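One mechanical way to read a wait section like the one above is to rank events by their Total Waited column. A small sketch (plain Python; the numbers are copied from the trace in the question, and the ranking itself is my own illustration, not a tkprof feature):

```python
# Wait events for the traced statement: event -> total seconds waited.
waits = {
    "latch free": 0.01,
    "log file sync": 1456.86,
    "SQL*Net message to dblink": 0.01,
    "SQL*Net message from dblink": 2.29,
    "log file switch completion": 0.27,
}
elapsed = 1940.93  # total elapsed seconds from the call summary

# Find the dominant wait and what fraction of elapsed time it explains.
worst = max(waits, key=waits.get)
share = waits[worst] / elapsed
print(worst, round(share * 100), "% of elapsed time")
```

Here 'log file sync' accounts for roughly three quarters of the elapsed time; that event is a commit-time wait, which is why the recursive/commit activity, rather than the statement's plan, deserves the first look.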

How to TKPROF 500MB files ?

VKOUL, December 13, 2006 - 4:39 pm UTC

Hi Tom,

Is there an easy way to TKPROF large trace files ? I have some files 500MB or larger and it is taking forever (hours and hours) to TKPROF these.

Any help would be appreciated.

Thanks


Tom Kyte
December 15, 2006 - 7:54 am UTC

I'll guess

stop using explain=





TKPROF on large trace files ...

VKOUL, December 17, 2006 - 2:36 am UTC

I am not using the EXPLAIN keyword; even then it is taking many hours ...

Tom Kyte
December 17, 2006 - 10:03 am UTC

doesn't sound right - I've not experienced that. Sorry, don't think I'll be able to diagnose it sitting here. tkprof should whip through those as fast as it can read them.

tkprof and Row Source Operation.

Philippe, January 03, 2007 - 12:43 pm UTC

Hi Tom,

(To begin, sorry for my english).

Here an extract of tkprof output :

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT GROUP BY (cr=14552868 r=72570 w=0 time=754813911 us)
7171 TABLE ACCESS BY INDEX ROWID OBJ#(6750) (cr=14552868 r=72570 w=0 time=754701433 us)
203163792 NESTED LOOPS (cr=622146 r=72570 w=0 time=327607924 us)
22643 NESTED LOOPS (cr=148852 r=72570 w=0 time=8415991 us)
37375 HASH JOIN (cr=72704 r=72570 w=0 time=7387342 us)
37375 HASH JOIN (cr=72690 r=72570 w=0 time=6999149 us)
6 TABLE ACCESS FULL OBJ#(6734) (cr=30 r=0 w=0 time=2590 us)
307733 HASH JOIN (cr=72660 r=72570 w=0 time=6595118 us)
4 NESTED LOOPS (cr=8 r=0 w=0 time=247 us)
1 TABLE ACCESS BY INDEX ROWID OBJ#(5942) (cr=3 r=0 w=0 time=83 us)
1 INDEX UNIQUE SCAN OBJ#(5943) (cr=2 r=0 w=0 time=49 us)(object id 5943)
4 TABLE ACCESS FULL OBJ#(6738) (cr=5 r=0 w=0 time=159 us)
330589 TABLE ACCESS FULL OBJ#(5957) (cr=72652 r=72570 w=0 time=6051259 us)
484 TABLE ACCESS FULL OBJ#(6756) (cr=14 r=0 w=0 time=2075 us)
22643 TABLE ACCESS BY INDEX ROWID OBJ#(6750) (cr=76148 r=0 w=0 time=829195 us)
37375 INDEX UNIQUE SCAN OBJ#(6751) (cr=37377 r=0 w=0 time=398350 us)(object id 6751)
203141148 INDEX RANGE SCAN OBJ#(6751) (cr=473294 r=0 w=0 time=161379216 us)(object id 6751)

I don't understand the link between the column named Rows and the r variable.
For example, the last line has Rows = 203141148 and r = 0.
What are these values?

Thanks.

Tom Kyte
January 05, 2007 - 8:25 am UTC

r is physical IO

you didn't do any in that step to get those hundreds of millions of records.
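For anyone reading these 10g-style row source lines, the per-step tags can be pulled apart mechanically. Here is a small sketch (a hypothetical helper of my own, not part of tkprof) that extracts the cr (consistent-mode reads), r (physical reads), w (physical writes) and time (elapsed microseconds) figures from a Row Source Operation line:

```python
import re

# Hypothetical parser for the "(cr=... r=... w=... time=... us)" tags
# tkprof appends to each row source line.
TAG = re.compile(r"\bcr=(\d+) r=(\d+) w=(\d+) time=(\d+) us")

def parse_tags(line):
    m = TAG.search(line)
    if not m:
        return None
    cr, r, w, t = map(int, m.groups())
    return {"cr": cr, "r": r, "w": w, "time_us": t}

line = "203141148  INDEX RANGE SCAN OBJ#(6751) (cr=473294 r=0 w=0 time=161379216 us)(object id 6751)"
stats = parse_tags(line)
print(stats)
```

Run against the last line of the plan above, it returns cr=473294 with r=0: nearly half a million buffer gets for that step, all satisfied from the cache with no physical IO.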

what is cr for this query

A reader, January 18, 2007 - 8:53 am UTC

Hi Tom,

Below is the output of tkprof of a query -

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       15      0.10       0.08          0        443          0         201
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       17      0.11       0.09          0        443          0         201

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 38 (CRMUSER)

Rows     Row Source Operation
-------  ---------------------------------------------------
    201  SORT UNIQUE (cr=2252 pr=0 pw=0 time=951615 us)
    201   COUNT STOPKEY (cr=443 pr=0 pw=0 time=4815 us)
    201    NESTED LOOPS (cr=443 pr=0 pw=0 time=4812 us)
    206     TABLE ACCESS FULL AO_ACCTMOD (cr=235 pr=0 pw=0 time=3567 us)
    201     INDEX RANGE SCAN IX_AO_DEMO_ACCID (cr=208 pr=0 pw=0 time=1943 us)(object id 18490)

Is the cr for this query 443 or 2252?

The query has a DISTINCT, so it sorts the data.

Why doesn't this cr show up in the tabular statistics at the top? Shouldn't it be part of Fetch?

Statspack shows the cr as 2252 for the query.

Regards

tkprof output, version 10.2.0.2.0

Arnaud, January 31, 2007 - 4:55 am UTC

Hi Tom,

I have a question about tkprof output. Lately I see an extra line, "Misses in library cache during execute" -
what does it mean?

an example:

SELECT CODIGEXT
FROM
 TABLE827  WHERE ACTCODE = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      6      0.01       0.01          0          0          0           0
Fetch        6      0.00       0.00          0         12          0           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       13      0.01       0.01          0         12          0           6

Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: CHOOSE
Parsing user id: 29

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      19        0.00          0.00
  SQL*Net message from client                    19        0.00          0.00
********************************************************************************


thx


Tom Kyte
January 31, 2007 - 9:59 am UTC

it means the statement was flushed out of the cache for some reason during the six executions. The SQL plan was not there anymore.

for example

ops$tkyte%ORA10GR2> create table t ( x int, y int );
Table created.

ops$tkyte%ORA10GR2> create or replace procedure p
  2  as
  3  begin
  4          for x in ( select * from t where x = 5 )
  5          loop
  6                  null;
  7          end loop;
  8  end;
  9  /
Procedure created.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> alter session set sql_trace=true;
Session altered.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create index t_idx on t(x);
Index created.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> drop index t_idx;
Index dropped.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p
PL/SQL procedure successfully completed.


tkprof will show two misses - the DDL (creating and then dropping the index) invalidated the cursor and caused it to be implicitly reparsed during execute:

SELECT *
FROM
 T WHERE X = 5


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      6      0.00       0.00          0          4          0           0
Fetch        6      0.00       0.00          0         18          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.00       0.00          0         22          0           0

Misses in library cache during parse: 0
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS FULL T (cr=6 pr=0 pw=0 time=78 us)

A query in 9i runs faster than in 10gR2

yoav ben moha, February 06, 2007 - 9:47 am UTC

Hi Tom,

We just upgraded from 9206 to 10202 on HP-UX.
I ran the first statement against the 9206 database and it ran in 9 seconds.
The same query run against 10202 did not return a result after 4 minutes.
Looking at tkprof for each statement shows that the statement in 10gR2 used much
more physical and logical IO than the one in 9i.
Could you please explain why?

TKPROF: Release 9.2.0.6.0 - Production on Tue Feb 6 16:08:22 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: psdwh_ora_24870.trc
Sort options: default

********************************************************************************

...

SELECT tablespace_name,owner,segment_name,partition_name,segment_type,size_mb,
ROUND(size_mb/1024,2) size_gb,sml,
row_number() OVER (PARTITION BY sml ORDER BY size_mb DESC) rn
FROM(
SELECT tablespace_name,owner,segment_name,partition_name,segment_type,ROUND(size_mb) size_mb,
(CASE WHEN size_mb <= 50 THEN 'SMALL'
WHEN size_mb BETWEEN 50 AND 500 THEN 'MEDIUM'
ELSE 'BIG'
END ) SML
FROM(SELECT tablespace_name,owner,segment_name,partition_name,segment_type,
SUM(bytes)/1024/1024 size_MB
FROM DBA_EXTENTS
WHERE segment_type IN ('TABLE PARTITION','TABLE SUBPARTITION','INDEX PARTITION')
AND owner NOT IN ('SYS','SYSTEM','XDB','WMSYS','MDSYS','CTXSYS',
'OUTLN','ORDSYS','WKSYS',
'QAS','TOAD','QUEST','SQLA','SQL_ANA','SQL_REP','ADMIN')
AND EXISTS (SELECT 1
FROM DBA_USERS
WHERE username = owner
AND account_status='OPEN')
AND NOT EXISTS (SELECT 1
FROM DBA_TAB_COLUMNS
WHERE OWNER = DBA_EXTENTS.owner
AND table_name = DBA_EXTENTS.segment_name
AND DATA_TYPE LIKE '%LONG%')
GROUP BY tablespace_name,owner,segment_name,partition_name,segment_type)
)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.27       0.26          0         64          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       33      1.84       9.64        697      57215        125         477
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       35      2.11       9.91        697      57279        125         477

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS

Rows Row Source Operation
------- ---------------------------------------------------
477 WINDOW SORT
477 SORT GROUP BY
25520 FILTER
25520 VIEW
25520 UNION-ALL
0 NESTED LOOPS
0 NESTED LOOPS
0 VIEW
0 UNION-ALL
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 HASH JOIN
0 NESTED LOOPS
0 TABLE ACCESS FULL OBJ#(14)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
0 VIEW
0 UNION-ALL
0 TABLE ACCESS FULL OBJ#(4)
0 TABLE ACCESS FULL OBJ#(230)
0 FILTER
0 TABLE ACCESS FULL OBJ#(5)
0 TABLE ACCESS FULL OBJ#(19)
0 TABLE ACCESS FULL OBJ#(234)
0 FILTER
0 TABLE ACCESS FULL OBJ#(156)
0 TABLE ACCESS FULL OBJ#(240)
0 FILTER
0 TABLE ACCESS FULL OBJ#(245)
0 TABLE ACCESS FULL OBJ#(258)
0 TABLE ACCESS CLUSTER OBJ#(16)
0 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 HASH JOIN
0 TABLE ACCESS FULL OBJ#(14)
0 TABLE ACCESS FULL OBJ#(15)
0 TABLE ACCESS CLUSTER OBJ#(16)
0 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS FULL OBJ#(14)
0 TABLE ACCESS CLUSTER OBJ#(16)
0 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 TABLE ACCESS BY INDEX ROWID OBJ#(17)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
0 TABLE ACCESS CLUSTER OBJ#(13)
0 INDEX UNIQUE SCAN OBJ#(9) (object id 9)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
25520 NESTED LOOPS
25520 NESTED LOOPS
477 VIEW
477 UNION-ALL
477 NESTED LOOPS
528 NESTED LOOPS
528 NESTED LOOPS
528 HASH JOIN
3748 NESTED LOOPS
3748 TABLE ACCESS FULL OBJ#(14)
3748 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
528 VIEW
528 UNION-ALL
0 TABLE ACCESS FULL OBJ#(4)
423 TABLE ACCESS FULL OBJ#(230)
0 FILTER
0 TABLE ACCESS FULL OBJ#(5)
0 TABLE ACCESS FULL OBJ#(19)
56 TABLE ACCESS FULL OBJ#(234)
0 FILTER
0 TABLE ACCESS FULL OBJ#(156)
49 TABLE ACCESS FULL OBJ#(240)
0 FILTER
0 TABLE ACCESS FULL OBJ#(245)
0 TABLE ACCESS FULL OBJ#(258)
528 TABLE ACCESS CLUSTER OBJ#(16)
528 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
528 TABLE ACCESS BY INDEX ROWID OBJ#(18)
528 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
477 TABLE ACCESS CLUSTER OBJ#(22)
528 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 HASH JOIN
0 TABLE ACCESS FULL OBJ#(14)
0 TABLE ACCESS FULL OBJ#(15)
0 TABLE ACCESS CLUSTER OBJ#(16)
0 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS FULL OBJ#(14)
0 TABLE ACCESS CLUSTER OBJ#(16)
0 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 TABLE ACCESS BY INDEX ROWID OBJ#(17)
0 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
25520 FIXED TABLE FIXED INDEX X$KTFBUE (ind:1)
25520 INDEX UNIQUE SCAN OBJ#(42) (object id 42)
8 TABLE ACCESS BY INDEX ROWID OBJ#(94)
104 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 TABLE ACCESS BY INDEX ROWID OBJ#(22)
8 INDEX UNIQUE SCAN OBJ#(44) (object id 44)
8 TABLE ACCESS BY INDEX ROWID OBJ#(94)
96 INDEX RANGE SCAN OBJ#(139) (object id 139)
8 TABLE ACCESS FULL OBJ#(280)
8 TABLE ACCESS FULL OBJ#(95)
8 TABLE ACCESS CLUSTER OBJ#(16)
8 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
8 TABLE ACCESS CLUSTER OBJ#(16)
8 INDEX UNIQUE SCAN OBJ#(7) (object id 7)
96 INDEX RANGE SCAN OBJ#(139) (object id 139)
0 FILTER
6644 NESTED LOOPS OUTER
6644 NESTED LOOPS OUTER
6644 NESTED LOOPS OUTER
6644 NESTED LOOPS OUTER
6644 NESTED LOOPS
630 NESTED LOOPS
96 TABLE ACCESS BY INDEX ROWID OBJ#(22)
96 INDEX UNIQUE SCAN OBJ#(44) (object id 44)
630 TABLE ACCESS BY INDEX ROWID OBJ#(18)
630 INDEX RANGE SCAN OBJ#(37) (object id 37)
6644 TABLE ACCESS CLUSTER OBJ#(21)
84 INDEX UNIQUE SCAN OBJ#(3) (object id 3)
0 TABLE ACCESS CLUSTER OBJ#(283)
6523 INDEX RANGE SCAN OBJ#(221) (object id 221)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX RANGE SCAN OBJ#(38) (object id 38)
0 TABLE ACCESS CLUSTER OBJ#(22)
0 INDEX UNIQUE SCAN OBJ#(11) (object id 11)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18)
0 INDEX UNIQUE SCAN OBJ#(36) (object id 36)
0 TABLE ACCESS CLUSTER OBJ#(4)
0 INDEX UNIQUE SCAN OBJ#(3) (object id 3)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      33        0.00          0.00
  db file sequential read                       592        0.15          7.16
  db file scattered read                         21        0.22          0.76
  SQL*Net message from client                    33        7.69          8.29
********************************************************************************



THE SAME SELECT IN 10G:


TKPROF: Release 10.2.0.2.0 - Production on Tue Feb 6 16:29:01 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: tsdwh_ora_19287.trc
Sort options: default

********************************************************************************
...

SELECT tablespace_name,owner,segment_name,partition_name,segment_type,size_mb,
ROUND(size_mb/1024,2) size_gb,sml,
row_number() OVER (PARTITION BY sml ORDER BY size_mb DESC) rn
FROM(
SELECT tablespace_name,owner,segment_name,partition_name,segment_type,ROUND(size_mb) size_mb,
(CASE WHEN size_mb <= 50 THEN 'SMALL'
WHEN size_mb BETWEEN 50 AND 500 THEN 'MEDIUM'
ELSE 'BIG'
END ) SML
FROM(SELECT tablespace_name,owner,segment_name,partition_name,segment_type,
SUM(bytes)/1024/1024 size_MB
FROM DBA_EXTENTS
WHERE segment_type IN ('TABLE PARTITION','TABLE SUBPARTITION','INDEX PARTITION')
AND owner NOT IN ('SYS','SYSTEM','XDB','WMSYS','MDSYS','CTXSYS',
'OUTLN','ORDSYS','WKSYS',
'QAS','TOAD','QUEST','SQLA','SQL_ANA','SQL_REP','ADMIN')
AND EXISTS (SELECT 1
FROM DBA_USERS
WHERE username = owner
AND account_status='OPEN')
AND NOT EXISTS (SELECT 1
FROM DBA_TAB_COLUMNS
WHERE OWNER = DBA_EXTENTS.owner
AND table_name = DBA_EXTENTS.segment_name
AND DATA_TYPE LIKE '%LONG%')
GROUP BY tablespace_name,owner,segment_name,partition_name,segment_type)
)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.41       0.42          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    239.43     234.48       3196   17730705        710           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    239.84     234.91       3196   17730705        710           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows Row Source Operation
------- ---------------------------------------------------
0 WINDOW SORT (cr=0 pr=0 pw=0 time=207 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=187 us)
609 FILTER (cr=17453921 pr=3166 pw=0 time=228882804 us)
609 VIEW DBA_EXTENTS (cr=17452882 pr=3166 pw=0 time=228875217 us)
609 UNION-ALL (cr=17452882 pr=3166 pw=0 time=228873998 us)
0 NESTED LOOPS (cr=77 pr=0 pw=0 time=534 us)
0 NESTED LOOPS (cr=77 pr=0 pw=0 time=529 us)
0 TABLE ACCESS FULL UET$ (cr=77 pr=0 pw=0 time=526 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
0 VIEW SYS_DBA_SEGS (cr=0 pr=0 pw=0 time=0 us)
0 UNION-ALL PARTITION (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER TS$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_TS# (cr=0 pr=0 pw=0 time=0 us)(object id 7)
0 TABLE ACCESS CLUSTER SEG$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN I_FILE#_BLOCK# (cr=0 pr=0 pw=0 time=0 us)(object id 9)
0 VIEW SYS_OBJECTS (cr=0 pr=0 pw=0 time=0 us)
0 UNION-ALL (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL TAB$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL TABPART$ (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL CLU$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL IND$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL INDPART$ (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL LOB$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL TABSUBPART$ (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL INDSUBPART$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL LOBFRAG$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER TS$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_TS# (cr=0 pr=0 pw=0 time=0 us)(object id 7)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL UNDO$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
0 TABLE ACCESS CLUSTER SEG$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=0 pr=0 pw=0 time=0 us)(object id 9)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER TS$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_TS# (cr=0 pr=0 pw=0 time=0 us)(object id 7)
0 TABLE ACCESS CLUSTER SEG$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN I_FILE#_BLOCK# (cr=0 pr=0 pw=0 time=0 us)(object id 9)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 TABLE ACCESS BY INDEX ROWID FILE$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
609 NESTED LOOPS (cr=17452805 pr=3166 pw=0 time=228869798 us)
609 NESTED LOOPS (cr=17452803 pr=3166 pw=0 time=228864924 us)
6466 FIXED TABLE FULL X$KTFBUE (cr=12307 pr=3196 pw=0 time=711865 us)
609 VIEW SYS_DBA_SEGS (cr=17716751 pr=0 pw=0 time=233651384 us)
57186 UNION-ALL PARTITION (cr=17716751 pr=0 pw=0 time=223725956 us)
57186 FILTER (cr=12280516 pr=0 pw=0 time=177879547 us)
57186 NESTED LOOPS OUTER (cr=12280516 pr=0 pw=0 time=177461715 us)
57186 NESTED LOOPS (cr=12221661 pr=0 pw=0 time=176472838 us)
57186 NESTED LOOPS (cr=12106229 pr=0 pw=0 time=175653474 us)
57186 HASH JOIN (cr=12105169 pr=0 pw=0 time=175292570 us)
5092066 NESTED LOOPS (cr=5181436 pr=0 pw=0 time=61304033 us)
6466 TABLE ACCESS CLUSTER TS$ (cr=19398 pr=0 pw=0 time=141438 us)
6466 INDEX UNIQUE SCAN I_TS# (cr=6466 pr=0 pw=0 time=58148 us)(object id 7)
5092066 TABLE ACCESS CLUSTER SEG$ (cr=5162038 pr=0 pw=0 time=56095853 us)
5138594 INDEX RANGE SCAN I_FILE#_BLOCK# (cr=23444 pr=0 pw=0 time=5198079 us)(object id 9)
3950115 VIEW SYS_OBJECTS (cr=6924015 pr=0 pw=0 time=100180481 us)
3950115 UNION-ALL (cr=6924015 pr=0 pw=0 time=92273258 us)
0 TABLE ACCESS FULL TAB$ (cr=3400590 pr=0 pw=0 time=60502555 us)
3077340 TABLE ACCESS FULL TABPART$ (cr=64650 pr=0 pw=0 time=144117 us)
0 FILTER (cr=0 pr=0 pw=0 time=13368 us)
0 TABLE ACCESS FULL CLU$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL IND$ (cr=3400590 pr=0 pw=0 time=30003137 us)
724080 TABLE ACCESS FULL INDPART$ (cr=19395 pr=0 pw=0 time=130250 us)
0 FILTER (cr=0 pr=0 pw=0 time=13136 us)
0 TABLE ACCESS FULL LOB$ (cr=0 pr=0 pw=0 time=0 us)
148695 TABLE ACCESS FULL TABSUBPART$ (cr=19395 pr=0 pw=0 time=114551 us)
0 FILTER (cr=0 pr=0 pw=0 time=11798 us)
0 TABLE ACCESS FULL INDSUBPART$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL LOBFRAG$ (cr=19395 pr=0 pw=0 time=142111 us)
57186 INDEX UNIQUE SCAN I_FILE2 (cr=1060 pr=0 pw=0 time=229385 us)(object id 42)
57186 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=115432 pr=0 pw=0 time=687102 us)
57186 INDEX UNIQUE SCAN I_OBJ1 (cr=58246 pr=0 pw=0 time=318395 us)(object id 36)
57186 TABLE ACCESS CLUSTER USER$ (cr=58855 pr=0 pw=0 time=804021 us)
57186 INDEX UNIQUE SCAN I_USER# (cr=1060 pr=0 pw=0 time=217654 us)(object id 11)
0 FILTER (cr=255081 pr=0 pw=0 time=3597572 us)
0 NESTED LOOPS OUTER (cr=255081 pr=0 pw=0 time=3583648 us)
0 NESTED LOOPS (cr=255081 pr=0 pw=0 time=3566909 us)
174555 NESTED LOOPS (cr=45256 pr=0 pw=0 time=2039440 us)
174555 MERGE JOIN CARTESIAN (cr=38790 pr=0 pw=0 time=963193 us)
6465 TABLE ACCESS CLUSTER TS$ (cr=19395 pr=0 pw=0 time=146921 us)
6465 INDEX UNIQUE SCAN I_TS# (cr=6465 pr=0 pw=0 time=56913 us)(object id 7)
174555 BUFFER SORT (cr=19395 pr=0 pw=0 time=656486 us)
174555 TABLE ACCESS FULL UNDO$ (cr=19395 pr=0 pw=0 time=129895 us)
174555 INDEX UNIQUE SCAN I_FILE2 (cr=6466 pr=0 pw=0 time=704837 us)(object id 42)
0 TABLE ACCESS CLUSTER SEG$ (cr=209825 pr=0 pw=0 time=1602517 us)
28804 INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=181021 pr=0 pw=0 time=833082 us)(object id 9)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 NESTED LOOPS (cr=5181154 pr=0 pw=0 time=45131839 us)
0 FILTER (cr=5181154 pr=0 pw=0 time=45117049 us)
0 NESTED LOOPS OUTER (cr=5181154 pr=0 pw=0 time=45103420 us)
0 NESTED LOOPS (cr=5181154 pr=0 pw=0 time=45087612 us)
6465 TABLE ACCESS CLUSTER TS$ (cr=19395 pr=0 pw=0 time=137151 us)
6465 INDEX UNIQUE SCAN I_TS# (cr=6465 pr=0 pw=0 time=46610 us)(object id 7)
0 TABLE ACCESS CLUSTER SEG$ (cr=5161759 pr=0 pw=0 time=44928079 us)
5138317 INDEX RANGE SCAN I_FILE#_BLOCK# (cr=23442 pr=0 pw=0 time=5195822 us)(object id 9)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 TABLE ACCESS BY INDEX ROWID FILE$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
609 INDEX UNIQUE SCAN I_FILE2 (cr=2 pr=0 pw=0 time=3455 us)(object id 42)
1 MERGE JOIN CARTESIAN (cr=19 pr=0 pw=0 time=769 us)
1 NESTED LOOPS (cr=17 pr=0 pw=0 time=696 us)
1 HASH JOIN (cr=15 pr=0 pw=0 time=678 us)
1 NESTED LOOPS (cr=12 pr=0 pw=0 time=410 us)
1 NESTED LOOPS (cr=10 pr=0 pw=0 time=385 us)
1 HASH JOIN OUTER (cr=8 pr=0 pw=0 time=329 us)
1 NESTED LOOPS (cr=5 pr=0 pw=0 time=54 us)
1 TABLE ACCESS BY INDEX ROWID USER$ (cr=2 pr=0 pw=0 time=20 us)
1 INDEX UNIQUE SCAN I_USER1 (cr=1 pr=0 pw=0 time=8 us)(object id 44)
1 TABLE ACCESS FULL PROFNAME$ (cr=3 pr=0 pw=0 time=30 us)
0 TABLE ACCESS FULL RESOURCE_GROUP_MAPPING$ (cr=3 pr=0 pw=0 time=21 us)
1 TABLE ACCESS BY INDEX ROWID PROFILE$ (cr=2 pr=0 pw=0 time=50 us)
17 INDEX RANGE SCAN I_PROFILE (cr=1 pr=0 pw=0 time=43 us)(object id 139)
1 TABLE ACCESS CLUSTER TS$ (cr=2 pr=0 pw=0 time=21 us)
1 INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=5 us)(object id 7)
1 TABLE ACCESS FULL USER_ASTATUS_MAP (cr=3 pr=0 pw=0 time=23 us)
1 TABLE ACCESS CLUSTER TS$ (cr=2 pr=0 pw=0 time=15 us)
1 INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=8 us)(object id 7)
1 BUFFER SORT (cr=2 pr=0 pw=0 time=66 us)
1 TABLE ACCESS BY INDEX ROWID PROFILE$ (cr=2 pr=0 pw=0 time=42 us)
17 INDEX RANGE SCAN I_PROFILE (cr=1 pr=0 pw=0 time=39 us)(object id 139)
0 FILTER (cr=1020 pr=0 pw=0 time=13186 us)
281 NESTED LOOPS OUTER (cr=1020 pr=0 pw=0 time=13293 us)
281 NESTED LOOPS OUTER (cr=1020 pr=0 pw=0 time=11571 us)
281 NESTED LOOPS OUTER (cr=1020 pr=0 pw=0 time=9575 us)
281 NESTED LOOPS OUTER (cr=726 pr=0 pw=0 time=5836 us)
281 NESTED LOOPS (cr=432 pr=0 pw=0 time=3507 us)
64 NESTED LOOPS (cr=62 pr=0 pw=0 time=836 us)
12 TABLE ACCESS BY INDEX ROWID USER$ (cr=24 pr=0 pw=0 time=182 us)
12 INDEX UNIQUE SCAN I_USER1 (cr=12 pr=0 pw=0 time=88 us)(object id 44)
64 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=38 pr=0 pw=0 time=587 us)
64 INDEX RANGE SCAN I_OBJ2 (cr=24 pr=0 pw=0 time=205 us)(object id 37)
281 TABLE ACCESS CLUSTER COL$ (cr=370 pr=0 pw=0 time=2083 us)
12 INDEX UNIQUE SCAN I_OBJ# (cr=77 pr=0 pw=0 time=392 us)(object id 3)
0 TABLE ACCESS BY INDEX ROWID COLTYPE$ (cr=294 pr=0 pw=0 time=2004 us)
0 INDEX UNIQUE SCAN I_COLTYPE2 (cr=294 pr=0 pw=0 time=1272 us)(object id 285)
281 INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=294 pr=0 pw=0 time=2422 us)(object id 221)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=1257 us)
0 INDEX RANGE SCAN I_OBJ3 (cr=0 pr=0 pw=0 time=527 us)(object id 38)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=1118 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=493 us)(object id 11)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS CLUSTER TAB$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ# (cr=0 pr=0 pw=0 time=0 us)(object id 3)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 3196 0.00 0.09
SQL*Net break/reset to client 1 0.00 0.00
********************************************************************************
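As an aside for readers working through long wait-event sections like the one above: a small script can total and rank the waits so the dominant event stands out. This is a hypothetical helper, not part of tkprof; it assumes the fixed-width layout shown in the report above.

```python
import re

# Match one wait-event line: event name, times waited, max wait, total waited.
# Event names can contain single spaces, so we split on runs of 2+ spaces.
WAIT_LINE = re.compile(r"^\s*(.+?)\s{2,}(\d+)\s+([\d.]+)\s+([\d.]+)\s*$")

def summarize_waits(section: str):
    """Return {event: (times_waited, total_waited)} sorted by total wait, descending."""
    waits = {}
    for line in section.splitlines():
        m = WAIT_LINE.match(line)
        if m:
            event, times, _max_wait, total = m.groups()
            waits[event] = (int(times), float(total))
    return dict(sorted(waits.items(), key=lambda kv: -kv[1][1]))

# The three wait lines from the report above:
sample = """\
SQL*Net message to client                       1        0.00          0.00
db file sequential read                      3196        0.00          0.09
SQL*Net break/reset to client                   1        0.00          0.00
"""
# 'db file sequential read' dominates: 0.09s across 3196 waits.
```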

Answered?

yoav ben moha, February 10, 2007 - 5:02 am UTC

Hi Tom,
I know that you can't answer all the questions,
but looking at "Home" --> "Your Questions" --> "Status":
"
Answered, Publish 6 Feb 2007 09:47AM (last commented)
"
I didn't find an answer to my question.
Thanks
Tom Kyte
February 12, 2007 - 10:14 am UTC

not sure what you mean here.

Always helpful: High cpu times and elapsed times

Ravi, February 12, 2007 - 12:52 pm UTC

I really hope you can answer this :)
My tkprof output shows (8.1.7.4 instance):

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 168.87 1895.33 0 0 0 0
Execute 2 234.25 1534.16 0 0 1 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 403.12 3429.49 0 0 1 2

During periods of heavy concurrent activity (plenty of concurrent programs run from Europe and the US), the CPU and elapsed times are very high.

But, when there is less activity, the same report runs fast even though the report didn't change.

begin fnd_global.initialize(:a,:b,:c,:d,:e,:f,:g,:h,:i,:j,:k);
fnd_profile.put('MFG_ORGANIZATION_ID', :l);
fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :m); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.52 0.51 0 0 0 0
Execute 2 0.50 0.50 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1.02 1.01 0 0 0 2

The Applications report is using bind variables; I checked. The problem occurs consistently during heavy concurrent activity, though.

The problem occurs on other FND packages also. I'm at a total loss - statspack wasn't that helpful, either.

What am I missing? Could you shed some light?

Thanks!
Ravi.

Tom Kyte
February 12, 2007 - 1:14 pm UTC

where are the query and current numbers??

I would guess "consistent reads go up" due to you reading data that is being modified a lot.

can you use statspack - which will show you the average number of logical IO's per execution during these periods - are they close in value?

While executing the package fnd_global

Ravi, February 12, 2007 - 1:19 pm UTC

begin
fnd_global.initialize(:a,:b,:c,:d,:e,:f,:g,:h,:i,:j,:k);
fnd_profile.put('MFG_ORGANIZATION_ID', :l);
fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :m);
end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 168.87 1895.33 0 0 0 0
Execute 2 234.25 1534.16 0 0 1 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 403.12 3429.49 0 0 1 2

Misses in library cache during parse: 2
Misses in library cache during execute: 2
Optimizer goal: RULE
Parsing user id: 87 (APPS)

The current numbers are where the cpu times and the elapsed times have gone down (because Europe is currently logged off).

I will run statspack during the 'long run' times. I don't like statspack in 8i; 9i's statspack is much better. I will look for logical IOs per execution.

Thanks!
Ravi.

statspack is for the whole database

Ravi, February 12, 2007 - 1:59 pm UTC

Hi, Tom,
I ran statspack. I don't know how to correlate it back to the report that runs FND_GLOBAL and FND_PROFILE (previous update) forever, because statspack covers the entire database.

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
db file sequential read 45,596,647 13,351,120 41.31
latch free 8,497,060 10,920,631 33.79
db file scattered read 7,066,922 4,474,679 13.84
library cache pin 575,830 1,109,760 3.43
buffer busy waits 871,072 953,255 2.95

Do you need any other details that I may have missed? I appreciate your help.

Thanks!
Ravi.

Tom Kyte
February 12, 2007 - 2:49 pm UTC

look at the sql report.

FND_GLOBAL in Sharable memory of the statspack report

Ravi, February 12, 2007 - 2:11 pm UTC

Hi, Tom,
My intention is to provide more detail about the problem, not to bother you.

The packages that show in the tkprof output also appear in the sharable memory section of the statspack report.

Sharable Mem (b) Executions % Total Hash Value
---------------- ------------ ------- ------------
10,329,121,601 42,996 938.8 617989494
begin fnd_global.initialize(:a,:b,:c,:d,:e,:f,:g,:h,:i,:j,:k); f
nd_profile.put('MFG_ORGANIZATION_ID', :l); fnd_profile.put('MFG_
CHART_OF_ACCOUNTS_ID', :m); end;

4,862,937,504 13,133 442.0 3361540009
declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_I
D, :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_
int(X0rv); end;

What should be my action plan?

Thanks!
Ravi.

Tom Kyte
February 12, 2007 - 2:49 pm UTC

look at your high load sql, the ones that sometimes run super fast and other times not.

statspack report: SQL Ordered by Gets

Ravi, February 13, 2007 - 10:18 am UTC

Statspack report (these statements are from APPS 11.0.3; I don't know which concurrent program is calling this code):

SQL ordered by Gets for DB: PROD Instance: PROD Snaps: 761 -771
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
24,959,318 13 1,919,947.5 6.2 1226666995
SELECT st.short_text oh_note, ad.pk1_value oh_header_id_char F
ROM fnd_documents_short_text st, fnd_documents_tl tl, fnd_do
cuments d, fnd_attached_documents ad, so_note_usages nu, so
_note_usage_attributes nua WHERE ( tl.language = substr(useren
v('LANGUAGE'), 1, instr(userenv('LANGUAGE'), '_') - 1) AND t

23,038,914 12 1,919,909.5 5.7 215712561
SELECT st.short_text of_note, ad.pk1_value of_header_id_char F
ROM fnd_documents_short_text st, fnd_documents_tl tl, fnd_do
cuments d, fnd_attached_documents ad, so_note_usages nu, so
_note_usage_attributes nua WHERE ( tl.language = substr(useren
v('LANGUAGE'), 1, instr(userenv('LANGUAGE'), '_') - 1) AND t

13,439,374 7 1,919,910.6 3.3 3303211093
SELECT st.short_text line_note, ad.pk1_value nl_header_id_char
, ad.pk2_value nl_line_id_char FROM fnd_documents_short_text s
t, fnd_documents_tl tl, fnd_documents d, fnd_attached_docum
ents ad, so_note_usages nu, so_note_usage_attributes nua WH
ERE ( tl.language = substr(userenv('LANGUAGE'), 1, instr(user

3,923,696 1,659 2,365.1 1.0 2710389309
select request_id ,phase_code from fnd_concurrent_requests wher
e parent_request_id=:b0

3,908,775 3,969 984.8 1.0 2054874295
select O.PROFILE_OPTION_NAME, V.PROFILE_OPTION_VALUE, V.L
EVEL_ID from FND_PROFILE_OPTIONS O, FND_PROFILE_OPTION_VALUES
V where O.START_DATE_ACTIVE <= SYSDATE and (nvl(O.END_DATE_A
CTIVE,SYSDATE) >= SYSDATE) and ((V.LEVEL_ID = 10001 and V.LEV
EL_VALUE = 0) or (V.LEVEL_ID = 10002 and V.LEVEL_VALUE = :a

3,123,652 1 3,123,652.0 0.8 1336664238
SELECT ROW_ID,HEADER_ID,CUSTOMER_NAME,SHIP_TO_LOCATION,SHIP_TO_C
USTOMER_NAME,SHIP_TO_GSA_INDICATOR,SHIP_TO_ADDRESS1,SHIP_TO_ADDR
ESS2,SHIP_TO_ADDRESS3,SHIP_TO_ADDRESS4,SHIP_TO_ADDRESS5,SHIP_TO_

Could you tell me how I should proceed? Thank you for all your help.

Tom Kyte
February 13, 2007 - 10:23 am UTC

I would like you to take the sql from "normal" and "busy" and you compare the logical IO's to see if during periods of lots of concurrent activity - the IO's go way up for the same queries.

That is all.
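To make that comparison concrete: divide buffer gets by executions for the same statement (same hash value) in each statspack interval and compare. A small sketch, using two figures taken from the statspack extract above purely as illustration (they are different statements, not the same statement at two times):

```python
def gets_per_exec(buffer_gets, executions):
    """Average logical I/Os per execution for one statement."""
    return buffer_gets / executions if executions else 0.0

# Illustrative values from the "SQL ordered by Gets" section above:
moderate = gets_per_exec(3_908_775, 3_969)   # roughly 985 gets/exec
heavy    = gets_per_exec(24_959_318, 13)     # roughly 1.9M gets/exec

# If the same statement's gets/exec climbs sharply during busy periods,
# it is doing far more logical I/O then -- e.g. consistent-read work
# against heavily modified data -- not merely waiting longer for CPU.
```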

Puzzled by elapsed time

Steve, March 02, 2007 - 7:55 am UTC

I have a puzzle about the elapsed time in an extract of a trace file. I ran a trace against a particular user and, among other things, here's an extract:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1    312.50     310.00          0          0          0           0
Execute      1    312.50     320.00          0         53          1           0
Fetch        2  18281.25   57970.00      17838      43005          0          12
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4  18906.25   58600.00      17838      43058          1          12


If I understand correctly, the elapsed time was 57,970 seconds, or 16+ hours. As I was only running the trace for a couple of hours, how can this be the case?
Tom Kyte
March 04, 2007 - 5:45 pm UTC

did you use 8i tkprof on 9i and above trace files perhaps?

Steve, March 05, 2007 - 6:18 am UTC

Spot on - thank you.
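For readers puzzled by the scale of the error: 8i trace files record times in centiseconds, so 8i tkprof divides the raw values by 100 to print seconds, while 9i and later record microseconds. Feeding a 9i trace to an 8i tkprof therefore inflates every reported time by a factor of 10,000. A quick sanity check (illustrative Python, not Oracle tooling):

```python
def actual_seconds(printed_seconds):
    """Recover real elapsed seconds from an 8i-tkprof report run on a 9i+ trace.

    8i tkprof printed raw_value / 100 as "seconds", but the raw value
    was actually microseconds, so undo the /100 and convert properly.
    """
    raw_microseconds = printed_seconds * 100   # undo tkprof's centisecond scaling
    return raw_microseconds / 1_000_000        # microseconds -> seconds

# The 57,970 "seconds" in the fetch line above is really about 5.8 seconds,
# and the 18,281 CPU "seconds" is really about 1.8 seconds.
```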

A reader, March 09, 2007 - 11:18 am UTC

Can you help? I've done a tkprof for a single query; I've pasted the explain plan below. The tkprof summary says it waited a large % of its time on something called a 'direct path write'. Can you tell me what this wait is and whether it can be reduced?

Rows Row Source Operation
------- ---------------------------------------------------
4 SORT UNIQUE
4 UNION-ALL
4 SORT GROUP BY
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
4128 HASH JOIN
43017 TABLE ACCESS FULL OBJ#(279141)
5040 MERGE JOIN
4 SORT JOIN
4 NESTED LOOPS
4 NESTED LOOPS
1 TABLE ACCESS FULL OBJ#(279102)
4 TABLE ACCESS BY INDEX ROWID OBJ#(382194)
4 INDEX RANGE SCAN OBJ#(458625) (object id 458625)
4 TABLE ACCESS BY INDEX ROWID OBJ#(382188)
4 INDEX UNIQUE SCAN OBJ#(586044) (object id 586044)
5040 SORT JOIN
14482 TABLE ACCESS FULL OBJ#(279104)
8 INDEX UNIQUE SCAN OBJ#(586129) (object id 586129)
0 NESTED LOOPS
8 INDEX RANGE SCAN OBJ#(586129) (object id 586129)
0 TABLE ACCESS BY INDEX ROWID OBJ#(279141)
8 INDEX UNIQUE SCAN OBJ#(375697) (object id 375697)
8 TABLE ACCESS BY INDEX ROWID OBJ#(371260)
8 INDEX UNIQUE SCAN OBJ#(371261) (object id 371261)
8 TABLE ACCESS BY INDEX ROWID OBJ#(279133)
8 INDEX UNIQUE SCAN OBJ#(371278) (object id 371278)
8 INDEX RANGE SCAN OBJ#(586124) (object id 586124)
0 SORT GROUP BY
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
1 TABLE ACCESS FULL OBJ#(279102)
0 TABLE ACCESS BY INDEX ROWID OBJ#(279141)
0 INDEX RANGE SCAN OBJ#(375696) (object id 375696)
0 TABLE ACCESS BY INDEX ROWID OBJ#(279133)
0 INDEX UNIQUE SCAN OBJ#(371278) (object id 371278)
0 TABLE ACCESS BY INDEX ROWID OBJ#(371260)
0 INDEX UNIQUE SCAN OBJ#(371261) (object id 371261)
0 TABLE ACCESS BY INDEX ROWID OBJ#(382194)
0 INDEX RANGE SCAN OBJ#(458627) (object id 458627)
0 TABLE ACCESS BY INDEX ROWID OBJ#(382188)
0 INDEX UNIQUE SCAN OBJ#(586044) (object id 586044)
0 INDEX RANGE SCAN OBJ#(586124) (object id 586124)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
4 SORT (UNIQUE)
4 UNION-ALL
4 SORT (GROUP BY)
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
4128 HASH JOIN
43017 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'BUS_EVENT_STEPS'
5040 MERGE JOIN
4 SORT (JOIN)
4 NESTED LOOPS
4 NESTED LOOPS
1 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PARTY_ROLES'
4 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'ENTITLEMENT_INTERESTS'
4 INDEX (RANGE SCAN) OF 'EI_PARR_ID_IDX'
(NON-UNIQUE)
4 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'ENTITLEMENT_REGISTER'
4 INDEX (UNIQUE SCAN) OF 'ER_PK' (UNIQUE)
5040 SORT (JOIN)
14482 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'OTHER_BUS_EVENTS'
8 INDEX (UNIQUE SCAN) OF 'ERV_PK' (UNIQUE)
0 NESTED LOOPS
8 INDEX (RANGE SCAN) OF 'ERV_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'BUS_EVENT_STEPS'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'BES_PK' (UNIQUE)
8 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROCESSES'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'P_PK'
(UNIQUE)
8 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'STEPS'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'S_PK'
(UNIQUE)
8 INDEX (RANGE SCAN) OF 'ERT_PK' (UNIQUE)
0 SORT (GROUP BY)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
1 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PARTY_ROLES'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'BUS_EVENT_STEPS'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'BES_PARR_IDX' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'STEPS'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'S_PK'
(UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROCESSES'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'P_PK'
(UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ENTITLEMENT_INTERESTS'
0 INDEX (RANGE SCAN) OF 'EI_OBE_ID_IDX' (NON-UNIQUE)

0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ENTITLEMENT_REGISTER'
0 INDEX (UNIQUE SCAN) OF 'ER_PK' (UNIQUE)
0 INDEX (RANGE SCAN) OF 'ERT_PK' (UNIQUE)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 321 0.10 1.60
db file sequential read 20 0.00 0.02
direct path write 43 0.16 4.06
direct path read 43 0.00 0.00
latch free 1 0.00 0.00
SQL*Net more data to client 1 0.00 0.00
SQL*Net message from client 2 2.78 2.80
********************************************************************************
Tom Kyte
March 12, 2007 - 2:48 pm UTC

write to temp - looks like "disk sorts"

A reader, March 13, 2007 - 6:41 am UTC

Thanks

From the previous posting, that is a copy and paste of the original tkprof output. But why is the RBO using a hash join? I think it shouldn't be.

1) Is it a feature, or is the ORDERED hint somehow forcing it?
2) If it's the hint that is doing it and it is wrong, what are the 'safe' hints to use with the RBO?

The original query:

SELECT MIN(entitlement_interests.created_date) ei_created_date,
bus_event_steps.created_date bes_created_date,
DECODE(bes_ps_p_id, 517, obe_start_date, ei_start_date) effective_date,
ert_rt_code entitlement_type,
s_id,
p_id,
SUBSTR(p_description, 1, 40)||': '||SUBSTR(s_description, 1, 40) step_description,
SUM(NVL(er_number,0)) no_of_entitlements,
DECODE( bes_ps_s_id, 5019, transfer_entitlements_qry.get_overall_erv_rate_under_bes(er_id, bes_id),
5020, transfer_entitlements_qry.get_overall_erv_rate_under_bes(er_id, bes_id),
transfer_entitlements_qry.get_overall_erv_rate3(er_id, obe_id)) overall_erv_rate,
MAX(DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.bcs_reduction), 'TRUE', 'SBCS')) sbcs,
MAX(DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.dairy_contribution), 'TRUE', 'DAIRY')) dairy,
MAX(DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.nat_reserve_cont), 'TRUE', 'NAT_RESERVE')) nat_reserve,
MAX(DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.scaleback), 'TRUE', 'SCALEBACK')) scaleback,
MAX(transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.create_entitlement)) creation_erv_rate,
MAX(transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.bcs_reduction)) bcs_erv_rate,
MAX(transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.dairy_contribution)) dairy_erv_rate,
MAX(transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.nat_reserve_cont)) nat_reserve_cont_erv_rate,
MAX(transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.scaleback)) scaleback_erv_rate,
bes_reason_code,
bus_event_steps.created_by,
bes_id,
parr_id,
bes_obe_id,
entitlements_qry.get_end_dated_value(bes_id, er_id) end_dated_value,
parr_brn brn
FROM party_roles,
entitlement_interests,
entitlement_register,
entitlement_register_types,
entitlement_register_values erv1,
bus_event_steps,
other_bus_events,
processes,
steps
WHERE er_id = ei_er_id
AND ert_er_id = er_id
AND erv1.erv_er_id = er_id
AND obe_id = bes_obe_id
AND bes_id = erv1.erv_bes_id
AND bes_ps_p_id = p_id
AND s_id = bes_ps_s_id
AND parr_id = ei_parr_id
AND parr_brn = NVL(156472, parr_brn)
AND ei_eit_code = 'EROWN'
AND ( (bes_ps_p_id = 500 AND bes_ps_s_id IN (5000, 5002)) --create entitlements
OR (bes_ps_p_id = 501 AND bes_ps_s_id = 5000) -- AND er_end_date IS NULL)
OR (bes_ps_p_id = 501 AND bes_ps_s_id = 5002)
OR (bes_ps_p_id = 502 AND bes_ps_s_id = 5003) --nat res cont
OR (bes_ps_p_id = 503 AND bes_ps_s_id = 5006) --scaleback
OR (bes_ps_p_id = 504 AND bes_ps_s_id = 5005) --sbcs
OR (bes_ps_p_id = 505 AND bes_ps_s_id IN (5000, 5002)) --return
OR (bes_ps_p_id = 507 AND bes_ps_s_id IN (5000, 5002)) --auto change
OR (bes_ps_p_id = 514 AND bes_ps_s_id IN (5000, 5002)) --recalc
OR (bes_ps_p_id = 515 AND bes_ps_s_id = 5013) --dairy
OR (bes_ps_p_id = 517 AND bes_ps_s_id IN (5000, 5002, 5014, 5124)) --nat res including adj
OR (bes_ps_p_id = 519 AND bes_ps_s_id IN (5000, 5002)) --change ent type
OR (bes_ps_p_id = 518 AND bes_ps_s_id = 5002) --failed establishment
OR (bes_ps_p_id IN ( 501, 515) AND bes_ps_s_id = 5020 ) -- Dairy 2006
OR (bes_ps_p_id IN ( 501, 502) AND bes_ps_s_id = 5019 ) -- NR cont 2006
)
AND NOT EXISTS (SELECT /*+ordered*/ 1
FROM entitlement_register_values erv2,
bus_event_steps
WHERE erv2.erv_er_id = erv1.erv_er_id
AND erv2.erv_bes_id = bes_id
AND bes_ps_p_id = wkfl_constants.p_process_sfps_claim
AND bes_ps_s_id IN (wkfl_constants.s_entitlements_used, wkfl_constants.s_entitlements_claimed))
AND entitlement_interests.ei_obe_id <= obe_id
AND bes_id > 0
AND bes_id <= 9999999999
GROUP BY ert_rt_code,
SUBSTR(p_description, 1, 40)||': '||SUBSTR(s_description, 1, 40),
bes_reason_code,
bus_event_steps.created_date,
DECODE(bes_ps_p_id, 517, obe_start_date, ei_start_date),
bus_event_steps.created_by,
bes_id,
parr_id,
bes_obe_id,
s_id,
p_id,
DECODE( bes_ps_s_id, 5019, transfer_entitlements_qry.get_overall_erv_rate_under_bes(er_id, bes_id),
5020, transfer_entitlements_qry.get_overall_erv_rate_under_bes(er_id, bes_id),
transfer_entitlements_qry.get_overall_erv_rate3(er_id, obe_id)),
entitlements_qry.get_end_dated_value(bes_id, er_id),
parr_brn
UNION
SELECT MIN(entitlement_interests.created_date) ei_created_date,
bus_event_steps.created_date,
ei_start_date effective_date,
ert_rt_code entitlement_type,
s_id,
p_id,
case when bes_ps_s_id = wkfl_constants.s_trn_ent_cap_transferor then 'ETU Transfer Out'
when bes_ps_s_id = wkfl_constants.s_trn_ent_cap_transferee then 'ETU Transfer In'
else SUBSTR(p_description, 1, 40)||': '||SUBSTR(s_description, 1, 40)
end step_description,
SUM(NVL(er_number,0)) no_of_entitlements,
NVL(transfer_entitlements_qry.get_overall_erv_rate3(er_id, ei_obe_id), 0) overall_erv_rate,
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.bcs_reduction), 'TRUE', 'SBCS') sbcs,
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.dairy_contribution), 'TRUE', 'DAIRY') dairy,
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.nat_reserve_cont), 'TRUE', 'NAT_RESERVE') nat_reserve,
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.scaleback), 'TRUE', 'SCALEBACK') scaleback,
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.create_entitlement) creation_erv_rate,
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.bcs_reduction) bcs_erv_rate,
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.dairy_contribution) dairy_erv_rate,
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.nat_reserve_cont) nat_reserve_cont_erv_rate,
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.scaleback) scaleback_erv_rate,
bes_reason_code,
bus_event_steps.created_by,
bes_id,
bes_parr_id parr_id,
bes_obe_id,
entitlements_qry.get_end_dated_value(bes_id, er_id) end_dated_value,
parr_brn brn
FROM entitlement_interests,
entitlement_register,
entitlement_register_types,
bus_event_steps,
processes,
steps,
party_roles
WHERE ei_er_id = er_id
AND ( er_end_date IS NULL OR
trunc(er_end_date) >= entitlements_qry.get_scheme_start_date(2005) )
AND ert_er_id = er_id
AND bes_obe_id = ei_obe_id
AND parr_id = bes_parr_id
AND parr_brn = NVL(156472, parr_brn)
AND s_id = bes_ps_s_id
AND p_id = bes_ps_p_id
AND bes_ps_p_id in ( wkfl_constants.process_transfer_ents,
wkfl_constants.p_trn_ent_req_for_transfer )
AND bes_ps_s_id in ( wkfl_constants.ent_transfer_from,
wkfl_constants.ent_transfer_to,
wkfl_constants.s_trn_ent_cap_transferor,
wkfl_constants.s_trn_ent_cap_transferee )
AND bes_end_date IS NULL
AND bes_id > 0
AND bes_id <= 9999999999
GROUP BY bus_event_steps.created_date,
ei_start_date,
ert_rt_code,
s_id,
p_id,
case when bes_ps_s_id = wkfl_constants.s_trn_ent_cap_transferor then 'ETU Transfer Out'
when bes_ps_s_id = wkfl_constants.s_trn_ent_cap_transferee then 'ETU Transfer In'
else SUBSTR(p_description, 1, 40)||': '||SUBSTR(s_description, 1, 40)
end,
NVL(transfer_entitlements_qry.get_overall_erv_rate3(er_id, ei_obe_id), 0),
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.bcs_reduction), 'TRUE', 'SBCS'),
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.dairy_contribution), 'TRUE', 'DAIRY'),
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.nat_reserve_cont), 'TRUE', 'NAT_RESERVE'),
DECODE(transfer_entitlements_qry.check_for_bes_step(ei_er_id, wkfl_constants.scaleback), 'TRUE', 'SCALEBACK'),
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.create_entitlement),
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.bcs_reduction),
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.dairy_contribution),
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.nat_reserve_cont),
transfer_entitlements_qry.sum_erv_rate_for_bes_step(er_id, wkfl_constants.scaleback),
bes_reason_code,
bus_event_steps.created_by,
bes_id,
bes_parr_id,
bes_obe_id,
entitlements_qry.get_end_dated_value(bes_id, er_id),
parr_brn

Tom Kyte
March 13, 2007 - 11:32 am UTC

the RBO isn't involved here.

why do you think it is?


1 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PARTY_ROLES'

you have some analyzed tables..

A reader, March 13, 2007 - 11:47 am UTC

If you look at the tkprof output, it says it is using the RBO; this copy and paste is from the previous posting. It also says it is doing a hash join:

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
4 SORT (UNIQUE)
4 UNION-ALL
4 SORT (GROUP BY)
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
8 NESTED LOOPS
4128 HASH JOIN
43017 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'BUS_EVENT_STEPS'
5040 MERGE JOIN
4 SORT (JOIN)
4 NESTED LOOPS
4 NESTED LOOPS
1 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PARTY_ROLES'
4 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'ENTITLEMENT_INTERESTS'
4 INDEX (RANGE SCAN) OF 'EI_PARR_ID_IDX'
(NON-UNIQUE)
4 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'ENTITLEMENT_REGISTER'
4 INDEX (UNIQUE SCAN) OF 'ER_PK' (UNIQUE)
5040 SORT (JOIN)
14482 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'OTHER_BUS_EVENTS'
8 INDEX (UNIQUE SCAN) OF 'ERV_PK' (UNIQUE)
0 NESTED LOOPS
8 INDEX (RANGE SCAN) OF 'ERV_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'BUS_EVENT_STEPS'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'BES_PK' (UNIQUE)
8 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROCESSES'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'P_PK'
(UNIQUE)
8 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'STEPS'
8 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'S_PK'
(UNIQUE)
8 INDEX (RANGE SCAN) OF 'ERT_PK' (UNIQUE)
0 SORT (GROUP BY)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
1 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'PARTY_ROLES'
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'BUS_EVENT_STEPS'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'BES_PARR_IDX' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID)
OF 'STEPS'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'S_PK'
(UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'PROCESSES'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'P_PK'
(UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ENTITLEMENT_INTERESTS'
0 INDEX (RANGE SCAN) OF 'EI_OBE_ID_IDX' (NON-UNIQUE)

0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ENTITLEMENT_REGISTER'
0 INDEX (UNIQUE SCAN) OF 'ER_PK' (UNIQUE)
0 INDEX (RANGE SCAN) OF 'ERT_PK' (UNIQUE)

Tom Kyte
March 13, 2007 - 3:26 pm UTC

i cut and pasted from the tkprof, did you see my cut and paste, that would indicate "not the rbo"

you are using the CBO - look at your tables, you probably have an index organized table or partitioned table in there, they force the CBO into action.


A reader, March 13, 2007 - 4:26 pm UTC

If our default optimizer is the RBO, why would the presence of an index organized table or partially analyzed tables force the CBO to kick in?
I would have thought we would always be using the RBO until we say 'alter session set optimizer_goal = all_rows'?
Tom Kyte
March 13, 2007 - 8:54 pm UTC

because in order to access an IOT or partitioned table you HAVE TO USE THE CBO

it is the way it works

access an IOT - cbo, no if and or buts about it.


elapsed vs cpu

m. bonade, March 27, 2007 - 12:24 pm UTC

Hi Tom,

here's part of the result of a tkprof on a trace file, but I am very puzzled by the result. My main concern is the difference between elapsed time and CPU time. When I read the results, I see that there is approximately a 60 second difference. Correct me if I am wrong, but I see nothing in the wait events that can explain this difference. Basically, in the summary at the end of the file produced by tkprof I see the same type of problem.

SELECT DECODE(COUNT(*),0,'NON','OUI') TOTO
FROM OE_ORDER_HEADERS_ALL OEHV
WHERE OEHV.ORG_ID = :B5
AND OEHV.ORDER_NUMBER = :B4
AND OEHV.ORDER_TYPE_ID = :B3
AND OEHV.SOLD_TO_ORG_ID = :B2
AND OEHV.ORDERED_DATE > SYSDATE - :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1969 14.73 75.95 0 0 0 0
Fetch 1969 0.34 1.01 79 7886 0 1969
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3939 15.07 76.97 79 7886 0 1969

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 44 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1969 SORT AGGREGATE (cr=7886 pr=79 pw=0 time=1021736 us)
38 TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL (cr=7886 pr=79 pw=0 time=996517 us)
1968 INDEX RANGE SCAN OE_ORDER_HEADERS_U2 (cr=5918 pr=79 pw=0 time=967314 us)(object id 137240)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
gc current block 2-way 285 0.00 0.21
gc cr grant 2-way 36 0.00 0.01
db file sequential read 79 0.01 0.06
********************************************************************************
Tom Kyte
March 28, 2007 - 11:08 am UTC

could easily be "lack of CPU". How was the machine during this time?
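Tom's suspicion can be checked from the tkprof summary above: elapsed minus CPU minus the recorded waits leaves the time tkprof cannot account for, which on a CPU-starved box is typically time spent queued for the CPU. A minimal sketch, using the figures copied from this trace:

```python
# Figures copied from the tkprof output above.
cpu = 15.07       # total cpu seconds
elapsed = 76.97   # total elapsed seconds
waits = [0.21,    # gc current block 2-way
         0.01,    # gc cr grant 2-way
         0.06]    # db file sequential read

# Whatever elapsed time is neither CPU nor a recorded wait is
# "unaccounted for" - often time spent on the OS run queue.
unaccounted = elapsed - cpu - sum(waits)
print(round(unaccounted, 2))
```

Roughly 61.6 of the 76.97 elapsed seconds are unexplained by CPU or waits here, which is consistent with "lack of CPU" on the machine.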

help regarding tkprof

Parag J Patankar, April 04, 2007 - 10:04 am UTC

Hi,

Pl find below a trace output of a Pro*COBOL program which has been running for many hours (still not finished when I set sql_trace false).

tkprof output :

select A0020 ,A0350 ,A0230 ,QDFPE ,A0540 ,A00201 ,A03501 ,A05401 ,
QDFPE1 ,A02301 ,A0830 ,A0750 ,B0100 ,A0755 ,B0120 ,B0140 ,A1040 ,
A0500 ,A0130 ,QCGAC ,A0260 ,A0270A ,A0220C ,A0410 ,A0420 ,A230ZB ,
C5085
from
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
A0540=:b5) order by QDFPE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 799 0.04 0.02 0 0 0 0
Fetch 799 50.12 64.81 1661135 1723443 0 799
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1598 50.16 64.84 1661135 1723443 0 799

Kindly help me understand the reasons it is doing so many disk and query reads. VDZ031 has only 28000 records. VDZ031 is a simple view on tdz03, and tdz03 has a unique index on the a0020, a0350, a0230, qdfpe, a0540 columns.

thanks & regards
paragjp

Tom Kyte
April 04, 2007 - 11:15 am UTC

it was executed 799 times.

1,723,443/799 = 2157 LIOs per execution (big, but not huge)
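The division Tom does here takes the "query" column (consistent gets) over the execute count to get logical I/Os per execution; the numbers from the tkprof above happen to divide evenly:

```python
# From the tkprof output above: "query" total and execution count.
total_lios = 1_723_443
executions = 799

per_exec = total_lios // executions
remainder = total_lios % executions
print(per_exec, remainder)  # logical I/Os per execution
```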

check out the query plan (v$sql_plan) - how is the query being processed.

if the index is created in the order you listed, I would rather have it be on:

a0020, a0350, a0230, a0540, qdfpe

qdfpe needs to be at the end since you use >= on it for this query. If there are many rows in the index such that:


((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4)

is true, but FEW rows such that

((((A0020=:b1 and A0350=:b2) and A0230=:b3) and A0540=:b5)

is true, we'll have to scan lots of index entries that are meaningless. consider:


ops$tkyte%ORA10GR2> create table t
  2  ( a char(20), b char(20), c char(20), Q char(20), d char(20) );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t
  2  (
  3  select 1, 1, 1, rownum, 1 from
  4  (select level l from dual connect by level <= 200000)
  5  );

200000 rows created.

ops$tkyte%ORA10GR2> insert into t values ( 1, 1, 1, 1, 2 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> create unique index t_idx on t(a,b,c,Q,d);

Index created.

ops$tkyte%ORA10GR2> select /*+ index( t t_idx ) */ *
  2    from t
  3   where a = '1' and b = '1' and c = '1' and Q > '0' and d = '2';

A                    B                    C
-------------------- -------------------- --------------------
Q                    D
-------------------- --------------------
1                    1                    1
1                    2



Execution Plan
----------------------------------------------------------
Plan hash value: 2946670127

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |   105 |  3237   (1)| 00:00:39 |
|*  1 |  INDEX RANGE SCAN| T_IDX |     1 |   105 |  3237   (1)| 00:00:39 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"='1' AND "B"='1' AND "C"='1' AND "Q">'0' AND "D"='2'
              AND "Q" IS NOT NULL)
       filter("D"='2')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3229  consistent gets
       3227  physical reads
          0  redo size
        710  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> drop index t_idx;

Index dropped.

ops$tkyte%ORA10GR2> create unique index t_idx on t(a,b,c,d,Q);

Index created.

ops$tkyte%ORA10GR2> select /*+ index( t t_idx ) */ *
  2    from t
  3   where a = '1' and b = '1' and c = '1' and Q > '0' and d = '2';

A                    B                    C
-------------------- -------------------- --------------------
Q                    D
-------------------- --------------------
1                    1                    1
1                    2



Execution Plan
----------------------------------------------------------
Plan hash value: 2946670127

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |   105 |     3   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_IDX |     1 |   105 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"='1' AND "B"='1' AND "C"='1' AND "D"='2' AND "Q">'0'
              AND "Q" IS NOT NULL)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          2  physical reads
          0  redo size
        710  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off

tkprof output

Parag J Patankar, April 05, 2007 - 3:36 am UTC

Hi Tom,

Thanks for your very precise help. After creating the index as you mentioned, everything was fine. Kindly educate me on the following queries:

1/ Does the Performance or Concepts guide mention that if you are using ">=" in your query, that column should be last in your index? (If yes, please give me a reference.)

2/ In my sql trace output (9.2 database), why has it not shown me wait events? On the tkprof command line I have used waits=yes.

3/ Can a 10046 trace be used for tracing other sessions, e.g. via dbms_system?

4/ In my tkprof output (above in this thread), why was it showing "big" logical I/Os, as the query is using an index and index entries are always in sorted order?

5/ Will you please show me how to find the sid and serial# for a running query? On my test system many sql sessions are running on the server, and most of them use OS authentication, so the usernames are the same. Somehow, I was able to get the sid and serial# for the trace.

Once again, thanks for helping me without having the actual complete data in your hands.

best regards
pjp
Tom Kyte
April 05, 2007 - 10:56 am UTC

1) sort of "obvious" from how indexes in general work. think about it - think about a list of anything sorted by a,b,c,d

if you say

give me everything where

a=:a and b=:b and d=:d AND C >= :C

you know, you can search the list to find all of the A=,B= values, then you find the first C >= value and you have to inspect all of them asking "does d=:d for this one?"

but, if you had a list sorted by a,b,d,c - you would be able to find the lines in the list fast such that a=,b=,d= is true and then find the first c>= value and you know that the rest of the list where a=,b=,d= is part of your answer. You inspect the least amount of data.
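Tom's sorted-list analogy can be simulated in a few lines. The data below is hypothetical but shaped like his SQL demo further down (200,000 entries that satisfy the equality prefix and the range predicate yet fail the trailing equality, plus one fully matching row); the function counts how many index entries a range scan has to touch under each column order:

```python
import bisect

# Columns a, b, c, q, d; the query is:
#   a = :a AND b = :b AND c = :c AND q >= :q AND d = :d
COLS = {'a': 0, 'b': 1, 'c': 2, 'q': 3, 'd': 4}

# Hypothetical rows: 200,000 with d='1' (fail d = '2'), one with d='2'.
rows = [('1', '1', '1', '%06d' % n, '1') for n in range(200_000)]
rows.append(('1', '1', '1', '000001', '2'))

def entries_inspected(key_order, eq, q_lo):
    """Sort the rows as a composite index in key_order and count the
    entries the range scan walks: everything from the first entry at or
    past (leading equality columns, q_lo), until that prefix changes."""
    index = sorted(tuple(r[COLS[k]] for k in key_order) for r in rows)
    qpos = key_order.index('q')
    prefix = tuple(eq[k] for k in key_order[:qpos]) + (q_lo,)
    start = bisect.bisect_left(index, prefix)
    inspected = 0
    for entry in index[start:]:
        if entry[:qpos] != prefix[:qpos]:
            break                     # walked off the equality prefix
        inspected += 1                # entry scanned (and maybe filtered)
    return inspected

eq = {'a': '1', 'b': '1', 'c': '1', 'd': '2'}
print(entries_inspected(['a', 'b', 'c', 'q', 'd'], eq, '000000'))  # q before d
print(entries_inspected(['a', 'b', 'c', 'd', 'q'], eq, '000000'))  # q last
```

With q ahead of d the scan touches all 200,001 entries and filters on d afterwards; with q last it touches exactly the one matching entry - the same effect as the 3229 vs 3 consistent gets in the autotrace runs shown below.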


2) maybe you had none? or maybe when you enabled trace, you did not ask for wait events to be emitted into the trace file.

3) yes, you can use dbms_system to set an event

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2615941821439#20009990791621

4) I explained that via an example right above and your 1st question seemed to indicate that you saw that???

I told you why already....

5)
http://asktom.oracle.com/pls/ask/search?p_string=showsql

use my showsql script to see what session is running a given query.

showsql

Parag J Patankar, April 09, 2007 - 5:05 am UTC

Hi Tom,

Thanks for your valuable help. In one session I was running "select * from all_objects order by object_id" and in another session I executed showsql.sql; it shows me the following result:

14:13:04 SQL> @showsql

USERNAME SID SERIAL# PROCESS STATUS
--------------- ---------- ---------- ------------ ----------
WEBWS 18 4544 INACTIVE
WEBWS 25 5145 INACTIVE
WEBWS 29 9103 INACTIVE
PARAG 30 3559 928:3516 INACTIVE
PARAG 34 3402 1960:2264 ACTIVE
--------------------
PARAG(34,3402) ospid = 1960:2264 program = sqlplusw.exe
Thursday 20:50 Monday 14:12 last et = 0
select username||'('||sid||','||serial#||
') ospid = ' || process ||
' program = ' || program username,
to_char(LOGON_TIME,' Day HH24:MI') logon_time,

to_char(sysdate,' Day HH24:MI') current_

USERNAME MODULE ACTION CLIENT_INFO
--------------- --------------- --------------- ----------------------
PARAG(30,3559) SQL*Plus
PARAG(34,3402) SQL*Plus

It has not shown me "select * from all_objects order by object_id". Is that correct? If yes, then how can I know which session is running which sql?

thanks & regards
PJP



Tom Kyte
April 09, 2007 - 10:50 am UTC

the session that was running that query was not active when you ran this. It was INACTIVE, eg: not actually running that query.

the only active session was - you, running showsql.sql

the other session was likely printing the results of an array fetch (eg: it was in the client, not the server) when you ran this.




waiting events in sql_trace

Parag J Patankar, April 09, 2007 - 11:23 am UTC

Hi Tom,

Thanks for your answers to each of my questions. Regarding my question about wait events not appearing in the tkprof output in 9.2, you answered:

"maybe you had none? or maybe when you enabled trace, you did not ask for wait events to be emitted into the trace file.
"

Is there any special way to tell tkprof to include wait events? While generating the output I said tkprof ... waits=yes, and if I remember correctly, v$session_wait was showing the "db file sequential read" wait event.

Kindly explain.

thanks & best regards
pjp



Tom Kyte
April 09, 2007 - 1:03 pm UTC

unless your trace file (the INPUT to tkprof) has wait events recorded in it, tkprof won't be able to show you them.

I think (once again) that either

a) you HAD no waits (unlikely, but possible)
b) when you enabled trace, you did not enable wait event tracing.

performance poor after collecting statistics

Parag J Patankar, April 10, 2007 - 5:56 am UTC

Hi Tom,

In this thread you gave me an answer for "why so many disk and logical reads" (answer: create the index in a specific order with qdfpe last, because the query uses >= on it). After doing further analysis, I found that statistics on table tdz03 are creating the problem. Please see below the sql trace before and after deleting statistics (for "after statistics" I used the old trace output from which I raised the original question), without creating the additional index.

Before collecting statistics:
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
A0540=:b5) order by QDFPE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 14717 0.45 0.46 0 0 0 0
Fetch 14717 0.86 0.79 0 66198 0 14717
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 29434 1.31 1.25 0 66198 0 14717

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 43 (OPS$A)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 SORT (ORDER BY)
0 TABLE ACCESS (BY INDEX ROWID) OF 'TDZ03'
0 INDEX (RANGE SCAN) OF 'X01DZ03' (UNIQUE)

********************************************************************************

After collecting statistics

VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
A0540=:b5) order by QDFPE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 3610 0.12 0.13 0 0 0 0
Fetch 3610 226.40 297.08 7505267 7786770 0 3610
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7220 226.52 297.21 7505267 7786770 0 3610

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 43 (OPS$A)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'TDZ03'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'X01DZ03' (UNIQUE)

********************************************************************************

I am collecting statistics for all tables by

execute dbms_stats.gather_schema_stats( -
ownname => 'a', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size auto', -
degree => 3, -
cascade => true, -
options => 'GATHER AUTO' );

Kindly educate me,

1/ Why is this query running slowly after collecting statistics?

2/ Is my statistics collection method correct? If not, please show me the correct method.

thanks & regards
PJP
Tom Kyte
April 10, 2007 - 10:50 am UTC

always compare the estimated cardinality (explain plan) to the real (tkprof)

in order to do that, you'll actually need to have a tkprof with row counts in it - eg: test this in sqlplus.

performance poor after gathering statistics

Parag Jayant Patankar, April 11, 2007 - 2:27 am UTC

Hi tom,

thanks for your answer. You said
"always compare the estimated cardinality (explain plan) to the real (tkprof)

in order to do that, you'll actually need to have a tkprof with row counts in it - eg: test this in sqlplus"

autotrace output with statistics :

for e.g.

select a0020, a0350, a0230, qdfpe, a0540
from
VDZ031 where
((((A0020=1 and A0350='IN01') and A0230='EUR') and QDFPE>='20070301') and
A0540=1) order by QDFPE
/

A0020 A035 A02 QDFPE A0540
---------- ---- --- -------- ----------
1 IN01 EUR 20070301 1
1 IN01 EUR 20070304 1
1 IN01 EUR 20070305 1
1 IN01 EUR 20070306 1
1 IN01 EUR 20070307 1
1 IN01 EUR 20070308 1
1 IN01 EUR 20070311 1
1 IN01 EUR 20070312 1
1 IN01 EUR 20070313 1
1 IN01 EUR 20070314 1
1 IN01 EUR 20070315 1

A0020 A035 A02 QDFPE A0540
---------- ---- --- -------- ----------
1 IN01 EUR 99999999 1

12 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=20 Bytes=480)
1 0 INDEX (RANGE SCAN) OF 'X01DZ03' (UNIQUE) (Cost=4 Card=20 B
ytes=480)

Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
954 bytes sent via SQL*Net to client
651 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
12 rows processed

14:09:37 SQL>

My observation is that there is no difference between the cardinality (i.e. number of rows) in the autotrace explain plan and the tkprof output.

Sorry, though you gave me some hints, I do not know how to proceed to find the "real problem". (I know statistics deletion is not a real solution.)

Kindly educate me.

thanks & best regards
PJP

Tom Kyte
April 11, 2007 - 11:14 am UTC

TKPROF - not autotrace, we need a tkprof with real row counts (row source operation filled in with something OTHER THAN ZERO)


and make sure to use binds where they use binds - do the same thing they do precisely.

tkprof - row source generation

Parag Patankar, April 16, 2007 - 9:06 am UTC

Hi Tom,

Sorry for my "stupidity". As I am running a Pro*COBOL program that runs for more than 5 hours, how can I get the "row source operation" into tkprof in a quicker and better way? The problem is that "all cursors were not closed" (I kept max dump size unlimited and the trace has reached 100 MB and is still growing). Is it possible without modifying the source?

thanks & regards
PJP


Tom Kyte
April 16, 2007 - 1:28 pm UTC

you need to get the cursors closed if you want the stat record emitted into the trace file.

or you can read it out of v$sql_plan_statistics - if you have statistics level set high enough.

why row source generation still 0 ?

Parag Patankar, April 17, 2007 - 7:18 am UTC

Hi Tom,

I have run a Pro*COBOL program which ran for more than 4 hours and in the process created a trace file of 319 MB. I did not turn sql trace off in between. But it has not put STAT records into the row source generation. Why?

from
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
A0540=:b5) order by QDFPE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 119166 4.60 5.24 0 0 0 0
Fetch 119166 7097.17 13924.47 246747121 258113556 0 119166
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 238332 7101.77 13929.71 246747121 258113556 0 119166

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 57 (OPS$A)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'TDZ03'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'X01DZ03' (UNIQUE)

Kindly suggest.

thanks & regards
PJP
Tom Kyte
April 17, 2007 - 10:15 am UTC

because the cobol program has not closed the cursor.

row source operation

Parag Jayant Patankar, April 24, 2007 - 3:17 am UTC

Hi,

Sorry for my late reply. I am still in the process of finding out why the Pro*COBOL program's "row source operation" is not filled. Can you educate me on how you would analyze a problem using the "row source operation"?

thanks & regards
PJP


Tom Kyte
April 24, 2007 - 11:01 am UTC

you do not get row source until the cursor is closed (eg: the cobol program completes probably - since it is doing the right thing and keeping the cursor open)

we are looking for "where the work is being done, what step is doing excessive IOs, then we can ask 'why' and see if we can change that"

why row source operation not filled ...?

Parag J Patankar, April 27, 2007 - 12:03 pm UTC

Hi Tom,

I am still not able to fix the problem. I can see my COBOL program is closing the cursor:

e.g.
7000-open-vtdn32s.
exec sql
open vtdn32s_c1
end-exec.
7000-exit.
*
7100-close-vtdn32s.
exec sql
close vtdn32s_c1
end-exec.
7100-exit. exit.

It is doing STOP run also.

close printfile .
close printfil2 .
close printfil3 .
close printfil4 .
perform 9999-quit-program thru 9999-exit .
stop run .

But oooosh!! After running the COBOL program for more than 3 hours and generating an almost 6 GB trace file, the row source operation is still not filled.

C5085
from
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
A0540=:b5) order by QDFPE

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 111853 4.36 4.00 0 0 0 0
Fetch 111853 7663.44 9479.87 209464131 241266921 0 111853
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 223706 7667.80 9483.88 209464131 241266921 0 111853

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 43 (OPS$A)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'TDZ03'
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'X01DZ03' (UNIQUE)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 65869589 0.11 2933.17
db file sequential read 12866890 0.04 59.80
SQL*Net message to client 111854 0.00 0.19
SQL*Net message from client 111854 0.06 61.02
********************************************************************************

But there is a small light in this darkness: I used the oradebug utility to capture wait events, and now they are appearing. I have two questions:

1/ If my COBOL program is closing cursors neatly, why is the row source operation still not filled? (The trace has not reached max dump size.) Can you tell me what I have missed?

2/ According to Pete Finnigan, the default sql_trace level is 8, i.e. it includes wait events, so why were wait events not appearing when I used dbms_system.set_sql..? Definitely "db file sequential read" or "db file scattered read" should have appeared. Am I correct?

thanks & regards
pjp
Tom Kyte
April 27, 2007 - 4:28 pm UTC

that does not mean they were really closed, just that we are allowed to close them.

see parse count = 0? It was already open; the program was precompiled with options (performance options - this is a good thing) that are holding the cursors open.

The cobol program will have to EXIT.


The default trace level is 0.

What option should I use to close the cursor?

Parag J Patankar, April 30, 2007 - 8:55 am UTC

Hi Tom,

I am running a Pro*COBOL program which generates an output file (no database update), for which I am facing the "row source operation not filled" issue. Currently I am compiling the Pro*COBOL program as follows:

procob iname=$pgm.scb oname=$pgm.cbl litdelim=apost ireclen=132 \
include=$ORACLE_HOME/precomp/public \
mode=ANSI maxliteral=200 VARCHAR=YES $INCCPY

My questions regarding this

1/ Which option will I have to add to close the cursor so my row source operation will be filled?

2/ In my program I used "commit" before the "stop run" command, but it looks like it has no effect. As per the docs, "commit" will close all explicit cursors when mode=ansi. I have an explicit cursor on tdz03; why is Pro*COBOL not closing the cursors?

thanks & regards
PJP


Tom Kyte
April 30, 2007 - 9:49 am UTC

just let the program FINISH.

there is a logical close of a cursor and a physical close, the procobol stuff is (smartly) caching them, do not force them closed - THAT would really hurt your performance.

pro-cobol source operation

Parag J Patankar, May 03, 2007 - 8:41 am UTC

Hi Tom,

I am using the following Pro*COBOL options to compile the program:

INCCPY value:

include=/t326/inuat1/site/cpy include=/t326/inuat1/site/src include=/apps/unikix/t7.1.15/copy include=/t326/inuat1/map include=/t326/inuat1/anim include=/t326/inuat1/src include=/t326/inuat1/binbtch include=/t326/inuat1/bincics include=/usr/include include=/usr/mqm/inc

procob iname=$pgm.scb oname=$pgm.cbl litdelim=apost ireclen=132 \
include=$ORACLE_HOME/precomp/public \
mode=ANSI maxliteral=200 VARCHAR=YES $INCCPY

Even though I put "commit" before "stop run", it did not work. Kindly suggest what option I should use to close the Pro*COBOL cursor so I can get the row source operation filled.

thanks & regards
PJP

Tom Kyte
May 03, 2007 - 10:12 pm UTC

I say it one more time

let the program exit.



Performance problem

A reader, May 08, 2007 - 3:20 am UTC

Tom,

I have a query which runs in seconds in my development database but takes 20 minutes in the production database. I am on Oracle 9i R2 on HP-UX.

Following is the tkprof report for the query in the development and production databases. Strangely, it is using the same execution plan in both databases, yet it is slower in production!

Development database tkprof:
SELECT UZVWKAL_ALLOC_EMPL_CODE,UZVWKAL_SVCO_CODE,UZVWKAL_SOTP_CODE,
UZVWKAL_PREM_CODE,UZVWKAL_REQUESTED_DATE,ROWID
FROM
UZVWKAL WHERE uzvwkal_sotp_grop_code = :1 and uzvwkal_requested_date between
:2 and :3 and uzvwkal_stus_code = 'O' and nvl(uzvwkal_alloc_empl_code,
'NONE') = decode(:4,'N','NONE',nvl(uzvwkal_alloc_empl_code,'NONE') ) and
uzvwkal_svco_code in (select ucbsvco_code from ucbsvco where
ucbsvco_comp_code = nvl(:5,ucbsvco_comp_code)) and (UZVWKAL_SOTP_CODE LIKE
'A%') order by uzvwkal_prem_zipc_code


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 1 73.34 204.64 503713 554031 0 11
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 73.35 204.66 503713 554031 0 11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 6841 (MORRISNA)

Rows Row Source Operation
------- ---------------------------------------------------
11 SORT ORDER BY (cr=554031 r=503713 w=0 time=204643957 us)
44 FILTER (cr=554031 r=503713 w=0 time=204640674 us)
44 NESTED LOOPS (cr=554031 r=503713 w=0 time=204640516 us)
44 NESTED LOOPS (cr=553897 r=503614 w=0 time=203928386 us)
44 NESTED LOOPS (cr=553763 r=503562 w=0 time=203606326 us)
18 TABLE ACCESS FULL OBJ#(4463) (cr=17 r=15 w=0 time=26588 us)
44 TABLE ACCESS BY INDEX ROWID OBJ#(1109) (cr=553746 r=503547 w=0 time=203579052 us)
3474706 INDEX RANGE SCAN OBJ#(692739) (cr=12859 r=5031 w=0 time=10012519 us)(object id 692739)
44 TABLE ACCESS BY INDEX ROWID OBJ#(1109) (cr=134 r=52 w=0 time=321297 us)
44 INDEX UNIQUE SCAN OBJ#(692737) (cr=90 r=52 w=0 time=320196 us)(object id 692737)
44 TABLE ACCESS BY INDEX ROWID OBJ#(1114) (cr=134 r=99 w=0 time=711504 us)
44 INDEX UNIQUE SCAN OBJ#(690008) (cr=90 r=56 w=0 time=379898 us)(object id 690008)

Production database tkprof:
SELECT UZVWKAL_ALLOC_EMPL_CODE,UZVWKAL_SVCO_CODE,UZVWKAL_SOTP_CODE,
UZVWKAL_PREM_CODE,UZVWKAL_REQUESTED_DATE,ROWID
FROM
UZVWKAL WHERE uzvwkal_sotp_grop_code = :1 and uzvwkal_requested_date between
:2 and :3 and uzvwkal_stus_code = 'O' and nvl(uzvwkal_alloc_empl_code,
'NONE') = decode(:4,'N','NONE',nvl(uzvwkal_alloc_empl_code,'NONE') ) and
uzvwkal_svco_code in (select ucbsvco_code from ucbsvco where
ucbsvco_comp_code = nvl(:5,ucbsvco_comp_code)) and (UZVWKAL_SOTP_CODE LIKE
'A%') order by uzvwkal_prem_zipc_code


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 99.73 1609.32 526562 555786 0 11
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 99.74 1609.36 526562 555786 0 11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 6841

Rows Row Source Operation
------- ---------------------------------------------------
11 SORT ORDER BY (cr=555786 r=526562 w=0 time=1352123638 us)
33 FILTER (cr=555786 r=526562 w=0 time=1534371723 us)
33 NESTED LOOPS (cr=555786 r=526562 w=0 time=1534371610 us)
33 NESTED LOOPS (cr=555685 r=526527 w=0 time=1533786243 us)
33 NESTED LOOPS (cr=555584 r=526495 w=0 time=1533409214 us)
18 TABLE ACCESS FULL OBJ#(4463) (cr=17 r=2 w=0 time=20925 us)
33 TABLE ACCESS BY INDEX ROWID OBJ#(1109) (cr=555567 r=526493 w=0 time=1557772359 us)
3477295 INDEX RANGE SCAN OBJ#(692739) (cr=12927 r=12862 w=0 time=123952292 us)(object id 692739)
33 TABLE ACCESS BY INDEX ROWID OBJ#(1109) (cr=101 r=32 w=0 time=376456 us)
33 INDEX UNIQUE SCAN OBJ#(692737) (cr=68 r=32 w=0 time=375754 us)(object id 692737)
33 TABLE ACCESS BY INDEX ROWID OBJ#(1114) (cr=101 r=35 w=0 time=584939 us)
33 INDEX UNIQUE SCAN OBJ#(690008) (cr=68 r=7 w=0 time=144569 us)(object id 690008)


Do you see any problem?

Production parameter values:
SQL> show parameter cursor

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT
cursor_space_for_time                boolean     FALSE
open_cursors                         integer     2000
session_cached_cursors               integer     500

Testing database parameters:
SQL> show parameter cursor

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT
cursor_space_for_time                boolean     FALSE
open_cursors                         integer     100000
session_cached_cursors               integer     10000


Thanks,

A reader, May 21, 2007 - 11:28 am UTC

I guess the issue is with

TABLE ACCESS BY INDEX ROWID OBJ#(1109) (cr=555567 r=526493 w=0 time=1557772359 us)

Can you run

select * from dba_indexes where index_name=' ' (for the index on object_id 1109) in the prod db and compare that with
select * from dba_indexes where index_name=' ' in the dev db?

Tom - is the issue colocation or slow I/O in prod?

Segmentation Fault ( core dump ) error

manoj pradhan, May 23, 2007 - 10:13 am UTC

Tom,
I am working on Linux.
When I use tkprof to generate a report from a trace file,
I get a segmentation fault (core dumped).

Please tell me the reason and how to overcome it.
The trace file generated is 60275 bytes.

I get this error only for big queries
(selecting 350 columns from 54 tables, using pivoting and inline views).

For small queries there is no problem.
I tried
alter session set max_dump_file_size=unlimited
but no benefit.

When I open the report that was generated, I get only half of it -
it seems to be truncated.
I exit from SQL*Plus and then run tkprof.
Please help.
Tom Kyte
May 26, 2007 - 9:29 am UTC

please use support

Manoj Pradhan, May 23, 2007 - 12:08 pm UTC

Tom,
for your information, I am working on Oracle
8.1.7.0.1. Please respond.

Manoj Pradhan, May 25, 2007 - 5:33 am UTC

Tom,
I found the root cause.
The problem was due to the OS:
on the Red Hat Linux machine I get the error,
but on HP-UX it works fine.
Thank you.

tkprof output

Parag J Patankar, May 28, 2007 - 3:16 am UTC

Hi Tom,

I have asked question in this thread showing following SQL plan

tkprof output : 

select A0020 ,A0350 ,A0230 ,QDFPE ,A0540 ,A00201 ,A03501 ,A05401 , 
QDFPE1 ,A02301 ,A0830 ,A0750 ,B0100 ,A0755 ,B0120 ,B0140 ,A1040 , 
A0500 ,A0130 ,QCGAC ,A0260 ,A0270A ,A0220C ,A0410 ,A0420 ,A230ZB , 
C5085 
from 
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and 
A0540=:b5) order by QDFPE 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute    799      0.04       0.02          0          0          0           0
Fetch      799     50.12      64.81    1661135    1723443          0         799
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1598     50.16      64.84    1661135    1723443          0         799


On this you suggested an excellent solution (the index columns are not in the right order). After debugging further I thought it was a problem due to invalid statistics (I have checked the user_tab_modifications table).
You asked me to post tkprof output with the row source generation; it took me a long time to fix Pro*COBOL and other issues. To simulate the problem I did the following steps:

1. Restored the database from a particular date
2. Did a lot of inserts and deletes on vdz031, so my user_tab_modifications shows many inserts and deletes (double the actual count(*) of vdz031)
3. Executed the program which was taking a very long time

I was surprised to see the results:
select A0020  ,A0350  ,A0230  ,QDFPE  ,A0540  ,A00201  ,A03501  ,A05401  ,
  QDFPE1  ,A02301  ,A0830  ,A0750  ,B0100  ,A0755  ,B0120  ,B0140  ,A1040  ,
  A0500  ,A0130  ,QCGAC  ,A0260  ,A0270A  ,A0220C  ,A0410  ,A0420  ,A230ZB  ,
  C5085
from
 VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and
  A0540=:b5) order by QDFPE


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     16      0.00       0.00          0          0          0           0
Fetch       16      0.78       0.74          0      34824          0          16
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       33      0.78       0.74          0      34824          0          16

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 43  (OPS$A)

Rows     Row Source Operation
-------  ---------------------------------------------------
     16  SORT ORDER BY
     16   TABLE ACCESS FULL TDZ03
Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     16   TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'TDZ03'
     16    INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'X01DZ03' (UNIQUE)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      16        0.00          0.00
  SQL*Net message from client                    16        0.00          0.01
********************************************************************************


My questions regarding this are

Q1. One time it is showing

from 
VDZ031 where ((((A0020=:b1 and A0350=:b2) and A0230=:b3) and QDFPE>=:b4) and 
A0540=:b5) order by QDFPE 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute 119166      4.60       5.24          0          0          0           0
Fetch   119166   7097.17   13924.47  246747121  258113556          0      119166
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   238332   7101.77   13929.71  246747121  258113556          0      119166

Misses in library cache during parse: 0 
Optimizer goal: CHOOSE 
Parsing user id: 57 (OPS$A) 

Rows  Execution Plan 
------- --------------------------------------------------- 
    0 SELECT STATEMENT  GOAL: CHOOSE 
    0  TABLE ACCESS  GOAL: ANALYZED (BY INDEX ROWID) OF 'TDZ03' 
    0  INDEX  GOAL: ANALYZED (RANGE SCAN) OF 'X01DZ03' (UNIQUE) 


The second time it is showing everything fine. Why is that? (no parameter or anything else was changed)

Q2. I have done a lot of inserts and deletes on vdz031 ( view on tdz03 )

SQL> select count(*) from tdz03;

COUNT(*)
----------
58180

1 select table_name, inserts, updates, deletes
2 from user_tab_modifications
3* where table_name='TDZ03'
SQL> /

TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ---------- ---------- ----------
TDZ03                              319085          0     319085

Why are statistics like these not affecting the CBO's decision making?

Q3. If the execution plan has not changed, how is it affecting disk and query reads?

Kindly educate me.

thanks & regards
PJP
Tom Kyte
May 30, 2007 - 9:36 am UTC

q1) do you see that one time you executed it 16 times and the other time you executed it 119,166 times??????


taking your "and it was fine" numbers:

16 executions of consuming 34824 IOs = 2176 IOs per execution

and now apply 119,166 executions times 2,176 IOs and you get 259,305,216 which is really pretty close to 258,113,556 which is the number of IO's from the original - so by simple extrapolation - I would conclude "they are the same, nothing got magically fixed, you compared 16 executions to 119,166 which is wrong"
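The extrapolation above can be checked in SQL*Plus itself (a sketch only; the column aliases are mine):

```sql
-- IOs per execution from the "fine" run, then projected to 119,166 executions
select trunc(34824/16)          ios_per_exec,
       119166*trunc(34824/16)   projected_ios
  from dual;
```

which gives 2176 IOs per execution and 259,305,216 projected IOs - the same ballpark as the 258,113,556 query gets reported in the original run.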


q2) you do not say if you created the index I suggested or not. And you do not say if you actually gathered statistics after making modifications. And you offer nothing compelling to us to show the optimizer SHOULD be changing its mind over time - perhaps it should not be given your set of indexes and data - REGARDLESS of the statistics.

q3) you are measuring that???? using tkprof??

user_tab_modifications

Parag J Patankar, June 04, 2007 - 1:15 am UTC

Hi Tom,

Thanks for guiding me regarding questions in this thread. I have asked you

Q2. I have done a lot of inserts and deletes on vdz031 ( view on tdz03 )

SQL> select count(*) from tdz03;

COUNT(*)
----------
58180

1 select table_name, inserts, updates, deletes
2 from user_tab_modifications
3* where table_name='TDZ03'
SQL> /

TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ---------- ---------- ----------
TDZ03                              319085          0     319085

Why are statistics like these not affecting the CBO's decision making?

You said
<
q2) you do not say if you created the index I suggested or not. And you do not say if you actually gathered statistics after making modifications. And you offer nothing compelling to us to show the optimizer SHOULD be changing its mind over time - perhaps it should not be given your set of indexes and data - REGARDLESS of the statistics.
>

I have not understood the answer to this question. Will you please explain more? (for this test case I have not created the index as you specified)

thanks & best regards
pjp
Tom Kyte
June 05, 2007 - 7:54 am UTC

tell you what...

you tell us why you think the CBO would change the plan

and then we'll tell you why that idea isn't true.



user_tab_modifications is not used by the CBO. That view is used by dbms_stats to decide whether a table is stale and in need of gathering statistics.

The CBO is a mathematical model: you give it inputs (statistics) and it pumps out a decision. If the inputs really don't change (you don't add an index, you don't change statistics, you hold all things constant) - the outputs won't either.
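A minimal sketch of how that staleness mechanism is normally exercised (assuming table monitoring is in effect; TDZ03 is the table from the question):

```sql
-- flush the in-memory monitoring counters so user_tab_modifications is current,
-- then gather statistics - only this step changes the CBO's inputs
exec dbms_stats.flush_database_monitoring_info;

select table_name, inserts, updates, deletes
  from user_tab_modifications
 where table_name = 'TDZ03';

exec dbms_stats.gather_table_stats( user, 'TDZ03' );
```

Until the gather runs, the rows in user_tab_modifications by themselves change nothing for the optimizer.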

A reader, June 05, 2007 - 8:24 am UTC

Tom

The following is an excerpt from the Oracle documentation. Is it right to assume that the large number of block reads comes from reading rollback segment blocks to retrieve data consistent with the query start time?


1. Avoiding the Read Consistency Trap
The next example illustrates the read consistency trap. Without knowing that an uncommitted transaction had made a series of updates to the NAME column, it is very difficult to see why so many block visits would be incurred.
Cases like this are not normally repeatable: if the process were run again, it is unlikely that another transaction would interact with it in the same way.
SELECT name_id
FROM cq_names
WHERE name = 'FLOOR';

call    count  cpu   elapsed  disk  query  current  rows
------- ------ ----- -------- ----- ------ -------- ----
Parse       1   0.10     0.18     0      0        0    0
Execute     1   0.00     0.00     0      0        0    0
Fetch       1   0.11     0.21     2    101        0    1

Misses in library cache during parse: 1
Parsing user id: 01 (USER1)

Rows Execution Plan
---- --------- ----
0 SELECT STATEMENT
1 TABLE ACCESS (BY ROWID) OF 'CQ_NAMES'
2 INDEX (RANGE SCAN) OF 'CQ_NAMES_NAME' (NON_UNIQUE)


Tom Kyte
June 06, 2007 - 1:01 pm UTC

yes

tkprof

A reader, June 06, 2007 - 12:02 pm UTC

Tom

We are experiencing a performance problem after a table/index re-org (with alter table move): execution time for queries has almost doubled, yet the explain plan has not changed. I generated a tkprof and found that the CPU time is being spent in the fetch, as is the elapsed time. Can you please help me?

select sec_map_id
from cdw_sec_map sm
where (exists (select rownum
from cdw_fund_aum
where sec_map_id = sm.sec_map_id
and approval_status = 'A')
or exists
(select rownum
from cdw_fund_nav
where sec_map_id = sm.sec_map_id
and approval_status = 'A')
or exists
(select rownum
from cdw_fund_flow
where sec_map_id = sm.sec_map_id
and approval_status = 'A')
or exists
(select rownum
from cdw_investor_data_approval
join cdw_position using
(approval_unit_id)
where sec_map_id = sm.sec_map_id
and approval_status = 'A' )
or exists
(select rownum
from cdw_position
join cdw_investor_data_approval using
(approval_unit_id)
where sec_map_id = sm.sec_map_id
and approval_status = 'A'))
and sec_id is not null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      140     45.96      45.25          0    2568691          0        2085
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      142     46.00      45.30          0    2568691          0        2085

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 133

Rows Row Source Operation
------- ---------------------------------------------------
2085 FILTER
5665 TABLE ACCESS FULL CDW_SEC_MAP
451 COUNT
451 TABLE ACCESS BY INDEX ROWID CDW_FUND_AUM
486 INDEX RANGE SCAN CDW_FUND_AUM_UK1 (object id 34959)
225 COUNT
225 TABLE ACCESS BY INDEX ROWID CDW_FUND_NAV
228 INDEX RANGE SCAN CDW_FUND_NAV_PK (object id 8955)
90 COUNT
90 TABLE ACCESS BY INDEX ROWID CDW_FUND_FLOW
90 INDEX RANGE SCAN CDW_FUND_FLOW_PK (object id 7601)
1319 COUNT
1319 NESTED LOOPS
1118289 TABLE ACCESS FULL CDW_INVESTOR_DATA_APPROVAL
1319 BITMAP CONVERSION TO ROWIDS
1319 BITMAP AND
132493 BITMAP INDEX SINGLE VALUE CDW_POSITION_IDX2 (object id 14093)
6894 BITMAP INDEX SINGLE VALUE CDW_POSITION_IDX3 (object id 14094)
0 COUNT
0 NESTED LOOPS
995240 TABLE ACCESS FULL CDW_INVESTOR_DATA_APPROVAL
0 BITMAP CONVERSION TO ROWIDS
0 BITMAP AND
9174 BITMAP INDEX SINGLE VALUE CDW_POSITION_IDX2 (object id 14093)
569 BITMAP INDEX SINGLE VALUE CDW_POSITION_IDX3 (object id 14094)


Tom Kyte
June 06, 2007 - 9:18 pm UTC

do you have a tkprof from before?

big difference plan

Joseph Chun, June 09, 2007 - 1:33 am UTC

I have two SQL statements; the second contains all of the first. Here they are:

Query 1 :
select distinct connect_by_root(no_ke) no_ke,
      connect_by_root(sub_ke) sub_ke,
      (select proses from bh_sepr
 where no_stok = aa.no_stok
   and sub_stok = aa.sub_stok
      ) proses,
      (select c.proses
  from bh_seprc a, bh004a b, bh003d c
 where a.no_stok= aa.no_stok and a.sub_stok= aa.sub_stok
   and a.no_induk = b.no_induk and b.no_spk = c.no_spk
   and c.proses in ('lcc','lsg')
   and rownum = 1
      ) proses2
from bh_sepra aa
connect by aa.no_ke = prior aa.no_stok and aa.sub_ke = prior aa.sub_stok
start with (aa.no_ke,aa.sub_ke) in
(select p.no_ubjdiv,p.sub_ubjdiv
  from ubjdiva o, ubjdivb p
 where o.tgl < to_date(:tgl2,'yyyymmdd') + 1
   and o.tgl >= to_date(:tgl1,'yyyymmdd')
   and o.kodediv = 'bh'
   and substr(o.no_ubjdiv,1,2) = 'jb'
   and o.no_ubjdiv = p.no_ubjdiv
)


Query 2, which only re-selects from the first SQL:
select a.no_ke,a.sub_ke,a.proses,a.proses2
 from (
       select distinct connect_by_root(no_ke) no_ke,
              connect_by_root(sub_ke) sub_ke,
              (select proses from bh_sepr
                where no_stok = aa.no_stok
                  and sub_stok = aa.sub_stok
              ) proses,
              (select c.proses
                 from bh_seprc a, bh004a b, bh003d c
                where a.no_stok= aa.no_stok and a.sub_stok= aa.sub_stok
                  and a.no_induk = b.no_induk and b.no_spk = c.no_spk
                  and c.proses in ('lcc','lsg')
                  and rownum = 1
              ) proses2
       from bh_sepra aa
       connect by aa.no_ke = prior aa.no_stok and aa.sub_ke = prior aa.sub_stok
       start with (aa.no_ke,aa.sub_ke) in
       (select p.no_ubjdiv,p.sub_ubjdiv
          from ubjdiva o, ubjdivb p
         where o.tgl < to_date(:tgl2,'yyyymmdd') + 1
           and o.tgl >= to_date(:tgl1,'yyyymmdd')
           and o.kodediv = 'bh'
           and substr(o.no_ubjdiv,1,2) = 'jb'
           and o.no_ubjdiv = p.no_ubjdiv
       )
      ) a


Here is the tkprof output for both queries:

first query
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.02       0.03          0          0          0           0
Fetch     2616     19.22      19.78          0     982720          0       39222
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2618     19.26      19.82          0     982720          0       39222

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80

Rows     Row Source Operation
-------  ---------------------------------------------------
  48351  TABLE ACCESS BY INDEX ROWID BH_SEPR (cr=193614 pr=0 pw=0 time=2633085 us)
  48351   INDEX UNIQUE SCAN PK_BH_SEPR (cr=145053 pr=0 pw=0 time=1674064 us)(object id 100025)
  48349  COUNT STOPKEY (cr=532123 pr=0 pw=0 time=10911463 us)
  48349   TABLE ACCESS BY INDEX ROWID BH003D (cr=532123 pr=0 pw=0 time=9524175 us)
 145030    NESTED LOOPS  (cr=483590 pr=0 pw=0 time=13380122 us)
  48349     NESTED LOOPS  (cr=338459 pr=0 pw=0 time=5207615 us)
  48353      TABLE ACCESS BY INDEX ROWID BH_SEPRC (cr=193404 pr=0 pw=0 time=3023244 us)
  48353       INDEX RANGE SCAN BH_SEPRC_PK (cr=145053 pr=0 pw=0 time=1750126 us)(object id 421852)
  48349      TABLE ACCESS BY INDEX ROWID BH004A (cr=145055 pr=0 pw=0 time=1795628 us)
  48349       INDEX UNIQUE SCAN PK_BH004A (cr=96706 pr=0 pw=0 time=1098917 us)(object id 99384)
  96679     INDEX RANGE SCAN PK_BH003D (cr=145131 pr=0 pw=0 time=1781172 us)(object id 99374)
  39222  HASH UNIQUE (cr=982720 pr=0 pw=0 time=19630386 us)
  48497   CONNECT BY WITH FILTERING (cr=256983 pr=0 pw=0 time=5159374 us)
  19884    TABLE ACCESS BY INDEX ROWID BH_SEPRA (cr=85630 pr=0 pw=0 time=1736132 us)
  19884     FILTER  (cr=84357 pr=0 pw=0 time=1596905 us)
  19884      NESTED LOOPS  (cr=84357 pr=0 pw=0 time=1458388 us)
  19886       NESTED LOOPS  (cr=24699 pr=0 pw=0 time=761761 us)
   7810        TABLE ACCESS FULL UBJDIVA (cr=1151 pr=0 pw=0 time=674863 us)
  19886        INDEX RANGE SCAN PK_UBJDIVB (cr=23548 pr=0 pw=0 time=341332 us)(object id 100406)
  19884       INDEX UNIQUE SCAN PK_BH_SEPRA (cr=59658 pr=0 pw=0 time=592969 us)(object id 100029)
  28136    NESTED LOOPS  (cr=171353 pr=0 pw=0 time=2545803 us)
  47739     BUFFER SORT (cr=0 pr=0 pw=0 time=240663 us)
  47739      CONNECT BY PUMP  (cr=0 pr=0 pw=0 time=143330 us)
  28136     TABLE ACCESS BY INDEX ROWID BH_SEPRA (cr=171353 pr=0 pw=0 time=2055326 us)
  28136      INDEX UNIQUE SCAN PK_BH_SEPRA (cr=143217 pr=0 pw=0 time=1397812 us)(object id 100029)
      0    TABLE ACCESS FULL BH_SEPRA (cr=0 pr=0 pw=0 time=0 us)
      0    FILTER  (cr=0 pr=0 pw=0 time=0 us)
      0     NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0      INDEX UNIQUE SCAN PK_UBJDIVB (cr=0 pr=0 pw=0 time=0 us)(object id 100406)
      0      TABLE ACCESS BY INDEX ROWID UBJDIVA (cr=0 pr=0 pw=0 time=0 us)
      0       INDEX UNIQUE SCAN PK_UBJDIVA (cr=0 pr=0 pw=0 time=0 us)(object id 100403)


second query
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.03       0.03          0          0          0           0
Fetch     2616     37.90      38.61          0     954671          0       39222
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2618     37.95      38.66          0     954671          0       39222

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80

Rows     Row Source Operation
-------  ---------------------------------------------------
  39222  VIEW  (cr=954671 pr=0 pw=0 time=38575412 us)
  39222   HASH UNIQUE (cr=954671 pr=0 pw=0 time=38418447 us)
  48497    CONNECT BY WITH FILTERING (cr=228669 pr=0 pw=0 time=24925768 us)
  19884     TABLE ACCESS BY INDEX ROWID BH_SEPRA (cr=85630 pr=0 pw=0 time=1708199 us)
  19884      FILTER  (cr=84357 pr=0 pw=0 time=1568972 us)
  19884       NESTED LOOPS  (cr=84357 pr=0 pw=0 time=1450351 us)
  19886        NESTED LOOPS  (cr=24699 pr=0 pw=0 time=753714 us)
   7810         TABLE ACCESS FULL UBJDIVA (cr=1151 pr=0 pw=0 time=635585 us)
  19886         INDEX RANGE SCAN PK_UBJDIVB (cr=23548 pr=0 pw=0 time=329605 us)(object id 100406)
  19884        INDEX UNIQUE SCAN PK_BH_SEPRA (cr=59658 pr=0 pw=0 time=597374 us)(object id 100029)
  28136     HASH JOIN  (cr=143039 pr=0 pw=0 time=3665826 us)
  47739      CONNECT BY PUMP  (cr=0 pr=0 pw=0 time=95722 us)
15586662      TABLE ACCESS FULL BH_SEPRA (cr=143039 pr=0 pw=0 time=31175442 us)
      0     TABLE ACCESS FULL BH_SEPRA (cr=0 pr=0 pw=0 time=0 us)
      0     FILTER  (cr=0 pr=0 pw=0 time=0 us)
      0      NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0       INDEX UNIQUE SCAN PK_UBJDIVB (cr=0 pr=0 pw=0 time=0 us)(object id 100406)
      0       TABLE ACCESS BY INDEX ROWID UBJDIVA (cr=0 pr=0 pw=0 time=0 us)
      0        INDEX UNIQUE SCAN PK_UBJDIVA (cr=0 pr=0 pw=0 time=0 us)(object id 100403)


The questions:

1. What can we do to query number two so that its plan remains the same as query one, considering that I am only re-selecting from query 1?
2. In query two, why can I not see the plan for accessing this inline subquery?
              (select proses from bh_sepr
                where no_stok = aa.no_stok
                  and sub_stok = aa.sub_stok
              ) proses


Thanks.

elapsed time in tkprof output

Archana, October 24, 2007 - 8:28 am UTC

Tom,

At one place you are saying that "cpu time is a subset of the time in elapsed."

My understanding is that elapsed time is the period between when the user fired the query and when they got the result set. My doubt is whether the timing of this elapsed time starts when the query reaches the listener or when the user fires it. I mean, is the time taken by the query to reach the listener (SQL*Net) also included in the elapsed time or not?

Tom Kyte
October 24, 2007 - 9:23 am UTC

queries do not go to the listener.

queries are sent either to a dedicated server that gets right on working it or they are placed into the sga and a shared server picks it up and works on it.

the listener is involved only during connect.

the elapsed time is time spent in the database. you'll see "sqlnet message from client" wait events up to the point in time the query is received by the server - you'll see sqlnet "wait" events during the processing of the query as the data is sent back and forth.

tkprof output

A reader, October 26, 2007 - 8:46 am UTC

So can I say that if an INSERT statement shows 0.05 of elapsed time, that is the time between when the query hits the database, finishes execution (waiting on events if there are any), and sends the message back to the client that a row was inserted?
Tom Kyte
October 29, 2007 - 11:04 am UTC

you can say that tkprof represents time spent in the database, yes.

Grinalds, October 31, 2007 - 9:16 am UTC

Hi!
What is the trace file name for Oracle 10g?
On 10g, can the file name contain _j000_ instead of _ora_?
For example tsi6_j000_245980.trc
Thank you in advance!

You wrote:

ops$tkyte@ORA9IR2> set echo on
ops$tkyte@ORA9IR2> @gettrace
ops$tkyte@ORA9IR2> column trace new_val TRACE
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  2    from v$process a, v$session b, v$parameter c, v$instance d
  3   where a.addr = b.paddr
  4     and b.audsid = userenv('sessionid')
  5     and c.name = 'user_dump_dest'
  6  /
 
TRACE
-------------------------------------------------------------------------------
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_1184.trc

Tom Kyte
November 01, 2007 - 4:16 pm UTC

j000 comes from a background process, not a normal session (job queues - J)

it would not be in the user dump, it would be in the background dump destination.
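Following that point, a hedged variant of the lookup above for Jnnn traces - this just reads the parameter; the file name pattern <instance>_j000_<spid>.trc from the question still applies:

```sql
-- job-queue (Jnnn) trace files are written under background_dump_dest, not user_dump_dest
select value
  from v$parameter
 where name = 'background_dump_dest';
```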

what parameters i need to pass?

Srikanth, November 07, 2007 - 2:52 am UTC

Hi Tom,
what parameters do I need to pass to tkprof in order to see the execution plan?

Thanks and Regards
Srikanth
Tom Kyte
November 07, 2007 - 5:14 pm UTC

none at all - row source operation is produced from the trace file.

You just need to make sure

a) you enable tracing
b) you close cursors and then
c) disable tracing

if you enable tracing and then "exit", that is the same as a,b,c
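The a/b/c sequence above as a minimal SQL*Plus sketch (the table name is hypothetical):

```sql
alter session set timed_statistics = true;
alter session set sql_trace = true;     -- (a) enable tracing

select count(*) from my_table;          -- run the statements of interest

exit                                    -- (b)+(c) exiting closes cursors and ends tracing
```

then run tkprof against the resulting trace file, e.g. tkprof <trace file> report.txt - no extra tkprof parameters are needed to get the Row Source Operation section.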

Execute shows high CPU and elapsed time

Rakesh, March 10, 2008 - 7:46 am UTC

Hi Tom,
Can you please explain the possible causes for high CPU and elapsed time in the Execute row?


SELECT name,roll_number
FROM
students WHERE cube_no = :b1 AND allocated_no = :b2 AND student_area = :b3 AND
student_code = 1000




call       count       cpu    elapsed       disk      query    current        rows
------- -------- --------- ---------- ---------- ---------- ---------- ----------
Parse          0      0.00       0.00          0          0          0          0
Execute  1458694    500.32     515.33          0          0          0          0
Fetch    1479514    191.84     189.15          0    7489744     654875    1479274
------- -------- --------- ---------- ---------- ---------- ---------- ----------
total    2938208    692.16     704.48          0    7489744     654875    1479274



Thanks
Tom Kyte
March 10, 2008 - 11:44 am UTC

umm, do you see how teeny tiny these numbers are???

come on. look at them.

ops$tkyte%ORA10GR2> select 692.16/1458694 from dual;

692.16/1458694
--------------
    .000474507


so, the total cpu used divided by the number of times you executed this little, itsy bitsy teeny weeny query is....

0.00047 cpu seconds per execution.

I'm impressed, that is like pretty fast don't you think?

What is not impressive is your application executing that piece of sql 1.4 million times - that - that is hugely "not a good thing"

You are obviously doing slow by slow (row by row) processing - like you were writing cobol code or something. Not using the database, not doing joins, not thinking in sets.


The only way to make this "faster" would be to do it A LOT LESS OFTEN - like "once".


I have a simple saying:


JUST JOIN.


Just join, let the database do the work.
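For illustration, a hedged sketch of what "just join" would mean here - assuming the 1.4 million lookups are driven by the rows of some table (called driver below; it and its columns are hypothetical), the per-row query collapses into one statement:

```sql
select d.id, s.name, s.roll_number
  from driver d, students s
 where s.cube_no      = d.cube_no
   and s.allocated_no = d.allocated_no
   and s.student_area = d.student_area
   and s.student_code = 1000;
```

one execution with one optimized plan, instead of 1,458,694 executions of the single-row query.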


A reader, March 12, 2008 - 1:20 am UTC

Hi Tom,
You are right. This sql gets executed 1.4m times.

The sql gets executed from a function which is called from different procedures, inside a big loop.

My concern is the values of the bind variables.

Even though most of the time the set of bind variable values is the same, the cursor runs multiple times.

Is there any possibility we can eliminate multiple executions of the sql for the same set of values?

Appreciate your response.

Thank you

Tom Kyte
March 12, 2008 - 5:28 pm UTC

... Is there any possibility we can eliminate multiple execution of sql for same
set of values.
...


yes, but only if you CODE IT THAT WAY. come on - think about this, the database does what - it does what you tell it to do. If you tell us to execute something 1.4million times - we will (well, in 11g there is client side result caching, server result set caching and plsql function caching - but you are likely not 11g and likely it would not help you anyway if there are thousands of inputs to this query)


As I said - you have 100% control over the number of times a SQL is executed - you do, not us, you. Your program does this, The possibility to make the sql more efficient by JUST JOINING - is in your hands


you have the power.

db file sequential read wait events differ from different clients

A reader, March 17, 2008 - 6:56 pm UTC


Tom,

I have an interesting issue regarding db file sequential read wait event times. They vary when I run the same query from different clients. I expected that running the query from a client which is geographically far away would just show more "SQL*Net message from/to client" wait events.

Why would db file sequential reads increase in proportion to the distance between client and server? The wait event times increased 3x compared to a client running on the same server.

Can you please suggest what is wrong here?

Thanks
Tom Kyte
March 24, 2008 - 9:03 am UTC

I believe this is a case of false causality - the distance is a coincidence.

give more information here, describe in more detail what is happening, are these clients at varying "distances", running precisely the same query, doing physical io on precisely the same blocks. (doubtful, actually - I'd say "impossible" more like it)

tkprof difference between instances

Brian B, April 09, 2008 - 10:39 am UTC

We have two databases: a 3-node RAC and a single-instance rman clone of it. A particular query is doing a lot more logical reads in the RAC database than in the clone. Result sets, execution plans, and statistics are identical. The tkprof from each is below, after running the query multiple times in each instance (so many of the blocks should be cached).

What might account for the order-of-magnitude difference in query blocks?

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.09          0          0          0           0
Execute      1      0.01       0.01          0          0          0           0
Fetch       34    186.44     189.75       3636    2202979          0         845
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       36    186.51     189.85       3636    2202979          0         845

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 218

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              2        0.00          0.00
  SQL*Net message to client                      35        0.00          0.00
  SQL*Net message from client                    34        0.65          1.26
  global cache cr request                      1579        0.03          0.41
  db file scattered read                        679        0.01          0.61
  db file sequential read                       889        0.03          0.53
  db file parallel read                           5        0.00          0.01
  latch free                                     56        0.00          0.00
  SQL*Net more data to client                    34        0.00          0.00



Single-Instance


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.44       0.43          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       34      7.37       7.18          0     169361          0         845
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       36      7.81       7.62          0     169361          0         845

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 218

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      35        0.00          0.00
  SQL*Net message from client                    34        0.21          0.55
  SQL*Net more data to client                    34        0.00          0.00
Tom Kyte
April 09, 2008 - 2:56 pm UTC

got row sources??

tkprof difference between instances

Brian B, April 10, 2008 - 9:22 am UTC

"got row sources?"

Not sure what you're looking for. Sorry if I was "too" concise.

SQL? Tables involved? row counts? something else in the tkprof?

Since the previous post we've observed the same behavior in another test RAC environment... we suspect some difference with how RAC performs the fetch as opposed to single-instance.
Tom Kyte
April 10, 2008 - 11:28 am UTC

the row source report from the tkprof - that shows the actual work performed by each step of the plan

select *
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          3          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96
<b>
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=80 us)
</b>

tkprof difference between instances

Brian B, April 10, 2008 - 12:45 pm UTC

No Row Source sections were included for this query in the tkprof - wait events are the next thing. An earlier (unrelated?) query in the tkprof (select default$ from col$ where rowid=:1) has Row Source Operations, but nothing after the SQL of interest.

Oracle 9.2.0.6.0 on Solaris
The trace was a 10046 level 12


Tom Kyte
April 10, 2008 - 1:19 pm UTC

need to get it - the row source data in 10g and before is emitted into the trace file when the cursor is closed, so we must start tracing, run queries, CLOSE THE CURSORS (exiting the app does this nicely) and then end tracing.
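A minimal sketch of that sequence in SQL*Plus (the table and query here are illustrative; the point is that the row source STAT lines only reach the trace file once the cursor closes):

```sql
alter session set events '10046 trace name context forever, level 12';

-- run the statements you want row source data for
select count(*) from my_table;

-- close the cursors: in SQL*Plus, simply disconnecting does it,
-- and only at cursor close are the row source (STAT) lines written
disconnect
```

Then run tkprof against the resulting trace file.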

I'm going to guess:

in instance 1 - there are no concurrent users, low LIO's
in instance 2 - there are a lot, and they are updating the data you are reading.

consider:

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x refcursor
ops$tkyte%ORA10GR2> variable y refcursor
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> exec open :x for select * from t before_update;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec open :y for select * from t after_update;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> print x

         X
----------
         1

ops$tkyte%ORA10GR2> exec begin for i in 1 .. 500 loop update t set x = x+1; commit; end loop; end;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> print y

         X
----------
         1



tkprof will show that the second print needed to undo the changes (read consistency) and did LIO's against undo in order to do that:

SELECT * FROM T BEFORE_UPDATE

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          7          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.01          0          7          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL T (cr=7 pr=0 pw=0 time=1032 us)
********************************************************************************
SELECT * FROM T AFTER_UPDATE

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0        502          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.02          0        502          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL T (cr=502 pr=0 pw=0 time=5364 us)

tkprof difference between instances

Brian B, April 10, 2008 - 1:41 pm UTC

Thanks!

It sounds like ours is the case of "busy system" vs. "idle system" rather than RAC vs. Single-Instance.

Extra logical I/O due to undo is what I suspected but couldn't get my head around the explanation. Yours makes sense.

Thanks also for the education on how to use tkprof properly.

Expert One on One page 459 Fetch Count/rows Fetched

CG, May 02, 2008 - 8:10 am UTC

Tom,
A high fetch count to rows fetched ratio, you said in your book, would mean NO BULK FETCHES.

What if you have a select into statement that should return one row, in a row-level trigger that fires like below:
SELECT 'Y', ROWNUM
FROM
INCADM.OTM_PO_STAGING WHERE PO_NUM = :B1 AND OTM_PO_STATUS = 'R' AND
SENT_DATE IS NULL


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute   5264      0.25       3.51          0          0          0           0
Fetch     5264      1.22      22.62       5996      54755          0        4747
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10528      1.47      26.13       5996      54755          0        4747

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 75 (INCADM) (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0   COUNT
      0    TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'OTM_PO_STAGING'

5264/4747 = 1.108910891...

From trigger:
CURSOR Cur_RCOrderExists IS
SELECT 'Y', rownum
FROM incadm.otm_po_staging
WHERE po_num = v_Order_No
AND otm_po_status = 'R'
AND sent_date is NULL;
<snip>
BEGIN


v_Order_No := :old.order_no;

OPEN Cur_RCOrderExists;
FETCH Cur_RCOrderExists INTO v_RCOrder_Exists, v_rc_rownum; <snip>

Can this high ratio be ignored since it is fired for each row and should only return one row?

Even though this is a select, does EXECUTE have a value because of the ROWNUM pseudo column?
Tom Kyte
May 02, 2008 - 9:20 am UTC

... Can this high ratio be ignored since it is fired for each row and only should
return one row?
....

not in my opinion


because I hate triggers
I'd rather have that code in a transactional API
which might give us the ability to avoid slow by slow processing




and this looks suspiciously like "check to see if an order for this record exists, if not fail"

which is normally called "referential integrity"

which is something that cannot be done by your application developers correctly without using the LOCK TABLE command

which I doubt you are using.


I can tell your table has about 10 blocks - you see that single row query? 10 IO's per execution - ouch - might want to index that.
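The arithmetic behind that, in the same style as elsewhere on this page (numbers from the tkprof above):

```sql
-- query-mode gets divided by executions: about 10 consistent gets
-- per execution, consistent with a roughly 10 block table being
-- full scanned on every trigger firing
select 54755/5264 from dual;
```

Roughly 10.4 logical I/Os per execution.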


tkprof and sys

Peter Tran, May 03, 2008 - 1:37 pm UTC

Hi Tom,

First, thanks for reviewing this question. I'm running some 10046 trace sessions and noticed a few SQLs where the parsing user id is "SYS". For example,

select /*+ all_rows */ count(1)
from
"TUNE8"."PM_UI_PRICE_RECORD" where "NEW_RECORD_ID" = :1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute    277      0.02       0.01          0          0          0           0
Fetch      277    146.59     143.77          0    5288761          0         277
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      556    146.61     143.79          0    5288761          0         277

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)

I can't find where these SQLs are being used in my application, so I'm a little concerned that the SQLs are being invoked a lot and the column they're querying against doesn't have the appropriate indexes.

Why is the SYS user running these SQLs?

Thanks!
-Peter

Found it!

Peter Tran, May 03, 2008 - 4:32 pm UTC

Hi Tom,

I found the issue with my earlier post. We had some FKs against this table. When we deleted a record from the parent table, Oracle ran a select count(1) to validate the FK constraint automatically under SYS. Without the appropriate indexes, the performance was terrible.

Thanks,
-Peter
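For anyone hitting the same thing: the usual fix is an index on the foreign key column of the child table, so the recursive constraint check can range scan instead of full scanning on every parent delete. A sketch using the names from the recursive SQL above (the index name is made up):

```sql
-- supports the recursive check
--   select count(1) from "TUNE8"."PM_UI_PRICE_RECORD"
--    where "NEW_RECORD_ID" = :1
-- fired for each delete against the parent table
create index pm_ui_price_record_fk_idx
    on tune8.pm_ui_price_record ( new_record_id );
```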

Thanks for the wisdom

CG, May 06, 2008 - 4:03 pm UTC

Despite the fact that I need a foreign key to check if a row exists,
would it have been "faster" to write to a collection with a row-level trigger (no processing) and then do the processing with a statement-level trigger?

Why do you hate triggers? Can you post a link to a thread?
Tom Kyte
May 07, 2008 - 1:06 am UTC

did you see the bit about "you'd need the lock table" - how concurrent and fast do you think that would be??

and if you 'cached' the data in a collection, how multi-user friendly would you be?


this is why I hate triggers: you are not thinking "multi-user" at all. Many triggers that work great when you are the ONLY USER in the database fall apart in real life - you have to think about concurrent users and what happens when many people access/modify the same data.


http://www.google.com/search?q=site%3Atkyte.blogspot.com+i+hate+triggers

A reader, May 06, 2008 - 4:15 pm UTC

We are running a procedure from a session and I am tracing that session. It is still running, but in the meantime I ran tkprof and generated the output text file. The trace file is nearly 200 MB so far. What is the best way for me to go through the output and determine where the bottlenecks are?
Tom Kyte
May 07, 2008 - 1:09 am UTC

use tkprof to sort the most expensive queries to the top (by cpu, by logical ios, by elapsed time)

A reader, May 07, 2008 - 10:32 am UTC

I used tkprof to generate the output. But how do I bring the most expensive queries to the top ?
Tom Kyte
May 08, 2008 - 3:05 am UTC

i said to use tkprof to sort it......

tkprof has command line options - they control the order of the sql in the report.... have you looked at them???


http://docs.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#i4537
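For example (file names here are illustrative; the command can be run at the OS prompt, or via host from SQL*Plus):

```sql
-- fchela/exeela/prsela sort by elapsed time of fetch/execute/parse;
-- use fchcpu/execpu/prscpu to sort by cpu, fchqry/exeqry by logical I/O
host tkprof mydb_ora_12345.trc report.txt sort=fchela,exeela,prsela
```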

I understand your point about high MISUSE..

CG, May 07, 2008 - 10:36 am UTC

....BUT.

What do you do if you are not trying to do referential integrity? If I have an FK that references the parent table, it won't help me know if a record exists in the child table. I still have to do SOMETHING to find that out, correct?

For example:
SELECT 'Y'
FROM
INCADM.PO_STAGING WHERE ORDER_NO = :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute    518      0.00       0.02          0          0          0           0
Fetch      518     52.00    1249.14     362658     476560          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1036     52.00    1249.16     362658     476560          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 38 (RMS) (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0   INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'PO_STAGING_IND1' (NON-UNIQUE)

Would it be better to do this in the trigger instead:

merge into po_staging pstg
using ( select :new.order_no order_no from dual ) ord
on ( pstg.order_no = ord.order_no )
when matched then update set pstg.create_date = sysdate,
    pstg.status = :new.status, pstg.db_user = vUserID,
    pstg.po_ind = 'P'
when not matched then insert ( pstg.order_no, pstg.create_date, pstg.db_user, pstg.status, pstg.po_ind )
values ( ord.order_no, sysdate, vUserID, :new.status, 'P' );

Tom Kyte
May 08, 2008 - 3:07 am UTC

... What do you do if you are not trying to do referential integrity? If I have a
fk that references parent table, it wont help me know if a record exists in
child table. I still have do SOMETHING to find that out correct?
...

eh? that makes *no sense* to me.


if you are not trying to do RI.

but yet you have a foreign key (hence you are doing RI?)

I don't know what you are trying to say here.


why not, instead of posting code, you post precisely (stated as a functional requirement) what you are trying to accomplish.

then, maybe we can suggest something.

What my requirements are....

CG, May 08, 2008 - 10:59 am UTC

I have a 'staging' table that holds orders that need to be sent to another database ( Oracle Transportation Management ).

When a Purchase Order is modified for certain columns in the order table, we need to CHECK whether that purchase order has already been sent to the staging table: if it has, we will update; if not, we will insert.

In order to perform that check, we run the code I posted above. So my question was: in the trigger, would it be wiser to use a MERGE statement versus doing a lookup to see if it exists and then performing logic based on that?
Which is why I posted the stats for my row-level trigger.

Sorry for the confusion.
Tom Kyte
May 12, 2008 - 9:35 am UTC

it would be wiser to NOT use the trigger - but to do things "simply", "concisely" and in the original transaction (if you ask me)


use any approach you want. I would concentrate on fixing this:


SELECT 'Y' 
FROM
 INCADM.PO_STAGING WHERE ORDER_NO = :B1 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute    518      0.00       0.02          0          0          0           0
Fetch      518     52.00    1249.14     362658     476560          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1036     52.00    1249.16     362658     476560          0           0


that is ALMOST 1,000 IO's PER EXECUTION

for a simple 'where order-no = ?'

seems like you might just be missing an index there - like a UNIQUE index on order_no. That might tend to improve your

a) merge if that is what you do

b) update/if sql%rowcount = 0 then insert, if that is what you do

c) insert/ exception when dup_val_on_index then update, if that is what you do


DO NOT however

1) count rows
2) update if 1
3) insert if 0


Your only sensible choices are

a) merge
b) update, insert if nothing to update
c) insert, update if insert failed due to duplicates



but you need that primary key on order_no in your staging table.
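Sketches of options (b) and (c), assuming a unique index on ORDER_NO; the column list is illustrative, based on the trigger posted above:

```sql
-- (b) update first, insert only if nothing was updated
begin
    update po_staging
       set create_date = sysdate, status = :new.status, po_ind = 'P'
     where order_no = :new.order_no;
    if ( sql%rowcount = 0 )
    then
        insert into po_staging ( order_no, create_date, status, po_ind )
        values ( :new.order_no, sysdate, :new.status, 'P' );
    end if;
end;
/

-- (c) insert first, update if the insert hits the unique index
begin
    insert into po_staging ( order_no, create_date, status, po_ind )
    values ( :new.order_no, sysdate, :new.status, 'P' );
exception
    when dup_val_on_index then
        update po_staging
           set create_date = sysdate, status = :new.status, po_ind = 'P'
         where order_no = :new.order_no;
end;
/
```

Either way the unique index on order_no does the existence check for you, in one round trip.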

understanding o/p of tkprof

Reene, May 09, 2008 - 2:12 am UTC

Hi Tom,

below is the output of tkprof for a SQL query - my question is - by looking at the below output, how can I find out:

1) which part or step of the below output is taking the most time and needs to be tuned?

2) does "time" represent the time taken by a specific operation?

for e.g.

(cr=219 r=216 w=0 time=179331 us)

does this mean that this particular operation has done 219 LIOs, 216 PIOs and 0 writes, and took 179331/1000000 seconds?

Kindly explain.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.53       0.51          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    11005     12.06      57.00      18580     384465          0       11004
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    11007     12.59      57.52      18580     384465          0       11004

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 65 (APPS) (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
  11004  NESTED LOOPS (cr=318156 r=18439 w=0 time=49380003 us)
  11004   NESTED LOOPS (cr=274140 r=17110 w=0 time=35171993 us)
   8118    NESTED LOOPS (cr=228311 r=15957 w=0 time=21974009 us)
   8118     NESTED LOOPS (cr=212615 r=15957 w=0 time=21903314 us)
   8118      NESTED LOOPS (cr=180683 r=15571 w=0 time=17070137 us)
   8118       NESTED LOOPS (cr=156869 r=15571 w=0 time=16953701 us)
   7752        NESTED LOOPS (cr=117786 r=14995 w=0 time=10752121 us)
   7749         NESTED LOOPS (cr=79221 r=14995 w=0 time=10432172 us)
   5000          NESTED LOOPS (cr=49074 r=14995 w=0 time=10144879 us)
   5000           NESTED LOOPS (cr=34359 r=14995 w=0 time=10065843 us)
   5000            HASH JOIN (cr=19666 r=14995 w=0 time=9949606 us)
   5000             HASH JOIN (cr=1455 r=216 w=0 time=752218 us)
  30477              TABLE ACCESS FULL PER_ALL_PEOPLE_F (cr=1236 r=0 w=0 time=513999 us)
   5000              VIEW (cr=219 r=216 w=0 time=181055 us)
   5000               COUNT STOPKEY (cr=219 r=216 w=0 time=179331 us)
   5000                TABLE ACCESS FULL T1 (cr=219 r=216 w=0 time=177895 us)
 265974             TABLE ACCESS FULL PO_VENDOR_SITES_ALL (cr=18211 r=14779 w=0 time=8987264 us)
   5000            TABLE ACCESS BY INDEX ROWID PO_VENDORS (cr=14693 r=0 w=0 time=98778 us)
   5000             INDEX UNIQUE SCAN PO_VENDORS_U1 (cr=9692 r=0 w=0 time=58807 us)(object id 1002545)
   5000           TABLE ACCESS BY INDEX ROWID HR_LOCATIONS_ALL (cr=14715 r=0 w=0 time=64539 us)
   5000            INDEX UNIQUE SCAN HR_LOCATIONS_PK (cr=9692 r=0 w=0 time=39770 us)(object id 686985)
   7749          TABLE ACCESS BY INDEX ROWID PO_LINES_ALL (cr=30147 r=0 w=0 time=266380 us)
   7749           INDEX RANGE SCAN PO_LINES_U2 (cr=21911 r=0 w=0 time=202387 us)(object id 10182)
   7752         TABLE ACCESS BY INDEX ROWID PO_LINE_LOCATIONS_ALL (cr=38565 r=0 w=0 time=289958 us)
   7752          INDEX RANGE SCAN PO_LINE_LOCATIONS_N2 (cr=29931 r=0 w=0 time=221780 us)(object id 10194)
   8118        TABLE ACCESS BY INDEX ROWID PO_DISTRIBUTIONS_ALL (cr=39083 r=576 w=0 time=6176221 us)
   8118         INDEX RANGE SCAN PO_DISTRIBUTIONS_N1 (cr=30303 r=195 w=0 time=2272688 us)(object id 10127)
   8118       TABLE ACCESS BY INDEX ROWID MTL_CATEGORIES_B (cr=23814 r=0 w=0 time=94963 us)
   8118        INDEX UNIQUE SCAN MTL_CATEGORIES_B_U1 (cr=15696 r=0 w=0 time=53561 us)(object id 9632)
   8118      TABLE ACCESS BY INDEX ROWID GL_CODE_COMBINATIONS (cr=31932 r=386 w=0 time=4812870 us)
   8118       INDEX UNIQUE SCAN GL_CODE_COMBINATIONS_U1 (cr=23814 r=125 w=0 time=1680286 us)(object id 51426)
   8118     INDEX UNIQUE SCAN MTL_CATEGORIES_TL_U1 (cr=15696 r=0 w=0 time=51819 us)(object id 684692)
  11004    TABLE ACCESS BY INDEX ROWID AP_INVOICE_DISTRIBUTIONS_ALL (cr=45829 r=1153 w=0 time=13168979 us)
  11004     INDEX RANGE SCAN AP_INVOICE_DISTRIBUTIONS_N7 (cr=34825 r=151 w=0 time=2093259 us)(object id 7199)
  11004   TABLE ACCESS BY INDEX ROWID AP_INVOICES_ALL (cr=44016 r=1329 w=0 time=14168607 us)
  11004    INDEX UNIQUE SCAN AP_INVOICES_U1 (cr=33012 r=318 w=0 time=3600317 us)(object id 7168)


Tom Kyte
May 12, 2008 - 10:11 am UTC

the time is cumulative, you have to subtract prior steps to see how long an individual step consumed.
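For example, the top NESTED LOOPS above reports time=49380003 us, and its two children report 35171993 us and 14168607 us, so the join step itself accounts for only:

```sql
-- parent time includes its children; subtract them to get the
-- step's own contribution (microseconds, from the plan above)
select 49380003 - (35171993 + 14168607) own_time_us from dual;
```

About 39,403 us - nearly all of the elapsed time is in the steps underneath it.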

You'd want to know your wait events - you have a lot of time spent waiting, probably for physical IO.


ops$tkyte%ORA9IR2> select (57.52-12.59)/18580 from dual;

(57.52-12.59)/18580
-------------------
         .002418192


IO speed seems reasonable, you just do a ton of it.

how to know this

Reene, May 12, 2008 - 11:00 am UTC

Hi Tom

thanks for your answer.

the other question I have is -

by looking at the above tkprof - how do I make out where the problem is?

is it possible to do by reading a tkprof?
i have gone through your book Expert One-on-One and Cary Millsap's book as well, also searched Metalink.
could not find a pointer which elaborates on this.

kindly suggest.
Tom Kyte
May 12, 2008 - 1:49 pm UTC

well, there does not seem to be "a problem", short of the fact you are doing lots of physical IO and while each one is really fast - when you do thousands of anything, it adds up.

not for the specific query for sake of understanding

Reene, May 13, 2008 - 1:56 am UTC

Hi Tom

Thanks again for explaining.

what I really want to understand is -

how to find which step is wrong with a query, or how to tune a query (rather, what to tune - what should be looked at first in a query which is fairly complex, having too many tables, too many joins, subqueries and so on) using the tkprof for that query.

in your book - you have given some metrics about tkprof, such as elapsed time being much higher than cpu time, and many other things.

what is not there (or in any other documents I have referred to so far) is - how to troubleshoot a query using tkprof - is this a valid thing to ask?

as i am part of performance tuning work - all we do is take the statspack and tkprof (for isolated queries) and try to tune. I learn the application as well before tuning the query - but i am not able to use tkprof effectively.

the main issues are:

1. how to say that this step is the bottleneck.
2. or hey, this should not be a hash join - it should be nested loops (even though i understand nested loops and hash joins)...
3. or this subquery should have been merged with the main query....

please share your thoughts, if possible.
even if you could just point to a good document, that would be a great help.

Regards
Tom Kyte
May 13, 2008 - 10:39 am UTC

...
how to find that which step or what is wrong with a query or how to tune a
query (rather waht to tune ,what should be looked at first in a query -which is
farily complext,having too many table,too many joins ,subqueries and so on)
using the tkprof for that query.
.....

my approach - forget the query.

go back to the QUESTION
go back to the ALGORITHM

look - I can use tkprof to find "problems", where I am spending my time - but the optimizer is really typically pretty good at tuning the query plan. What it cannot do is rewrite your query from the ground up (using the QUESTION, just write a NEW query, do not get clouded by the OLD query), it cannot fix your slow by slow processing. Consider this tkprof:

SELECT *
FROM
 BIG_TABLE.BIG_TABLE WHERE ID = :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 1000000    114.68     112.24          0          0          0           0
Fetch   1000000     32.65      31.13      14036    4000000          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   2000001    147.34     143.37      14036    4000000          0     1000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 293     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
1000000  TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=4000000 pr=14036 pw=0 time=25334507 us)
1000000   INDEX UNIQUE SCAN BIG_TABLE_PK (cr=3000000 pr=0 pw=0 time=13047414 us)(object id 145965)


No amount of SQL tuning will fix that, you need to go back to the application and say "what the HECK are you trying to accomplish".

You do appear to have my book Effective Oracle by Design based on some of your comments - take a look at the Effective SQL chapter - it starts with:

<quote>
Effective SQL

This was probably the hardest chapter of the book to write. That is not because the material is all that complex. It's because I know what people want, and I know what can be delivered. What people want is the ten-step process for tuning any query. What can be delivered is knowledge about how queries are processed, which you can use and apply as you develop queries.

If there were a ten-step, or even a million-step, process by which any query (or a large percentage of queries) could be tuned, we would write a program to do it. Sure, there are actually many programs that try to do this, such as Oracle Enterprise Manager with its tuning pack, SQL Navigator, and others. What they do is primarily recommend indexing schemes to tune a query, suggest materialized views, and offer to add hints to the query to try other access plans. They show you different query plans for the same statement and allow you to pick one. These tuning tools use a very limited set of rules that sometimes can suggest that index or set of indexes you really should have thought of during your design. They offer "rule of thumb" (what I generally call ROT, since the acronym and the word it maps to are so appropriate for each other) SQL optimizations. If these were universally applicable, the optimizer would do them routinely.

In fact, the CBO does tuning already. It rewrites our queries all of the time. If there were an N-step process to tuning a query - to writing efficient SQL - the optimizer would incorporate it all, and we would not be having a discussion on this topic. It is like the search for the Holy Grail. Maybe someday the software will be sophisticated enough to take our SQL, understand the question being asked, and process the question rather than syntax.

This chapter will provide the foundation knowledge you need in order to begin thinking about how queries could be processed. Once you understand query processing, you can make the query do what you think might be best. Writing efficient SQL is no different from writing efficient C, Java, or even English. It takes an understanding of how things work and what is available. If you don't know some technique is available, you'll never be able to optimize to use it.
</quote>

thanks ,one last question

Reene, May 13, 2008 - 11:34 am UTC

Hi Tom

thanks for taking time again to answer the question.

yes, i do have all of your books, along with the one you have recommended.

Effective SQL - yes i have gone through it and i guess understood it up to a certain extent; will go through it again.

here would be my last question - you said
"look - I can use tkprof to find "problems", where I am spending my time" -- this is exactly what i want to learn - it seems like black magic sometimes for me, when someone looks at a tkprof and pinpoints the steps (and they are right more often than not) - i guess they know it better than me.

please share - how would you use tkprof to find "problems", where you are spending your time?

even a few things from you would make things a lot clearer and i will try to build on them.

sorry for being persistent - but this is something which has been beating me for a very long time..

Thanks
Tom Kyte
May 13, 2008 - 12:47 pm UTC

what I meant by "I can use tkprof to find "problems", where I am spending my time" was the example above.


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 1000000    114.68     112.24          0          0          0           0
Fetch   1000000     32.65      31.13      14036    4000000          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   2000001    147.34     143.37      14036    4000000          0     1000000



Ok, I FOUND where I am spending my time - executing that silly query:

SELECT *
FROM
 BIG_TABLE.BIG_TABLE WHERE ID = :B1



Now, I'd like to spend less time executing that query.

There is nothing to tune in the query, look at it, it is a primary key read there. I cannot make that go faster (hey, maybe I could put it into an index organized table, reduce the IO's by 25%, but that would affect tons of other things - and it would get me a 25% reduction in resource usage only on the retrieval - modifications would increase in runtime/resources).

I can only look at the application. In this case, the application was:


begin
    for i in 1 .. 1000000
    loop
        for x in (select * from big_table.big_table where id = i)
        loop
            null;
        end loop;
    end loop;
end;


I need to fix that code, it is slow by slow - the logic is just warped (this is my point, tkprof - points me to a query, query points me to application, most benefits come from the application)
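For this particular loop, the set-based rewrite is just one statement (a sketch - what you select, and what you do with it, depends on what the loop was really for):

```sql
-- one statement instead of 1,000,000 single-row index lookups;
-- the database is then free to full scan the table with large
-- multiblock reads rather than 4 consistent gets per row
select *
  from big_table.big_table
 where id between 1 and 1000000;
```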


In the event it is a single execution of a single query taking a long time - I look at the QUESTION (forget the plan, unless it is something obvious like:


select *
from
 big_table where object_name like '/%' and owner = 'SCOTT'


Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=286160 pr=0 pw=0 time=1754264 us)
 455200   INDEX RANGE SCAN BIG_TABLE_IDX (cr=2449 pr=0 pw=0 time=461658 us)(object id 158065)




see how there were 455,200 rows in the index - but zero rows in the table - I had an index on OBJECT_NAME, I needed an index on (OWNER,OBJECT_NAME) - when I have that:

select *
from
 big_table where object_name like '/%' and owner = 'SCOTT'


Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=3 pr=16 pw=0 time=713 us)
      0   INDEX RANGE SCAN BIG_TABLE_IDX (cr=3 pr=16 pw=0 time=694 us)(object id 158066)




huge difference.


Otherwise, I look at a big plan and say "isn't that pretty, now, tell me - what was the question you were trying to ask of the database - then I'll use my knowledge of inline views, with subquery factoring, analytics - thinking in SETS - coming up with the right approach"


I don't really tune queries, I often laugh when people throw up a 15 page query with a 30 page row source execution and say "please help me tune this" - it doesn't work that way.

I want to know the question
the schema


I already know SQL backwards and forwards, I know what I have *access to*, I know what it can do, I can conceptualize how it will be processed - I use that.


I need the question (you need the question), not someone else's inefficient answer that may or may not actually answer the question.

Great Answer

CG, May 13, 2008 - 4:43 pm UTC

Even helped answer my question.

Made me want to go look up IOT in your book again.

Want to know how mods are affected from them.

That was a post you should use on a BEFORE YOU POST ABOUT STATS Question!

Thanks

A reader, May 14, 2008 - 7:37 am UTC

Tom,

Thanks for explaining it in such detail. I went through Effective SQL again today and it really does answer many of the questions i was asking here.

together, this post + the tkprof chapter in your book Expert One-on-One + Effective SQL can answer most of the questions which come up during tuning.

then i had asked a question before which you answered in one of your blogs - that was about why explain plan differs from the row source execution.

then i also had many posts about hints, and your answers were really helpful; most of the time i realized my mistakes..

it has been a great learning path here.

about SQL now - i guess it requires a lot of effort, a lot of practice, but I still can't find a single source of all about good SQL - will that be a consideration in your next books, if any?

or would you please point to the best source - other than the SQL Reference manual on the OTN site... because that manual does not have any good or easy to understand examples :)

too much to ask, but really we are all grateful to you for such help.

Thanks so much, really.

Tullio, June 01, 2008 - 4:47 am UTC

My question is: looking at the totals in tkprof I see NON-RECURSIVE and RECURSIVE statements. What's the difference?
How can I find the NON-RECURSIVE statements in the tkprof body?
Tks

Tom Kyte
June 02, 2008 - 8:10 am UTC

recursive sql is sql executed during the execution of your sql (recursively invoked as a side effect of executing your sql in the first place)


It might be your sql (see example first below), it might be sql Oracle executes to execute your SQL (see second example below)

<b>

create or replace function foo return number
as
    l_n number;
begin
    select count(*) into l_n from dual recursive;
    return l_n;
end;
/

alter session set events '10046 trace name context forever, level 12';
select foo from dual non_recursive;
</b>



select foo from dual non_recursive

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 311

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  FAST DUAL  (cr=0 pr=0 pw=0 time=7 us)
********************************************************************************
SELECT COUNT(*) FROM DUAL RECURSIVE

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS<b>
Parsing user id: 311     (recursive depth: 1)
</b>
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=0 pr=0 pw=0 time=27 us)
      1   FAST DUAL  (cr=0 pr=0 pw=0 time=5 us)




and here is an example of us doing it. A sequence needs to be updated every <cache> times it is hit - default <cache> = 20 - every 20 times, we update seq$ for you to maintain that sequence...


<b>
create sequence s;


alter session set events '10046 trace name context forever, level 12';
select s.nextval from all_users;
</b>

select s.nextval from all_users

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0        116          2          40
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.02       0.02          0        116          2          40

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 311
********************************************************************************

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          2          4           2
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          4           2

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE<b>
Parsing user id: SYS   (recursive depth: 1)
</b>
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=269 us)
      1   INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=20 us)(object id 102)
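As a back-of-envelope check on the seq$ activity above: with a cache of 20, fetching N values forces roughly ceil(N/20) updates of seq$. A minimal sketch (the helper name is made up for illustration, it is not an Oracle API):

```python
import math

def seq_updates(nextvals_fetched, cache_size=20):
    # Every time the cache runs dry, Oracle writes a new highwater
    # mark to seq$; a fresh sequence starts with an empty cache, so
    # the first nextval triggers the first update.
    return math.ceil(nextvals_fetched / cache_size)

# The trace above fetched s.nextval for the 40 rows of all_users with
# the default cache of 20 - hence the Execute count of 2 on seq$.
print(seq_updates(40))  # 2
```

The same arithmetic explains why raising the CACHE value reduces recursive SQL against seq$ for sequence-heavy loads.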



Row Source Operation

CG, June 10, 2008 - 4:00 pm UTC

Tom,
Was trying to remove full table scans on item_master and ordhead.
The other full table scans are appropriate I feel.

I've changed db_file_multiblock_read_count from 32 to 64 with no change. I've tried changing optimizer_index_caching to 50 with no change.

There is an index on the item column in item_master,
but no non-unique index on status or loc_type.
loc_type has a cardinality of 2 across all its values.
Do you have any suggestions? A link to more documentation?
I have all three of your books.

SELECT TO_CHAR(oh_view.otb_eow_date,'YYYY'),
substr(nrc.WEEK_ID,2,2),
im.dept,
ad.to_loc,
oh_view.location,
ol.location ordloc_loc,
oh_view.order_type,
(SUM((NVL(ad.qty_allocated,0) - NVL(ad.QTY_RECEIVED,0))) -
(NVL(ol.qty_ordered,0) - NVL(ol.QTY_RECEIVED,0))) on_order_qty
FROM (( SELECT oh2.otb_eow_date,
oh2.order_no,oh2.location,
oh2.loc_type,
oh2.order_type
from ordhead oh2
WHERE status = 'A'
and loc_type = 'W' )) oh_view,
ordloc ol,
item_master im,
incadm.mfp_deps,
alloc_header ah,
alloc_detail ad,
incadm.mfp_locations ml,
incadm.mfp_system_variables msv,
incadm.nex_rdf_calendar nrc
WHERE im.item = ol.item
and im.dept = mfp_deps.dept
and oh_view.order_no = ah.order_no
and ah.alloc_no = ad.alloc_no
and ad.to_loc = ml.LOCATION
and oh_view.order_no = ol.order_no
and ((oh_view.otb_eow_date <= msv.next_eow_date) --starting week
or (oh_view.otb_eow_date > msv.next_eow_date)--Future start range
and (oh_view.OTB_EOW_DATE <= (msv.next_eow_date + 365))) --Future end range
and oh_view.otb_eow_date = nrc.date_id
and rownum between 1 and 5
group
by oh_view.otb_eow_date,nrc.week_id,im.dept,ad.to_loc,oh_view.location,ol.location,oh_view.order_type,ol.qty_ordered, ol.qty_received

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.41 0.61 79 547 3 0
Execute 1 0.03 0.50 81 85 0 0
Fetch 2 20.27 352.75 128361 121540 2 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 20.71 353.86 128521 122172 5 4

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 75

Rows Row Source Operation
------- ---------------------------------------------------
4 SORT GROUP BY
5 COUNT STOPKEY
5 FILTER
0 HASH JOIN
0 TABLE ACCESS FULL NEX_RDF_CALENDAR
0 HASH JOIN
0 TABLE ACCESS FULL MFP_LOCATIONS
0 HASH JOIN
0 TABLE ACCESS FULL MFP_DEPS
0 HASH JOIN
0 INDEX FAST FULL SCAN ALLOC_DETAIL_I1 (object id 359289)
0 HASH JOIN
0 HASH JOIN
254087 HASH JOIN
37588 TABLE ACCESS FULL ORDHEAD
3729575 MERGE JOIN CARTESIAN
1 TABLE ACCESS FULL MFP_SYSTEM_VARIABLES
3729575 BUFFER SORT
3729575 INDEX FAST FULL SCAN ORDLOC_I3 (object id 359291)
5065709 TABLE ACCESS FULL ITEM_MASTER
0 INDEX FAST FULL SCAN ALLOC_HEADER_I6 (object id 359288)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 116 0.04 0.41
db file parallel read 2 0.06 0.09
process startup 7 0.02 0.13
PX Deq: Join ACK 3 0.00 0.00
PX Deq: Parse Reply 5 0.02 0.04
SQL*Net message to client 2 0.00 0.00
PX Deq: Execute Reply 439 1.98 207.63
db file scattered read 4035 0.26 122.75
direct path write 20 0.17 0.17
direct path read 87 0.00 0.00
PX Deq Credit: need buffer 29 0.00 0.00
PX Deq Credit: send blkd 31 1.82 3.69
PX qref latch 20 0.01 0.15
PX Deq: Table Q Normal 1 0.00 0.00
PX Deq: Signal ACK 4 0.09 0.09
SQL*Net message from client 2 255.90 255.92
enqueue 2 0.00 0.00
********************************************************************************
Tom Kyte
June 11, 2008 - 7:24 am UTC

would need to see an explain plan to go with that, in order to see if the estimated cardinalities are close to reality.

Here's the explain plan you requested

A reader, June 11, 2008 - 8:18 am UTC

SQL> alter session set db_file_multiblock_read_count = 64
 2  ;

Session altered.

SQL> @mfp.sql

TO_C SUBSTR       DEPT     TO_LOC   LOCATION ORDLOC_LOC ORD ON_ORDER_QTY
---- ------ ---------- ---------- ---------- ---------- --- ------------
2004 45            941        440       9940       9940 ARB            6
2004 45            941        440       9940       9940 ARB           36


Execution Plan
----------------------------------------------------------
  0      SELECT STATEMENT Optimizer=CHOOSE (Cost=78199 Card=5 Bytes=6
         35)

  1    0   SORT (GROUP BY) (Cost=78199 Card=5 Bytes=635)
  2    1     COUNT (STOPKEY)
  3    2       FILTER
  4    3         HASH JOIN* (Cost=7219 Card=7591226 Bytes=964085702)  :Q116615
                                                                      3007

  5    4           TABLE ACCESS* (FULL) OF 'NEX_RDF_CALENDAR' (Cost=4 :Q116615
          Card=2892 Bytes=49164)                                      3001

  6    4           HASH JOIN* (Cost=7209 Card=7594019 Bytes=835342090 :Q116615
         )                                                            3007

  7    6             TABLE ACCESS* (FULL) OF 'MFP_LOCATIONS' (Cost=4  :Q116615
         Card=452 Bytes=1808)                                         3002

  8    6             HASH JOIN* (Cost=7199 Card=7594019 Bytes=8049660 :Q116615
         14)                                                          3007

  9    8               TABLE ACCESS* (FULL) OF 'MFP_DEPS' (Cost=4 Car :Q116615
         d=196 Bytes=784)                                             3003

 10    8               HASH JOIN* (Cost=7189 Card=7322804 Bytes=74692 :Q116615
         6008)                                                        3007

 11   10                 INDEX* (FAST FULL SCAN) OF 'ALLOC_DETAIL_I1' :Q116615
          (NON-UNIQUE) (Cost=504 Card=5364520 Bytes=85832320)         3005
12   10                 HASH JOIN* (Cost=4813 Card=1597988 Bytes=137 :Q116615
         426968)                                                      3006

 13   12                   HASH JOIN* (Cost=4602 Card=54217 Bytes=395 :Q116615
         7841)                                                        3000

 14   13                     HASH JOIN (Cost=1185 Card=54217 Bytes=31
         44586)

 15   14                       TABLE ACCESS (FULL) OF 'ORDHEAD' (Cost
         =544 Card=40553 Bytes=1054378)

 16   14                       MERGE JOIN (CARTESIAN) (Cost=550 Card=
         3737680 Bytes=119605760)

 17   16                         TABLE ACCESS (FULL) OF 'MFP_SYSTEM_V
         ARIABLES' (Cost=4 Card=1 Bytes=8)

 18   16                         BUFFER (SORT) (Cost=546 Card=3737680
          Bytes=89704320)

 19   18                           INDEX (FAST FULL SCAN) OF 'ORDLOC_
         I3' (NON-UNIQUE) (Cost=546 Card=3737680 Bytes=89704320)

 20   13                     TABLE ACCESS (FULL) OF 'ITEM_MASTER' (Co
         st=3229 Card=5067770 Bytes=76016550)

 21   12                   INDEX* (FAST FULL SCAN) OF 'ALLOC_HEADER_I :Q116615
         6' (NON-UNIQUE) (Cost=181 Card=1170650 Bytes=15218450)       3004



  4 PARALLEL_TO_SERIAL            SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) SW
                                  AP_JOIN_INPUTS(A2) */ A1.C0,A2.C1,A1

  5 PARALLEL_FROM_SERIAL
  6 PARALLEL_COMBINED_WITH_PARENT
  7 PARALLEL_FROM_SERIAL
  8 PARALLEL_COMBINED_WITH_PARENT
  9 PARALLEL_FROM_SERIAL
 10 PARALLEL_COMBINED_WITH_PARENT
 11 PARALLEL_TO_PARALLEL          SELECT /*+ INDEX_RRS(A1 "ALLOC_DETAIL_I1") *
                                  / A1."ALLOC_NO" C0,A1."QTY_RECEIVED"

 12 PARALLEL_TO_PARALLEL          SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) */
                                   A2.C1 C0,A1.C0 C1,A1.C1 C2,A1.C2 C3

 13 PARALLEL_FROM_SERIAL
 21 PARALLEL_TO_PARALLEL          SELECT /*+ INDEX_RRS(A1 "ALLOC_HEADER_I6") *
                                  / A1."ORDER_NO" C0,A1."ALLOC_NO" C1
SQL> set autotrace on statistics
SQL> show parameter optimizer_index_caching

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_index_caching              integer     0
SQL> alter session set optimizer_index_caching = 50;

Session altered.

SQL> @mfp.sql

TO_C SUBSTR       DEPT     TO_LOC   LOCATION ORDLOC_LOC ORD ON_ORDER_QTY
---- ------ ---------- ---------- ---------- ---------- --- ------------
2004 45            863        441       9940       9940 ARB           16
2004 45            863        441       9940       9940 ARB           15
2004 45            863        441       9940       9940 ARB            0

Statistics
----------------------------------------------------------
         6  recursive calls
         2  db block gets
    139453  consistent gets
    176528  physical reads
         0  redo size
      1109  bytes sent via SQL*Net to client
       656  bytes received via SQL*Net from client
         2  SQL*Net roundtrips to/from client
        19  sorts (memory)
         1  sorts (disk)
         3  rows processed

Tom Kyte
June 11, 2008 - 8:28 am UTC

Ok, now you have to answer....

using your knowledge of your data, can you think of why the optimizer was so far off in its estimate against the index fast full scan of alloc_header_i?

it got 0
it estimated 1,170,650


Well

A reader, June 11, 2008 - 4:54 pm UTC

... I thought about your question.
I analyzed alloc_header_16, then the table alloc_header.
Then the table item_master.
Traced the query with db_file_multiblock_read_count = 32 and
optimizer_index_caching = 0. Then changed them to 64 and 50 and traced that.

Still no change.

Then I thought about the fact that in my previous post the explain plan I posted was produced using
set autotrace on explain then running my @mfp.sql

don't know if that is the best way.
I'm stumped on this one since it is still estimating over 1,100,000 but returning 0.

I'll have to do more digging around. A sort_area_size issue, maybe?
Tom Kyte
June 11, 2008 - 9:03 pm UTC

sort area size, no, this is a wrong cardinality estimate.

maybe you can tell us which bits of the query use this index - we don't know that :)

reading TKPROF

A reader, August 25, 2008 - 1:28 am UTC

I have read your book Effective Oracle by Design.

From page 51, you said:

CPU The amount of CPU time spent on this phase of the statement in thousands of seconds.

And in this url

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6904032233295#19880933194143

you said that

they are in seconds, that is 1056.38 cpu seconds. divide by 60 and you have minutes.

Please clarify, Tom: is CPU expressed in seconds or in thousands of seconds?

Second question: is 1056.38 CPU seconds the same as 1056.38 seconds?

Third, in the row source operations there are the figures cr, pr, pw, and time. Are these figures collected only for the step they appear on, or are they cumulative, including their child operations?
Tom Kyte
August 26, 2008 - 8:19 pm UTC

wow, that is a typo that you are the first to catch. in a tkprof report, the cpu time is the cpu time in seconds.

it was 1056.38 cpu seconds, since that number came from the cpu column


cr, pr... times are cumulative from their child steps.
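Since the figures roll up, a step's own contribution can be recovered by subtracting its children's cumulative figures; a minimal sketch (the helper name is mine, tkprof has no such function):

```python
# The cr/pr/pw/time figures on a row source line include everything
# done by the child steps, so a step's own cost is the difference.
def self_time(cumulative_us, child_cumulative_us):
    return cumulative_us - sum(child_cumulative_us)

# From the seq$ example earlier in this thread:
#   UPDATE  SEQ$          time=269 us (cumulative)
#    INDEX UNIQUE SCAN    time=20 us  (its only child)
print(self_time(269, [20]))  # 249 us spent in the UPDATE step itself
```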

Why so many Cr for 1 rows

-Ajeet, January 12, 2009 - 2:27 am UTC

Hi Tom,

I have a query and its tkprof output is given below. (I am sure this query can be rewritten, which I will work on in the next 1-2 days.) But for now my question is:

why are there so many cr (consistent gets) to get just 1 row?

1 INDEX UNIQUE SCAN PFY_PK (cr=574560 pr=0 pw=0 time=5362445 us)(object id 110251)



TKPROF: Release 10.2.0.1.0 - Production on Sat Jan 10 14:35:31 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Trace file: C:\HealthCheck\iims2_ora_14568.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

SELECT DISTINCT
(PVR_PTY_PARTY_ID),
PFY_PARTY_FUNCTION_ID,
PVR_PARTY_VERSION_ID,
PVR_EFFECTIVE_START_DATE,
PVR_CREATED_DATE,
PVR_LAST_UPDATED_DATE,
PVR_CREATED_BY,
PVR_LAST_UPDATED_BY,
PVR_EFFECTIVE_END_DATE,
PVR_PTY_PARTY_ID_PARENT,
PVR_REGISTRATION_DATE,
PVR_SEX,
PVR_TYPE_OF_ORGANIZATION,
PVR_REGISTRATION_NUMBER,
PVR_BUSINESS_NAME,
PVR_FIRST_NAME,
PVR_MIDDLE_NAME,
DECODE (PVR_MIDDLE_NAME, :"SYS_B_0", :"SYS_B_1", PVR_MIDDLE_NAME || :"SYS_B_2")
|| PVR_SURNAME,
PVR_INITIAL,
PVR_OTHER_NAME,
PVR_CITIZENSHIP_NUMBER,
PVR_NATIONALITY,
PVR_OCCUPATION,
PTY_PARTY_CODE,
PTY_INDVORG_IND,
PTY_LAST_UPDATED_DATE,
SHR_STAKE_HOLDER_FN_CODE,
SHR_STAKE_HOLDER_FN_NAME,
PFY_SHR_STAKE_HOLDER_FN_ID
FROM T_PARTY,
T_STAKE_HOLDER_FUNCTION,
T_PARTY_FUNCTION,
T_PARTY_VERSION
WHERE PTY_PARTY_ID = NVL (NULL, PTY_PARTY_ID)
AND PTY_PARTY_ID = PVR_PTY_PARTY_ID
AND PFY_PARTY_FUNCTION_ID = NVL (:"SYS_B_3", PFY_PARTY_FUNCTION_ID)
AND PTY_INDVORG_IND = NVL (:"SYS_B_4", PTY_INDVORG_IND)
AND PTY_PARTY_CODE LIKE UPPER (NVL (NULL, :"SYS_B_5"))
AND SHR_STAKE_HOLDER_FN_CODE = :"SYS_B_6"
AND SHR_STAKE_HOLDER_FN_ID = PFY_SHR_STAKE_HOLDER_FN_ID
AND PTY_PARTY_ID = PFY_PTY_PARTY_ID
AND PTY_PARTY_STATUS = UPPER (NVL (:"SYS_B_7", PTY_PARTY_STATUS))
AND ( ( NULL IS NULL
AND NULL IS NULL
AND PVR_EFFECTIVE_END_DATE IS NULL
AND PFY_EFFECTIVE_END_DATE IS NULL)
OR ( NULL IS NOT NULL
AND NULL IS NULL
AND PVR_EFFECTIVE_START_DATE >= NULL
AND PFY_EFFECTIVE_START_DATE >= NULL
AND PVR_EFFECTIVE_END_DATE IS NULL
AND PFY_EFFECTIVE_END_DATE IS NULL))
ORDER BY :"SYS_B_8"

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 13.59 13.26 0 604632 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 13.59 13.27 0 604632 0 1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT ORDER BY (cr=604632 pr=0 pw=0 time=13268828 us)
1 HASH UNIQUE (cr=604632 pr=0 pw=0 time=13268708 us)
1 CONCATENATION (cr=604632 pr=0 pw=0 time=13267536 us)
0 FILTER (cr=0 pr=0 pw=0 time=5 us)
0 TABLE ACCESS BY INDEX ROWID T_PARTY_VERSION (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID T_STAKE_HOLDER_FUNCTION (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN SHR_SHR_U1 (cr=0 pr=0 pw=0 time=0 us)(object id 110276)
0 TABLE ACCESS BY INDEX ROWID T_PARTY (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN PTY_U1 (cr=0 pr=0 pw=0 time=0 us)(object id 110242)
0 TABLE ACCESS BY INDEX ROWID T_PARTY_FUNCTION (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN PFY_PFY_U1 (cr=0 pr=0 pw=0 time=0 us)(object id 110250)
0 INDEX RANGE SCAN IDX_PVR_PTYID_ENDDATE (cr=0 pr=0 pw=0 time=0 us)(object id 128177)
1 FILTER (cr=604632 pr=0 pw=0 time=13267503 us)
1 TABLE ACCESS BY INDEX ROWID T_PARTY_VERSION (cr=604632 pr=0 pw=0 time=13267490 us)
3 NESTED LOOPS (cr=604631 pr=0 pw=0 time=1160774 us)
1 NESTED LOOPS (cr=604628 pr=0 pw=0 time=13267009 us)
287279 NESTED LOOPS (cr=30067 pr=0 pw=0 time=6895644 us)
1 TABLE ACCESS BY INDEX ROWID T_STAKE_HOLDER_FUNCTION (cr=2 pr=0 pw=0 time=192 us)
1 INDEX UNIQUE SCAN SHR_SHR_U1 (cr=1 pr=0 pw=0 time=35 us)(object id 110276)
287279 TABLE ACCESS BY INDEX ROWID T_PARTY (cr=30065 pr=0 pw=0 time=6320893 us)
315537 INDEX RANGE SCAN PTY_U1 (cr=902 pr=0 pw=0 time=2208839 us)(object id 110242)
1 TABLE ACCESS BY INDEX ROWID T_PARTY_FUNCTION (cr=574561 pr=0 pw=0 time=6674770 us)
1 INDEX UNIQUE SCAN PFY_PK (cr=574560 pr=0 pw=0 time=5362445 us)(object id 110251)
1 INDEX RANGE SCAN IDX_PVR_PTYID_ENDDATE (cr=3 pr=0 pw=0 time=47 us)(object id 128177)


Tom Kyte
January 12, 2009 - 9:02 pm UTC

lots of index probes that returned zero rows.. eg:

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> create index t_idx on t(object_id);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> select /*+ rule */ *
  2    from t
  3   where exists ( select null from t t2 where t2.object_id = -t.object_id )
  4  /

no rows selected


so, that where exists will be done not by a subquery unnesting (the rbo didn't know how to do that) but by running the lookup in a loop - when you tkprof it:


select /*+ rule */ *
  from t
 where exists ( select null from t t2 where t2.object_id = -t.object_id )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      2.64       2.87          1     100738          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.66       2.88          1     100738          0           0

Misses in library cache during parse: 1
Optimizer mode: RULE
Parsing user id: 103

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  FILTER  (cr=100738 pr=1 pw=0 time=2876573 us)
  50023   TABLE ACCESS FULL T (cr=692 pr=0 pw=0 time=600328 us)
      0   INDEX RANGE SCAN T_IDX (cr=100046 pr=1 pw=0 time=2047058 us)(object id 57321)



the higher step found lots of rows (like yours) but the subquery (where exists) was not satisfied for them....

      1          TABLE ACCESS BY INDEX ROWID T_STAKE_HOLDER_FUNCTION (cr=2 pr=0 
      1           INDEX UNIQUE SCAN SHR_SHR_U1 (cr=1 pr=0 pw=0 time=35 
 287279          TABLE ACCESS BY INDEX ROWID T_PARTY (cr=30065 pr=0 pw=0 
 315537           INDEX RANGE SCAN PTY_U1 (cr=902 pr=0 pw=0 time=2208839 
      1         TABLE ACCESS BY INDEX ROWID T_PARTY_FUNCTION (cr=574561 pr=0 
      1          INDEX UNIQUE SCAN PFY_PK (cr=574560 pr=0 pw=0 time=5362445 
      1        INDEX RANGE SCAN IDX_PVR_PTYID_ENDDATE (cr=3 pr=0 pw=0 time=47 


so, that means you did about 2 IO's 287,279 times
ops$tkyte%ORA10GR2> select 574560 / 287279 from dual;

574560/287279
-------------
   2.00000696
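That per-probe arithmetic generalizes: for a FILTER-style subquery, total consistent gets on the probed index are roughly probes times gets-per-probe. A sketch with an invented helper name:

```python
# When an EXISTS subquery is run as a FILTER (not unnested), it is
# executed once per candidate row; each unique index probe costs
# roughly the index height in consistent gets.
def expected_cr(probes, gets_per_probe):
    return probes * gets_per_probe

# 287,279 probes at ~2 gets each accounts for the cr=574,560
# reported against PFY_PK in the trace above.
print(expected_cr(287279, 2))  # 574558
```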



what is time in tkprof output

pedro, January 12, 2009 - 6:39 pm UTC

Hi Tom

I have this simple tkprof output

SELECT COUNT(*)
FROM
DBA_OBJECTS


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 15 0.00 0.00 0 0 0 0
Fetch 15 2.99 3.01 0 54375 0 15
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 31 3.00 3.02 0 54375 0 15


Rows Row Source Operation
------- ---------------------------------------------------
15 SORT AGGREGATE (cr=54375 pr=0 pw=0 time=3019767 us)
617730 VIEW DBA_OBJECTS (cr=54375 pr=0 pw=0 time=24403993 us)
617730 UNION-ALL (cr=54375 pr=0 pw=0 time=17979480 us)
617730 FILTER (cr=54360 pr=0 pw=0 time=6242357 us)
627135 HASH JOIN (cr=7605 pr=0 pw=0 time=16454251 us)
645 TABLE ACCESS FULL USER$ (cr=75 pr=0 pw=0 time=4720 us)
627135 TABLE ACCESS FULL OBJ$ (cr=7530 pr=0 pw=0 time=4172304 us)
16395 TABLE ACCESS BY INDEX ROWID IND$ (cr=46755 pr=0 pw=0 time=1312903 us)
23340 INDEX UNIQUE SCAN I_IND1 (cr=23370 pr=0 pw=0 time=532850 us)(object id 39)
0 NESTED LOOPS (cr=15 pr=0 pw=0 time=721 us)
0 INDEX FULL SCAN I_LINK1 (cr=15 pr=0 pw=0 time=389 us)(object id 107)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)


What is time in a line such as

VIEW DBA_OBJECTS (cr=54375 pr=0 pw=0 time=24403993 us)

How is that time calculated? Obviously it is wrong in this case.

Thanks
Tom Kyte
January 16, 2009 - 3:41 pm UTC

yes, that does look "funky"

I can reproduce in 10.2.0.4, but not in 9i or 11g - which release were you using?

very clear

Ajeet, January 13, 2009 - 1:13 am UTC

Hi Tom

Thanks very much. You always explain so well; I am not sure how you create such examples so quickly for every question asked. I learned a bit more about tkprof, as well as gaining a better understanding of the cr figures in tkprof.

Maybe you can answer Pedro's question as well on the timing stuff. I had tried asking the same a few times before; you gave some very useful info but left out the "how to understand the timing/cr at each step" part. Not that your answers were not useful - in fact you gave a broader picture - but I think I, and many others, would like to understand how to really figure out, just by looking at tkprof, which step is creating the most trouble... maybe a pointer to a document, if available, would help as well.

I know you don't like to answer this, as I have asked it before in other threads and in this thread as well, but just for the sake of curiosity :)

Thanks
Tom Kyte
January 16, 2009 - 3:42 pm UTC

Ajeet -

well, you would look for "big numbers" (sorry, it seemed obvious, not worth mentioning). You have timings; you can use them to see the relative amount of time the various components consume. You look for the big consumers of time.

what is time in tkprof output

Pedro, January 18, 2009 - 6:20 am UTC

He Tom

I am using 10.2.0.3

Thank you
Tom Kyte
January 19, 2009 - 9:07 pm UTC

I concur, in 10g, the rolled up number is incorrect in that case.

little bit more ...

Ajeet, January 19, 2009 - 1:37 am UTC

Hi Tom,

yes, I should look for big numbers in time.
When we say timestamp, does it mean the time=12345678 part of the row source operation? If this understanding is correct, then the only things which remain a mystery are:

1. how does time roll up from one step to the next?
2. what are the different things we should be aware of while looking at this value (the value of time, I mean)?

To make myself clearer:

as in response to my query about the cr value, you demonstrated why there is such a high cr value for a single-row index scan. What is the logic, what is the algorithm, so that we can avoid incorrect, false conclusions and do productive tuning? Why this is so important for us is that nowadays customers are packaging performance tuning as a service project, so it is important that we know all this, as essentially we have to rely on tkprof to some extent. Just by understanding it we should be able to give an honest try at tuning the query/program (the other parts cannot be ignored, such as understanding the system, the application, and the question, as you mentioned before). If tkprof is a diagnostic tool, we should be able to know what the problem is and where it is (in a definitive manner), to help us increase our productivity at work; and at the same time it is fun if we can read the tkprof and say: this is where the problem is...

THanks again for your time
Regards
Ajeet
Tom Kyte
January 19, 2009 - 9:13 pm UTC

1) it is supposed to just be math, add them up in the hierarchy.
2) not sure what you are looking for, really


why is so much time spent on a step? could be about one of a billion things. You would need to have an example and then logic coupled with knowledge could be applied to that to figure out "why" (cpu, io are the two likely culprits)
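The "look for big numbers" advice can be mechanized by subtracting child times from each cumulative time and ranking the remainders. A rough sketch over a hand-built plan tree (the figures and field layout below are invented for illustration, not taken from any trace in this thread):

```python
# Each step: (name, cumulative_time_us, indexes of direct children)
plan = [
    ("NESTED LOOPS",            5_000_000, [1, 2]),
    ("TABLE ACCESS FULL BIG_T", 4_200_000, []),
    ("INDEX RANGE SCAN BIG_I",    300_000, []),
]

def rank_by_self_time(plan):
    # Cumulative minus children = time spent in the step itself;
    # the biggest remainder is the first place to look.
    own = [(t - sum(plan[c][1] for c in kids), name)
           for name, t, kids in plan]
    return sorted(own, reverse=True)

print(rank_by_self_time(plan))
# [(4200000, 'TABLE ACCESS FULL BIG_T'), (500000, 'NESTED LOOPS'),
#  (300000, 'INDEX RANGE SCAN BIG_I')]
```

Here the full scan dominates even though the nested-loops line carries the largest cumulative figure.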

supporting example

Ajeet, January 19, 2009 - 2:53 am UTC

Hi Tom,


In order to explain the difficulties faced in understanding/interpreting the "time" in tkprof, please see the tkprof of a query (9.2.0.6):

the thing which is not understandable is that the intermediate steps

634 TABLE ACCESS BY INDEX ROWID OBJ#(688843) (cr=1895972 r=511794 w=0 time=3861513324 us)
1692886 INDEX SKIP SCAN OBJ#(1365539) (cr=13377 r=12995 w=0 time=240552985 us)(object id 1365539)

show a time greater than the total time, or the time taken up to the last step,

that is

Rows Row Source Operation
------- ---------------------------------------------------
543 NESTED LOOPS (cr=1913498 r=513308 w=0 time=3627268709 us)

Other than this example there are several such doubts. This tells me there is insufficient knowledge or understanding (on my part), rather than that the time calculated by Oracle is incorrect. There is a lot behind these calculations - can you please share at least the parts which are most important and of general interest?

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.58 0.58 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 194.91 4577.45 513308 1913498 0 543
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 195.49 4578.04 513308 1913498 0 543

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 65 (APPS)

Rows Row Source Operation
------- ---------------------------------------------------
543 NESTED LOOPS (cr=1913498 r=513308 w=0 time=3627268709 us)
543 NESTED LOOPS (cr=1912404 r=513292 w=0 time=3626761417 us)
540 NESTED LOOPS (cr=1910825 r=513225 w=0 time=3638919214 us)
540 NESTED LOOPS (cr=1909200 r=513136 w=0 time=3636942627 us)
540 NESTED LOOPS (cr=1907575 r=513045 w=0 time=3635380865 us)
540 NESTED LOOPS (cr=1905950 r=512958 w=0 time=3633768694 us)
540 NESTED LOOPS (cr=1904865 r=512925 w=0 time=3633247361 us)
540 NESTED LOOPS (cr=1903230 r=512815 w=0 time=3631170344 us)
540 NESTED LOOPS (cr=1901605 r=512727 w=0 time=3629696106 us)
634 NESTED LOOPS (cr=1899786 r=512549 w=0 time=3876051443 us)
634 NESTED LOOPS (cr=1897879 r=512462 w=0 time=3874176141 us)
634 TABLE ACCESS BY INDEX ROWID OBJ#(688843) (cr=1895972 r=511794 w=0 time=3861513324 us)
1692886 INDEX SKIP SCAN OBJ#(1365539) (cr=13377 r=12995 w=0 time=240552985 us)(object id 1365539)
634 TABLE ACCESS BY INDEX ROWID OBJ#(9945) (cr=1907 r=668 w=0 time=12658919 us)
634 INDEX UNIQUE SCAN OBJ#(331834) (cr=1273 r=256 w=0 time=4676461 us)(object id 331834)
634 TABLE ACCESS BY INDEX ROWID OBJ#(688602) (cr=1907 r=87 w=0 time=1870832 us)
634 INDEX UNIQUE SCAN OBJ#(688729) (cr=1273 r=42 w=0 time=717426 us)(object id 688729)
540 TABLE ACCESS BY INDEX ROWID OBJ#(688523) (cr=1819 r=178 w=0 time=3226195 us)
541 INDEX RANGE SCAN OBJ#(688572) (cr=1278 r=72 w=0 time=1668253 us)(object id 688572)
540 TABLE ACCESS BY INDEX ROWID OBJ#(687132) (cr=1625 r=88 w=0 time=1470936 us)
540 INDEX UNIQUE SCAN OBJ#(1939450) (cr=1085 r=48 w=0 time=836943 us)(object id 1939450)
540 TABLE ACCESS BY INDEX ROWID OBJ#(688727) (cr=1635 r=110 w=0 time=2072987 us)
540 INDEX RANGE SCAN OBJ#(688773) (cr=1095 r=63 w=0 time=1129173 us)(object id 688773)
540 INDEX UNIQUE SCAN OBJ#(688885) (cr=1085 r=33 w=0 time=518850 us)(object id 688885)
540 TABLE ACCESS BY INDEX ROWID OBJ#(689048) (cr=1625 r=87 w=0 time=1609007 us)
540 INDEX UNIQUE SCAN OBJ#(1939181) (cr=1085 r=47 w=0 time=741401 us)(object id 1939181)
540 TABLE ACCESS BY INDEX ROWID OBJ#(1137904) (cr=1625 r=91 w=0 time=1558579 us)
540 INDEX UNIQUE SCAN OBJ#(1939177) (cr=1085 r=51 w=0 time=947944 us)(object id 1939177)
540 TABLE ACCESS BY INDEX ROWID OBJ#(688268) (cr=1625 r=89 w=0 time=1973210 us)
540 INDEX UNIQUE SCAN OBJ#(1939182) (cr=1085 r=49 w=0 time=1011468 us)(object id 1939182)
543 TABLE ACCESS BY INDEX ROWID OBJ#(5404468) (cr=1579 r=67 w=0 time=1290575 us)
543 INDEX RANGE SCAN OBJ#(5404788) (cr=1091 r=38 w=0 time=697238 us)(object id 5404788)
543 INDEX RANGE SCAN OBJ#(5404776) (cr=1094 r=16 w=0 time=504853 us)(object id 5404776)

Thanks again

Execution plan / Row Source Operation

Chen, March 25, 2009 - 6:30 pm UTC

Tom,
I ran tkprof on the query and found that the plans shown under Row Source Operation and Execution Plan are not consistent.
Why is that?

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 50 0.02 0.02 5 4383 0 731
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 52 0.02 0.02 5 4383 0 731

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 34 (PRAXIS3)

Rows Row Source Operation
------- ---------------------------------------------------
731 SORT ORDER BY (cr=4383 pr=5 pw=0 time=25737 us)
731 NESTED LOOPS (cr=4383 pr=5 pw=0 time=24150 us)
731 HASH JOIN (cr=2919 pr=5 pw=0 time=20479 us)
731 HASH JOIN (cr=2914 pr=5 pw=0 time=17408 us)
731 NESTED LOOPS (cr=2898 pr=5 pw=0 time=337042 us)
752 NESTED LOOPS (cr=1392 pr=5 pw=0 time=55660 us)
1 TABLE ACCESS BY INDEX ROWID HCPEOPLE (cr=2 pr=0 pw=0 time=49 us)
1 INDEX UNIQUE SCAN HC32_PRIMARYKEY (cr=1 pr=0 pw=0 time=18 us)(object id 12820)
752 TABLE ACCESS BY INDEX ROWID RELMESSUSERS (cr=1390 pr=5 pw=0 time=54858 us)
10319 INDEX RANGE SCAN RMU_IDU_IDX (cr=7 pr=5 pw=0 time=540 us)(object id 17775)
731 TABLE ACCESS BY INDEX ROWID MESSAGES (cr=1506 pr=0 pw=0 time=4816 us)
752 INDEX UNIQUE SCAN ME48_PRIMARYKEY (cr=754 pr=0 pw=0 time=1961 us)(object id 12934)
210 TABLE ACCESS FULL HCPEOPLE (cr=16 pr=0 pw=0 time=247 us)
1072 INDEX FAST FULL SCAN PRXPREVCGRP_IDX1 (cr=5 pr=0 pw=0 time=27 us)(object id 18629)
731 TABLE ACCESS BY INDEX ROWID PATIENTSPRAXIS (cr=1464 pr=0 pw=0 time=2996 us)
731 INDEX UNIQUE SCAN PP78_PRIMARYKEY (cr=733 pr=0 pw=0 time=1307 us)(object id 13149)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
731 FILTER
731 SORT (ORDER BY)
731 HASH JOIN
731 HASH JOIN
731 TABLE ACCESS MODE: ANALYZED (FULL) OF 'HCPEOPLE'
(TABLE)
752 HASH JOIN
1 NESTED LOOPS
1 NESTED LOOPS
752 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'HCPEOPLE' (TABLE)
10319 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'HC32_PRIMARYKEY' (INDEX (UNIQUE))
731 INDEX MODE: ANALYZED (RANGE SCAN) OF
'RELMESSUSERS_IDX' (INDEX)
752 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MESSAGES' (TABLE)
210 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'ME48_PRIMARYKEY' (INDEX (UNIQUE))
1072 INDEX MODE: ANALYZED (FAST FULL SCAN) OF
'PRXPREVCGRP_IDX1' (INDEX)
731 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PATIENTSPRAXIS'
(TABLE)


AA, April 24, 2009 - 9:36 am UTC

Hi Tom,
I have run tkprof on a complex query that takes 20-30 minutes to run on 9.2.0.6. The totals are:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 500.00 9988.02 0 0 0 0
Execute 5 900.00 12338.62 0 0 1 3
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 1400.00 22326.64 0 0 1 3

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2404 8600.00 12502.28 0 0 0 0
Execute 2405 1126300.00 8302202.00 330457 5803003 33783 139204
Fetch 3179 900.00 2473.10 19 5588 0 2405
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7988 1135800.00 8317177.38 330476 5808591 33783 141609

I don't know how to interpret the time values as I thought they were in seconds and related to the wall clock time.

Tom Kyte
April 27, 2009 - 12:21 pm UTC

looks like you are using 8i's tkprof to read a 9i trace file - could that be true? If so, everything is multiplied by 100, as the resolution of the timing recorded in the trace file changed between those releases (centiseconds in 8i, microseconds in 9i).
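In other words, every time figure in that report is inflated by the same constant factor. A quick sanity check (an illustration only, assuming the 100x mismatch between centisecond-based 8i tkprof and microsecond-based 9i trace files described above):

```python
def rescale_8i_tkprof(reported_seconds):
    """Undo the 100x inflation: 9i+ trace files record timings in
    microseconds, but 8i's tkprof interprets the raw values as
    centiseconds, so every 'second' it prints is 100x too large."""
    return reported_seconds / 100.0

# The 1400.00 CPU 'seconds' in the non-recursive totals above
# would really be 14 CPU seconds.
print(rescale_8i_tkprof(1400.00))
```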

A reader, May 28, 2009 - 3:33 pm UTC

Hi Tom,

Here is the tkprof output for one of my complex queries, which takes more than 2 hrs to run on 10.2.0.3:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.12 7752.51 0 26 2799 5102366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.13 7752.52 0 26 2799 5102366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 34 (RU)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
0 HASH (GROUP BY)
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'toltp' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'rbfol'
(TABLE)
0 PARTITION RANGE (ALL) PARTITION: START=1 STOP=3
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'rblasp' (TABLE) PARTITION: START=1 STOP=3


Can you please explain why the elapsed time is almost 2 hrs, and why there is nothing under the Fetch row? Is there any way to improve this number?


Tom Kyte
May 28, 2009 - 5:19 pm UTC

well, it is an insert, you don't fetch from them.


You have an explain plan there - probably not the plan that was used. I see a serial plan, but I'll bet it was parallel (that has to be true - there is no other way I can think of that this query would use only 0.13 CPU seconds).

So, that explain plan - ignore it, get the real information from v$sql* tables or make sure the statement is closed BEFORE you end tracing so you get a row source operation.



A reader, May 28, 2009 - 7:23 pm UTC

HI Tom,

What do you mean by the statement being closed before ending the trace?
I am doing:

1. enable trace
2. insert
3. commit
4. end trace

Is this correct? Please confirm, or let me know if any changes are required in these steps.

Second, how do I get the plan from the V$SQL* tables?




Tom Kyte
May 29, 2009 - 8:12 am UTC

the cursor performing the insert was not closed - perhaps it is in a stored procedure whereby we do not really close cursors.

enable trace
run code
EXIT (logout)

then tkprof

read about dbms_xplan

A reader, May 29, 2009 - 1:32 pm UTC

Hi Tom,

Now new tkprof report is:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 9928.82 9778.20 571238 408298 365513 5102366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 9928.83 9778.21 571238 408298 365513 5102366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 34 (RU)

Rows Row Source Operation
------- ---------------------------------------------------
5102366 HASH GROUP BY (cr=328853 pr=571236 pw=243495 time=555867252 us)
2496905034 HASH JOIN (cr=328853 pr=366171 pw=38430 time=5001040869 us)
534920 HASH JOIN (cr=15456 pr=14405 pw=0 time=2178864 us)
27536 TABLE ACCESS FULL toltp (cr=248 pr=0 pw=0 time=55229 us)
536066 TABLE ACCESS FULL rbfol (cr=15208 pr=14405 pw=0 time=1072247 us)
44630411 PARTITION RANGE ALL PARTITION: 1 3 (cr=313397 pr=313336 pw=0 time=223152407 us)
44630411 TABLE ACCESS FULL rblasp PARTITION: 1 3 (cr=313397 pr=313336 pw=0 time=89261143 us)


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
5102366 HASH (GROUP BY)
2496905034 HASH JOIN
534920 HASH JOIN
27536 TABLE ACCESS MODE: ANALYZED (FULL) OF
'toltp' (TABLE)
536066 TABLE ACCESS MODE: ANALYZED (FULL) OF 'rbfol'
(TABLE)
44630411 PARTITION RANGE (ALL) PARTITION: START=1 STOP=3
44630411 TABLE ACCESS MODE: ANALYZED (FULL) OF
'rblasp' (TABLE) PARTITION: START=1 STOP=3


1. It shows 2.75 hrs of CPU processing, and that's why almost the same amount of time was spent waiting. Am I correct?
2. What do you make of this report?
3. Is there any way to improve this?

Second, the following is the plan generated through the dbms_xplan utility.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2538688044

-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 32M| 6805M| | 1744K (1)| 05:48:54 | | |
| 1 | HASH GROUP BY | | 32M| 6805M| 14G| 1744K (1)| 05:48:54 | | |
|* 2 | HASH JOIN | | 32M| 6805M| 5056K| 188K (2)| 00:37:44 | | |
|* 3 | HASH JOIN | | 27536 | 4732K| | 3168 (3)| 00:00:39 | | |
| 4 | TABLE ACCESS FULL | toltp | 27536 | 1505K| | 50 (2)| 00:00:01 | | |
| 5 | TABLE ACCESS FULL | rbfol | 536K| 61M| | 3110 (2)| 00:00:38 | | |
| 6 | PARTITION RANGE ALL| | 44M| 1925M| | 63020 (3)| 00:12:37 | 1 | 3 |
| 7 | TABLE ACCESS FULL | rblasp | 44M| 1925M| | 63020 (3)| 00:12:37 | 1 | 3 |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("P_MONTH"=LEAST(GREATEST(TRUNC(INTERNAL_FUNCTION("SFOLT"."S_DATE"),'fmmm'),TO_DATE('01-M
AR-09')),TO_DATE('01-MAY-09')) AND "SFOLT"."P_ID"="LPTP"."P_ID" AND
"SFOLT"."DOW"="LPTP"."DOW")
3 - access("TP"."O_ID"="SFOLT"."O_ID" AND "TP"."OL_ID"="SFOLT"."OL_ID" AND
"TP"."FS_DATE"="SFOLT"."FS_DATE" AND "TP"."FE_DATE"="SFOLT"."FE_DATE")

Here it shows 5:48 hrs, while it actually takes around 2:45 hrs.

4. Why the big difference?

Thanks for your help


Tom Kyte
June 01, 2009 - 7:20 pm UTC

sorry, but this is obviously a tkprof of an *entirely different query plan*


new versus old:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Execute 1 9928.82 9778.20 571238 408298 365513 5102366
Execute 1 0.12 7752.51 0 26 2799 5102366


how can we compare them ????



...1. It shows 2.75 hr for processing CPU and thats why almost same amount of time
for waiting. am i correct?
....

no, when cpu = elapsed, that means you were not waiting for ANYTHING. When cpu < elapsed, you were waiting for something.


you show us no query, we cannot comment on a queryless thing - we cannot tell if there is a better way to do that query without actually *seeing* the query.
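The cpu-versus-elapsed rule is just arithmetic; a small sketch (an illustrative helper, with the column values taken from the two execute lines compared above):

```python
def wait_seconds(cpu, elapsed):
    """Time not spent on CPU was spent waiting (I/O, locks, ...).
    cpu == elapsed means effectively no waiting at all."""
    return max(elapsed - cpu, 0.0)

# Old run: 0.13s CPU out of 7752.51s elapsed -- almost entirely waiting
# (consistent with a parallel plan doing the real work elsewhere).
print(wait_seconds(0.13, 7752.51))
# New run: 9928.82s CPU vs 9778.20s elapsed -- no waiting at all
# (CPU can slightly exceed elapsed due to measurement granularity).
print(wait_seconds(9928.82, 9778.20))
```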

A reader, June 01, 2009 - 5:01 pm UTC

Hi tom,

Pl. let me know if u need any other information. i am waiting yr valuable answers.

Thanks
Tom Kyte
June 01, 2009 - 8:24 pm UTC

"pl", "u" and "yr"

Just like my 13 year old daughter.

When she does it, it is cute.

But one day, she'll be an adult and use proper words....


A reader, June 02, 2009 - 5:49 pm UTC

Hi Tom,

I apologize for using such words; I should not have used them.

The first query was run in parallel mode while the second was single-threaded. The query follows. The rbfol table contains order-wise details (order id, product, and its amount). We have to distribute this amount across subproducts based on the proportion ratios in rblasp. The relation between these two tables is one-to-many (RBFOL-RBLASP).

insert into fas
SELECT
trunc(order_date, 'mm')
,as_id
,ac_id
,it_id
,dt_id
,sum(sc_amt * (CASE WHEN total_pro = 0 THEN 1/total_as ELSE prop/total_pro END)) sc_amt
,sum(price * (CASE WHEN total_pro = 0 THEN 1/total_as ELSE prop/total_pro END)) price
FROM
toltp tp
,rbfol sfolt
,rblasp lptp
WHERE
tp.o_id = sfolt.o_id
AND tp.ol_id = sfolt.ol_id
AND tp.s_date = sfolt.s_date
AND tp.e_date = sfolt.e_date
and p_month = least(greatest(trunc(sfolt.order_date, 'mm'), to_date('01-mar-09')), to_date('01-may-09') )
AND sfolt.p_id = lptp.p_id
AND sfolt.dow = lptp.dow
group by trunc(order_date, 'mm')
,as_id
,ac_id
,it_id
,dt_id



Tom Kyte
June 02, 2009 - 6:57 pm UTC

explain this

and p_month = least(greatest(trunc(sfolt.order_date, 'mm'),
to_date('01-mar-09')), to_date('01-may-09') )


p_month is a plsql variable?

why least of greatest?

but, give us the logic/reasoning here.

how many rows out of how many rows will that predicate return?



that would be the only thing to look at - everything else is pretty simple (lots of data, lots of work, takes some time).

In order to understand if parallel can help, we'd need to know if this data is spread out over lots of disk (or not), if you have lots of cpu (or not) if you have tons of memory (or not). But basically the plan I see looks 100% sound so far.

A reader, June 02, 2009 - 8:33 pm UTC

Thanks for quick response

P_month is one of the columns of table rblasp. This table contains month-wise, product-wise, subproduct-wise, day-of-week-wise distribution ratios.

The sample data looks like

Month, productid, subproductid, dayofweek,ratio
01-Mar-09, Prod1, SubProd1, Sun, 0.04
01-Mar-09, Prod1, SubProd1, Mon, 0.02
01-Mar-09, Prod1, SubProd1, Tue, 0.01
01-Mar-09, Prod1, SubProd1, Wed, 0.03
01-Mar-09, Prod1, SubProd1, Thu, 0.04
01-Mar-09, Prod1, SubProd1, Fri, 0.05
01-Mar-09, Prod1, SubProd1, Sat, 0.06

01-Mar-09, Prod1, SubProd2, Sun, 0.04
01-Mar-09, Prod1, SubProd2, Mon, 0.04
01-Mar-09, Prod1, SubProd2, Tue, 0.04
01-Mar-09, Prod1, SubProd2, Wed, 0.04
01-Mar-09, Prod1, SubProd2, Thu, 0.04
01-Mar-09, Prod1, SubProd2, Fri, 0.04
01-Mar-09, Prod1, SubProd2, Sat, 0.04
..

01-Apr-09, Prod1, SubProd2, Sun, 0.04
01-Apr-09, Prod1, SubProd2, Mon, 0.04
01-Apr-09, Prod1, SubProd2, Tue, 0.04
01-Apr-09, Prod1, SubProd2, Wed, 0.04
01-Apr-09, Prod1, SubProd2, Thu, 0.04
01-Apr-09, Prod1, SubProd2, Fri, 0.04
01-Apr-09, Prod1, SubProd2, Sat, 0.04
...
01-May-09, Prod1, SubProd2, Sun, 0.04
01-May-09, Prod1, SubProd2, Mon, 0.04
01-May-09, Prod1, SubProd2, Tue, 0.04
01-May-09, Prod1, SubProd2, Wed, 0.04
01-May-09, Prod1, SubProd2, Thu, 0.04
01-May-09, Prod1, SubProd2, Fri, 0.04
01-May-09, Prod1, SubProd2, Sat, 0.04
...



Now for all the orders whose order_date <= '01-MAR-09' we want to apply the 01-mar-09 data, for all the orders whose order date is between '01-apr-09' and '30-apr-09' it should apply the 01-APR-09 data, and for all the orders whose order date >= '01-may-09' it should apply the 01-MAY-09 data. This 3-month window is not fixed, i.e. next month we will have new data for Jun-09 but the Mar-09 data will go away. Because of this logic I use least/greatest.
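That LEAST(GREATEST(...)) clamp can be sketched in a few lines (a hypothetical illustration of the predicate's logic, not the production code; the window boundaries are the ones given above):

```python
from datetime import date

def clamp_order_month(order_month, lo=date(2009, 3, 1), hi=date(2009, 5, 1)):
    """Mirror of LEAST(GREATEST(TRUNC(order_date,'mm'), lo), hi):
    months before the window map to its first month, months after
    it map to its last month, months inside pass through unchanged."""
    return min(max(order_month, lo), hi)

print(clamp_order_month(date(2009, 1, 1)))  # before the window -> 2009-03-01
print(clamp_order_month(date(2009, 4, 1)))  # inside the window -> 2009-04-01
print(clamp_order_month(date(2009, 8, 1)))  # after the window  -> 2009-05-01
```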



Tom Kyte
June 03, 2009 - 11:53 am UTC

we'd need create table statements
we'd want to have EVERY column reference in the query qualified so we can easily see where they come from

the only idea I see might be to try to group by sooner (group by something first and then join to that, instead of grouping 2.5 billion rows down to 5 million rows)


Doubt about fetch shown in the output

aman, June 02, 2009 - 11:32 pm UTC

Hi sir,

This is a small doubt about the fetch count shown as +1 in the formatted output. I am posting here the steps that I did. I am on 10.2.0.1, RHEL4.
-bash-3.00$ tkprof orcl_ora_911_SYSTEM.trc result sys=no

TKPROF: Release 10.2.0.1.0 - Production on Wed Jun 3 08:34:43 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


-bash-3.00$ more result.prf

TKPROF: Release 10.2.0.1.0 - Production on Wed Jun 3 08:34:43 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Trace file: orcl_ora_911_SYSTEM.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
select *
from
dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          3          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          0          6          0           1
Misses in library cache during parse: 1

My doubt is that I always get Fetch with an additional 1 added to it, as seen here too. The rows returned are 1 but the fetch count is 2. Why is that so?

Thanks and regards

Tom Kyte
June 03, 2009 - 12:02 pm UTC

see below - I believe that reader hit it dead on.

in sqlplus - you have to live with how they decided to code it - they always fetch a single row AND THEN the second fetch they do goes into array mode.

I ran the following three queries in plus with the default arraysize of 15:

with data as (select * from dual)
select * from data one_row

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Fetch        2      0.00       0.00          0          3          0           1
********************************************************************************
with data as (select * from dual connect by level <= 15)
select * from data fifteen_rows

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Fetch        2      0.00       0.00          0          3          0          15
********************************************************************************
with data as (select * from dual connect by level <= 16)
select * from data fifteen_rows

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Fetch        3      0.00       0.00          0          3          0          16



the first one fetched twice because:

a) it fetched a single row, got a row, printed it - it did not know if there were more rows yet
b) it tried to array fetch the next 15 rows (second fetch), and discovered there were no more.

the second one fetched twice because:

a) it fetched a single row, got a row, printed it - it did not know if there were more rows yet
b) it tried to array fetch the next 15 rows - it discovered that there were only 14 more (the 15th array fetched row raised no data) and printed the 14 and finished

the third one fetched three times because:

a) it fetched a single row, got a row, printed it - it did not know if there were more rows yet
b) it tried to fetch 15 more and got them
c) it tried again to get 15 more and discovered "no data" and finished
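All three cases follow a single rule: SQL*Plus issues one single-row fetch, then array fetches of `arraysize` rows, and stops only when a fetch returns fewer rows than requested. A small model of that behavior (an illustration, not SQL*Plus source code):

```python
def sqlplus_fetch_count(total_rows, arraysize=15):
    """Count the fetch calls SQL*Plus makes: one initial single-row
    fetch, then array fetches; a fetch returning fewer rows than
    requested signals end-of-data and stops the loop."""
    fetches = 1                       # the initial single-row fetch
    remaining = total_rows - 1
    while True:
        fetches += 1                  # one array fetch
        got = min(remaining, arraysize)
        remaining -= got
        if got < arraysize:           # short fetch => no more data
            return fetches

print(sqlplus_fetch_count(1))    # 2, as in the first query above
print(sqlplus_fetch_count(15))   # 2, as in the second
print(sqlplus_fetch_count(16))   # 3, as in the third
```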

For aman

Narendra, June 03, 2009 - 3:50 am UTC

Aman,

This could be due to the way SQL*Plus works. Did you execute this statement from SQL*Plus directly? In other words, did you do something like below?
SQL> alter session set sql_trace = true;
SQL> select * from dual
SQL> exit

What happens when you try following?
SQL> alter session set sql_trace = true;
SQL> declare
     v_dummy varchar2(2);
     begin
       select dummy into v_dummy from dual;
     end;
SQL>/
SQL> exit


Pratik, June 08, 2009 - 3:39 am UTC

Hello Tom,

I am very confused about interpreting tkprof output.

please correct me if I am wrong....

In tkprof output:

disk = physical I/O
query + current = logical I/O

physical I/O + logical I/O = total I/O performed

But somewhere I read that memory (RAM) reads = query - disk,
which means disk I/O is included in the query I/O,
and in that case total I/O should equal query + current.
That is where my confusion started.

My question is: is disk I/O part of query?
Which is true:
(1) physical I/O + logical I/O = total I/O performed
or
(2) query I/O + current I/O = total I/O performed

Please help me, to solve out this.

(Here, the case is normal query execution and not the case of Direct Read or
parallel query operation.)

Thanks
Pratik


Tom Kyte
June 08, 2009 - 1:36 pm UTC

A logical IO (query mode gets plus current mode gets) is an IO from the buffer cache.

An IO against the buffer cache MAY or MAY NOT require a physical IO. If the block is there in the cache - no physical IO's will be performed. If the block is NOT there, we will do a physical IO to put it into the buffer cache and then a logical IO to read it out.

Hence, in general, physical IO's are a subset of logical IO's, a logical IO might need to invoke a physical IO to get the block into the cache.


In general - no reads from temp, no direct reads - I'd still say neither (1) nor (2) are a correct way to phrase things.


It would be correct to say "query+current is the number of reads from the buffer cache. physical IO is the number of times we had to read from disk to get a block into the cache so we could read it from the cache"
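Put numerically (an illustrative helper; the sample figures are the fetch-line statistics from one of the reports earlier on this page):

```python
def io_summary(disk, query, current):
    """query + current = all buffer-cache (logical) gets;
    disk = the subset that first needed a physical read to
    stage the block into the cache."""
    logical = query + current
    miss_ratio = disk / logical if logical else 0.0
    return logical, miss_ratio

logical, miss = io_summary(disk=5, query=4383, current=0)
print(logical)          # 4383 logical IOs in total
print(round(miss, 4))   # only about 0.1% of them also hit the disk
```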


Thanks Tom sir and Narendra

aman, June 14, 2009 - 11:42 pm UTC

I got it :) , thanks a bunch!

regards
Aman....

disk statistic

Car Elcaro, September 07, 2009 - 9:14 am UTC

I know that the disk column of the TKPROF report is the number of blocks read via physical reads.

When Oracle does a multiblock read, will this statistic be incremented by the number of blocks read through the multiblock read, or just by one?
Thanks Tom.
Tom Kyte
September 07, 2009 - 11:11 am UTC

it is the number of blocks read - number of blocks, not number of IO requests

and remember a multiblock read count of 16 does NOT mean "16 blocks will be read", it means as many as 16 MIGHT be read

if the buffer cache has blocks 1, 5, 7, 14 in it and you do a multiblock read of blocks 1-16 then we will:

a) get block 1 from cache
b) read blocks 2,3,4 into cache and read them from cache (3 block read)
c) read block 5 from cache
d) read block 6 into cache and read it from cache (1 block read)
e) read block 7 from cache
f) read blocks 8,9,10,11,12,13 into cache and read them from cache (6 block read)
g) read block 14 from cache
h) read blocks 15 and 16 into cache and read them from cache (2 block read)

so, a multiblock read is almost always a bunch of smaller reads as the cache typically has some of the blocks already.
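That walkthrough can be simulated directly. The sketch below (an illustration of the bookkeeping, not Oracle internals) splits one requested 16-block read around the cached blocks and reproduces the 3-, 1-, 6- and 2-block physical reads listed above:

```python
def split_multiblock_read(start, end, cached):
    """Return the contiguous runs of blocks that must actually be
    read from disk, skipping blocks already in the buffer cache."""
    reads, run = [], []
    for blk in range(start, end + 1):
        if blk in cached:
            if run:                  # a cached block ends the current run
                reads.append(run)
                run = []
        else:
            run.append(blk)
    if run:
        reads.append(run)
    return reads

reads = split_multiblock_read(1, 16, {1, 5, 7, 14})
print([len(r) for r in reads])   # [3, 1, 6, 2] -- 12 blocks in 4 physical reads
```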

Help Reading TKPROF

A reader, June 19, 2012 - 1:28 pm UTC

I have a select statement that takes 4 secs to return 7500 rows. But if I do an INSERT /*+APPEND*/ or a CREATE TABLE AS SELECT it takes around 8-10 mins.
Can you please review the attached TKPROF output and help me figure out where the waits are?
INSERT /*+ APPEND*/ INTO aes_ppo_stg_vendor_n_temp
      (vendor_id, vendor_nm, vendor_type_cd,
       street_address, city, county, state,
       country, zip_code, start_dt, end_dt,
       manager, company, business_unit, region,
       district, vendor_group, vendor_subgroup)
      (SELECT sup.vendor_id, sup.vendor_nm, NULL vendor_type_cd,
              adr.street_address, adr.city, NULL county, adr.state,
              adr.country, adr.zip_code, NULL start_dt, NULL end_dt,
              NULL manager, NULL company, NULL business_unit, NULL region,
              NULL district, NULL vendor_group, NULL vendor_subgroup
       FROM (SELECT a.key_value_1 supplier,
                    a.add_1||DECODE(a.add_2, NULL, NULL, :"SYS_B_0"||a.add_2)||DECODE(a.add_3, NULL, NULL, :"SYS_B_1"||a.add_3) street_address,
                    a.city, a.state, a.country_id country, a.post zip_code
             FROM addr a
             WHERE a.module = :"SYS_B_2"
              AND  a.addr_type = :"SYS_B_3") adr,
            (SELECT DISTINCT sm.supplier vendor_id, sm.sup_name vendor_nm
             FROM sups sm, aes_master_supp_det amsd, sups s
             WHERE sm.supplier = NVL(amsd.master_supplier, s.supplier)
              AND  amsd.supplier(+) = s.supplier
              AND  EXISTS(SELECT :"SYS_B_4"
                          FROM aes_dlp_iscl adi
                          WHERE adi.supplier = s.supplier
                           AND  ROWNUM = :"SYS_B_5")) sup
       WHERE adr.supplier(+) = sup.vendor_id)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1    477.27     478.48          2    1695930         50        7556
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    477.28     478.49          2    1695930         50        7556

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 488  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD AS SELECT  (cr=1695798 pr=0 pw=47 time=478407048 us)
   7556   VIEW  VM_NWVW_0 (cr=1695798 pr=0 pw=0 time=478366030 us cost=126 size=119 card=1)
   7556    HASH UNIQUE (cr=1695798 pr=0 pw=0 time=478357588 us cost=126 size=139 card=1)
   7612     NESTED LOOPS OUTER (cr=1695798 pr=0 pw=0 time=469282168 us cost=91 size=174862 card=1258)
   7611      NESTED LOOPS  (cr=986660 pr=0 pw=0 time=4252313 us cost=15 size=56610 card=1258)
   7611       HASH JOIN RIGHT OUTER (cr=978613 pr=0 pw=0 time=4107822 us cost=3 size=22608 card=1256)
     70        INDEX FULL SCAN PK_MASTER_SUPP_DET (cr=1 pr=0 pw=0 time=98 us cost=1 size=840 card=70)(object id 446118)
   7576        INDEX FULL SCAN PK_SUPS (cr=978612 pr=0 pw=0 time=13109354 us cost=1 size=7536 card=1256)(object id 26621)
   7576         COUNT STOPKEY (cr=978581 pr=0 pw=0 time=5315413 us)
   7576          PARTITION HASH ALL PARTITION: 1 16 (cr=978581 pr=0 pw=0 time=5235669 us cost=1 size=66366 card=11061)
   7576           INDEX RANGE SCAN AES_DLP_ISCL_I1 PARTITION: 1 16 (cr=978581 pr=0 pw=0 time=4751794 us cost=1 size=66366 card=11061)(object id 2467472)
   7611       TABLE ACCESS BY INDEX ROWID SUPS (cr=8047 pr=0 pw=0 time=118937 us cost=1 size=27 card=1)
   7611        INDEX UNIQUE SCAN PK_SUPS (cr=436 pr=0 pw=0 time=45916 us cost=1 size=0 card=1)(object id 26621)
   7588      TABLE ACCESS BY INDEX ROWID ADDR (cr=709138 pr=0 pw=0 time=472602919 us cost=1 size=94 card=1)
   7588       INDEX SKIP SCAN ADDR_I4 (cr=704770 pr=0 pw=0 time=472487804 us cost=1 size=0 card=1)(object id 446129)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        2        0.00          0.00
  direct path sync                                1        0.00          0.00
  direct path write temp                          1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

select decode(u.type#, 2, u.ext_username, u.name), o.name,        t.update$, 
  t.insert$, t.delete$, t.enabled,        decode(bitand(t.property, 8192),
  8192, 1, 0),        decode(bitand(t.property, 65536), 65536, 1, 0),       
  decode(bitand(t.property, 131072), 131072, 1, 0),       (select o.name 
from
 obj$ o          where o.obj# = u.spare2 and o.type# =57)  from sys.obj$ o, 
  sys.user$ u, sys.trigger$ t, sys.obj$ bo where t.baseobject=bo.obj# and 
  bo.name = :1 and bo.spare3 = :2  and bo.namespace = 1  and t.obj#=o.obj# 
  and o.owner#=u.user#  and o.type# = 12 and bitand(property,16)=0 and 
  bitand(property,8)=0  order by o.obj#


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.01          0          3          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us cost=1 size=34 card=1)
      0   INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 2210060)
      0  SORT ORDER BY (cr=3 pr=0 pw=0 time=162 us cost=5 size=202 card=1)
      0   NESTED LOOPS  (cr=3 pr=0 pw=0 time=133 us cost=4 size=202 card=1)
      0    NESTED LOOPS  (cr=3 pr=0 pw=0 time=129 us cost=3 size=101 card=1)
      0     NESTED LOOPS  (cr=3 pr=0 pw=0 time=126 us cost=2 size=64 card=1)
      1      INDEX RANGE SCAN I_OBJ5 (cr=2 pr=0 pw=0 time=70 us cost=1 size=36 card=1)(object id 2210064)
      0      TABLE ACCESS BY INDEX ROWID TRIGGER$ (cr=1 pr=0 pw=0 time=53 us cost=1 size=28 card=1)
      0       INDEX RANGE SCAN I_TRIGGER1 (cr=1 pr=0 pw=0 time=35 us cost=1 size=0 card=1)(object id 130)
      0     TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us cost=1 size=37 card=1)
      0      INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 2210060)
      0    TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us cost=1 size=101 card=1)
      0     INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 11)

********************************************************************************

select condition 
from
 cdef$ where rowid=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID CDEF$ (cr=1 pr=0 pw=0 time=26 us cost=1 size=15 card=1)

********************************************************************************

select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, 
  sample_size, minimum, maximum, distcnt, lowval, hival, density, col#, 
  spare1, spare2, avgcln 
from
 hist_head$ where obj#=:1 and intcol#=:2


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     23      0.01       0.00          0          0          0           0
Fetch       23      0.00       0.00          0         49          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       47      0.01       0.00          0         49          0           3

Misses in library cache during parse: 0
Optimizer mode: RULE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=3 pr=0 pw=0 time=53 us)
      1   INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=40 us)(object id 221)

********************************************************************************

select /*+ rule */ bucket, endpoint, col#, epvalue 
from
 histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by bucket


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     26      0.00       0.00          0          0          0           0
Fetch       26      0.02       0.03          2         78          0         504
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       53      0.02       0.04          2         78          0         504

Misses in library cache during parse: 0
Optimizer mode: RULE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
     20  SORT ORDER BY (cr=3 pr=1 pw=0 time=12685 us cost=0 size=0 card=0)
     20   TABLE ACCESS CLUSTER HISTGRM$ (cr=3 pr=1 pw=0 time=12488 us)
      1    INDEX UNIQUE SCAN I_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=20 us)(object id 216)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         2        0.01          0.02



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1    477.27     478.48          2    1695930         50        7556
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    477.28     478.49          2    1695930         50        7556

Misses in library cache during parse: 1
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.01
  Disk file operations I/O                        2        0.00          0.00
  direct path sync                                1        0.00          0.00
  direct path write temp                          1        0.00          0.00
  log file sync                                   1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        4      0.00       0.00          0          0          0           0
Execute     51      0.02       0.02          0          0          0           0
Fetch       51      0.02       0.03          2        132          0         508
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      106      0.04       0.06          2        132          0         508

Misses in library cache during parse: 1
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         2        0.01          0.02

    1  user  SQL statements in session.
    4  internal SQL statements in session.
    5  SQL statements in session.
********************************************************************************
Trace file: /in/RMD2/oracle/diag/rdbms/rmd2/RMD2/trace/RMD2_ora_626732.trc
Trace file compatibility: 10.01.00
Sort options: default

       1  session in tracefile.
       1  user  SQL statements in trace file.
       4  internal SQL statements in trace file.
       5  SQL statements in trace file.
       5  unique SQL statements in trace file.
     993  lines in trace file.
 9223509  elapsed seconds in trace file.

Tom Kyte
June 19, 2012 - 3:04 pm UTC

you don't ever need to post all of the recursive sql like that; it just makes the post big and unreadable.

what you want to do is start by comparing the plan of the select by itself versus the insert as select: are they the same or different?

that is the very first place to start.

query up v$sql_plan using dbms_xplan to see what plans were used.
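
A minimal sketch of that comparison (assuming 10g or later, where dbms_xplan.display_cursor is available; the sql_text filter and sql_id are placeholders you would substitute with your own):

```sql
-- find the cursors for the two statements (filter is a placeholder)
select sql_id, child_number, sql_text
  from v$sql
 where sql_text like 'insert into your_table select%';

-- show the plan that was actually used for a given cursor
select * from table(dbms_xplan.display_cursor('&sql_id', null));
```

Run it once for the standalone select and once for the insert-as-select, and diff the two plans.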

TKPROF Output

A reader, June 19, 2012 - 3:17 pm UTC

Thanks for the reply.

The INSERT was using a NESTED LOOPS OUTER join, while the SELECT, run from TOAD, used a HASH OUTER join for some reason. When I put a USE_HASH hint on the INSERT, it comes back fast.
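
For illustration, forcing the hash outer join on the insert would look roughly like this (table and alias names here are made up; only the use_hash hint is the point):

```sql
insert into target_tab
select /*+ use_hash(b) */ a.id, b.val
  from driving_tab a, big_tab b
 where a.id = b.id(+);
```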


tk script

Parthiban Nagarajan, August 26, 2012 - 2:37 am UTC

Hi Tom

I couldn't find the tk script anywhere on the site.
Could you publish it for me?

Thanks and regards

Tom Kyte
August 29, 2012 - 11:58 am UTC

column trace new_val TRACE

select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  from v$process a, v$session b, v$parameter c, v$instance d
 where a.addr = b.paddr
   and b.audsid = userenv('sessionid')
   and c.name = 'user_dump_dest'
/

disconnect
!tkprof &TRACE ./tk.prf &1
connect /
edit tk.prf



only works if you are on the database server itself.
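
As a side note, on 11g and later the session's trace file path can be fetched more directly from v$diag_info, avoiding the v$process/v$session join (this is a sketch of an alternative, not part of the original script):

```sql
-- 11g+: the current session's trace file path, ready for tkprof
select value
  from v$diag_info
 where name = 'Default Trace File';
```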

Current blocks in select

DimON, September 16, 2012 - 9:27 am UTC

Hi Tom,

I noted that the tkprof output in the initial post shows quite a lot of blocks in current mode for a select statement.
The only case I know of in Oracle 10g/11g that produces non-zero current blocks for a select statement is the subquery factoring clause (WITH) plus the materialize hint. But the behavior seems to be different in 8.1.7.
SQL> create table t as
  2  select lpad('*',4000,'*') id from dual
  3  ,(select 1 from all_objects where rownum <= 100)
  4  ,(select 1 from all_objects where rownum <= 100);

Table created.

Elapsed: 00:00:02.64
SQL> delete from t;

10000 rows deleted.

Elapsed: 00:01:76.50

Statistics
----------------------------------------------------------
      13752  recursive calls
      43361  db block gets
      13861  consistent gets
      10000  physical reads
   46039004  redo size
        856  bytes sent via SQL*Net to client
        539  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed

SQL> commit;

Commit complete.

Elapsed: 00:00:00.10
SQL> select * from t;

no rows selected

Elapsed: 00:00:04.06

Statistics
----------------------------------------------------------
          0  recursive calls
         15  db block gets
      20005  consistent gets
       3389  physical reads
     600000  redo size
        210  bytes sent via SQL*Net to client
        314  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

If I increase rows in test case from 10000 to 50000 I'm getting following result:
SQL> select * from t;

no rows selected

Elapsed: 00:00:42.81

Statistics
----------------------------------------------------------
          0  recursive calls
         18  db block gets
     100006  consistent gets
      43685  physical reads
    3000180  redo size
        210  bytes sent via SQL*Net to client
        314  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed
So, there are 15 current blocks in the 10,000-row case and 18 in the 50,000-row case; the numbers do not really correlate with the row count.
SQL> select * from v$version where rownum = 1;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production

Could you explain why the number of current blocks for a select statement is non-zero, and what these blocks are?
What changes were made in 10g/11g regarding current blocks for select statements?

Thanks
Tom Kyte
September 18, 2012 - 2:04 am UTC

I don't have an 8i database installed at all. Since it is software from the last century, I am not really going to be able to investigate it.

these numbers are so teeny tiny as to not really be worth thinking about.

I do recall that the number of current mode gets for full scans changed a lot (got reduced) over the years; that could definitely have something to do with it.
