Question and Answer

Tom Kyte

Thanks for the question, Alex.

Asked: October 17, 2002 - 9:38 am UTC

Last updated: November 05, 2012 - 9:46 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hello Tom,

Thanks for the great job that you are doing maintaining this site.
My question is: can I use tkprof to format a trace file generated by setting event 10046? I checked a couple of sites ( www.hotsos.com and www.oraperf.com ) and it seems they advise reading the "raw" trace file instead of formatting it first with tkprof. What is your take on that?

Regards,
Alex

and Tom said...

If you have my book "Expert One-on-One Oracle", I walk through this in detail.

There I explain the 10046 event and describe how to read the raw trace file. If you use tkprof in 8i and before, it'll ignore the binds and wait events in there.

If you use 9i, it'll show you the wait events IN the tkprof report (very nice). You still have to mine the raw trace file to get the binds tho.

Rating

  (183 ratings)


Comments

Thanks

Alex, October 17, 2002 - 4:30 pm UTC

Thanks for the reply. I will get your book today and read it (I should have done this a long time ago).

But waits do not show up individually

Areader, October 17, 2002 - 5:14 pm UTC

Tom,

Please correct me if I am wrong, but doesn't the tkprof output show ONLY a summary of wait events -- how many times each occurred, etc.? In order to diagnose a problem -- say, to find the segments experiencing buffer busy waits -- we still have to look at the raw trace files and somehow parse the WAIT lines. The purpose of the 10046 event is to provide the segment-related information, and tkprof does not provide that. Or is there some setting in tkprof I'm missing?

Tom Kyte
October 17, 2002 - 7:34 pm UTC

It shows you -- at a glance -- what wait is causing the issue.

Do you want to look at 1000 lines of trace, with dozens of wait events and figure out "ok, which one is the really important one?"

Not me; 9i's tkprof is really nice like that. Most of the time all you need to know is "I waited on X" -- given that, you are well on your way to fixing "X".

Yes, you can go back to the trace and get the hundreds or thousands of lines to see the details when and if you like.
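If you do want to do that adding-up yourself against the raw trace, a minimal Python sketch (assuming the classic `WAIT #n: nam='...' ela= ...` line layout shown in this thread; the exact fields vary by release) would be:

```python
import re
from collections import defaultdict

# Matches the classic raw-trace wait lines shown in this thread, e.g.:
#   WAIT #14: nam='db file sequential read' ela= 1319396 p1=77 p2=14780 p3=1
WAIT_RE = re.compile(r"\s*WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")


def summarize_waits(trace_lines):
    """Return {event: (times_waited, total_ela)} over all WAIT lines."""
    totals = defaultdict(lambda: [0, 0])
    for line in trace_lines:
        m = WAIT_RE.match(line)
        if m:
            event, ela = m.group(1), int(m.group(2))
            totals[event][0] += 1
            totals[event][1] += ela
    return {event: tuple(v) for event, v in totals.items()}


sample = [
    "WAIT #14: nam='db file sequential read' ela= 1319396 p1=77 p2=14780 p3=1",
    "WAIT #14: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0",
    "WAIT #14: nam='db file sequential read' ela= 42 p1=77 p2=14781 p3=1",
]
print(summarize_waits(sample))
```

This is roughly the per-event rollup 9i's tkprof prints in its "Elapsed times include waiting on following events" section.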

Can you see wait events with 9.0.1.4.0 tkprof?

Matt, January 20, 2003 - 11:02 pm UTC

I don't seem to be able to see wait events in my tkprof output. say for instance I have an output like:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.72       1.98        508        523          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.73       1.98        508        523          0           1

from tkprof. I expect that the difference between CPU total and elapsed total (there are examples of larger differences) is attributed to waits. However, I don't see any in the trace files nor in the tkprof output. Might this be a DB configuration issue? My guess is that the customer who sent me the trace has some init.ora parameter unset (timed_statistics maybe), so these are not included in the trace.

BTW: The app enforced timed_statistics=true at the session level when switching on tracing.

Any idea what the problem is?

Tom Kyte
January 21, 2003 - 9:51 am UTC

you need to use the 10046 trace event to get waits in the trace file.

alter session set events '10046 trace name context forever, level 12'

instead of sql_trace=true;
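The trace level controls what extra data lands in the file. By the commonly documented convention (an assumption here, not stated in this answer): level 1 is a basic SQL trace, 4 adds bind values, 8 adds wait events, and 12 adds both. A tiny sketch:

```python
def event_10046_level(binds=False, waits=False):
    """Compose a 10046 trace level.

    Commonly documented convention (an assumption, not from this thread):
    level 1 = basic SQL trace, 4 adds bind values, 8 adds wait events,
    12 = binds + waits.
    """
    return ((4 if binds else 0) + (8 if waits else 0)) or 1


def trace_on_sql(level):
    # Returns the ALTER SESSION statement used throughout this thread.
    return (
        "alter session set events "
        f"'10046 trace name context forever, level {level}'"
    )


print(trace_on_sql(event_10046_level(binds=True, waits=True)))
```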

Makes Sense

Matt, January 21, 2003 - 7:53 pm UTC

I inferred from your comment above that this came out of the box with tkprof and sql_trace=true. I now understand that the 10046 event needs to be set.

I had a search through the 9.0.1.4.0 doco and could not find a reference to this event. Can you reference some Oracle Corp documentation? (I already have your book :o) )

What is the difference between sql_trace=true and the 10046 event in terms of performance degradation of the process? (I can hear what you have to say about this comment already)

What I mean is, I have a choice between event 10046 tracing and sql_trace (only one of which seems to be formally documented by Oracle). Which do I choose? Typically you don't know you need the WAITS until you see a trace.

Is there any reason NOT to always generate a 10046 trace?

Thanks and Regards





Tom Kyte
January 21, 2003 - 8:45 pm UTC

It is an undocumented but widely acknowledged feature...


the 10046 trace can be orders of magnitude larger than sql_trace=true. I do it when I need it, using sql_trace=true for most stuff.

Additional to previous Post

Matt, January 21, 2003 - 7:56 pm UTC

That should have read:

"Is there any reason NOT to always generate a 10046 trace when investigating performance issues?"


What granularity for waits ?

Adrian Billington, July 04, 2003 - 11:12 am UTC

I've been looking at waits in tkprof and it's very useful information, but I can't find anywhere the granularity of the wait timings -- if you don't know how long you've waited, then the information is of no use ;o)

Could you tell me the 9.2.0 wait time unit ?

Thanks

Sorry, just found it !!!

Adrian Billington, July 04, 2003 - 11:20 am UTC

Ignore last review, I just found it on one of your pages...

Adrian

How to read the raw trace file

A reader, October 16, 2003 - 10:41 am UTC

Hi Tom,
I have session trace on with the bind variable option on Oracle APPS.
The following is from the trace file.
How do we match :b1, :b2, ...?
bind 1 has value=1, but there is none for the other binds.

PARSE #52:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3552165300
BINDS #52:
bind 0: dty=1 mxl=2000(2000) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=2000 offset=0
bfp=02060f30 bln=2000 avl=00 flg=09
bind 1: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=24 offset=0
bfp=02075470 bln=22 avl=02 flg=09
value=1
bind 2: dty=1 mxl=32(00) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
bfp=00000000 bln=32 avl=00 flg=09
bind 3: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=24 offset=0
bfp=0206ff34 bln=22 avl=00 flg=09
EXEC #52:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=3552165301
=====================
PARSING IN CURSOR #52 len=89 dep=1 uid=173 oct=3 lid=173 tim=3552165301 hv=3341915018 ad='8ce97ba0'
SELECT TO_CHAR(:b1) FROM PROJWBS@P3E WBS WHERE WBS.WBS_ID = :b1 AND WBS.PROJ_ID = :b3
END OF STMT
PARSE #52:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=3552165301
EXEC #52:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3552165301
FETCH #52:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3552165301
=====================
PARSING IN CURSOR #53 len=51 dep=1 uid=173 oct=3 lid=173 tim=3552165302 hv=4266541333 ad='8ce97a70'
SELECT COUNT(*) FROM TASK@P3E WHERE WBS_ID = :b1
END OF STMT

Tom Kyte
October 16, 2003 - 3:11 pm UTC

ops$tkyte@ORA920> variable x varchar2(2000)
ops$tkyte@ORA920> variable y number
ops$tkyte@ORA920> variable z varchar2(32)
ops$tkyte@ORA920> variable a number
ops$tkyte@ORA920> exec :y := 1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920> @trace
 
Session altered.
 
ops$tkyte@ORA920> select :x, :y, :z, :a from dual;
 
:X
-----------------------------------------------------------------------------------------------------------------------------------
        :Y :Z                                       :A
---------- -------------------------------- ----------
 
         1
 
PARSE #3:c=1953,e=955,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=0,tim=1041339395494142
BINDS #3:
 bind 0: dty=1 mxl=2000(2000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=2000 offset=0
   bfp=4051661c bln=2000 avl=00 flg=05
 bind 1: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=80 offset=0
   bfp=4051bee0 bln=22 avl=02 flg=05
   value=1
 bind 2: dty=1 mxl=32(32) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=0 offset=24
   bfp=4051bef8 bln=32 avl=00 flg=01
 bind 3: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=56
   bfp=4051bf18 bln=22 avl=00 flg=01
EXEC #3:c=0,e=357,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1041339395495142


they were just null, no values.
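Matching bind numbers to values can also be done programmatically. A minimal sketch, assuming the pre-10g `BINDS #n:` layout shown above (a `bind N:` header line, buffer details, then a `value=` line only when something was actually bound -- `avl=00` with no `value=` line means NULL):

```python
import re

# A "bind N:" header starts each bind's entry in a BINDS section.
BIND_RE = re.compile(r"\s*bind (\d+):")
VALUE_RE = re.compile(r"\s*value=(.*)")


def parse_binds(trace_lines):
    """Return {bind_index: value-or-None} for one BINDS section.

    A bind with no value= line (avl=00) was NULL -- exactly the case
    demonstrated above.
    """
    binds, current = {}, None
    for line in trace_lines:
        m = BIND_RE.match(line)
        if m:
            current = int(m.group(1))
            binds[current] = None  # NULL until a value= line says otherwise
            continue
        m = VALUE_RE.match(line)
        if m and current is not None:
            binds[current] = m.group(1)
    return binds


sample = [
    "BINDS #52:",
    " bind 0: dty=1 mxl=2000(2000) mal=00 scl=00 pre=00 size=2000 offset=0",
    "   bfp=02060f30 bln=2000 avl=00 flg=09",
    " bind 1: dty=2 mxl=22(22) mal=00 scl=00 pre=00 size=24 offset=0",
    "   bfp=02075470 bln=22 avl=02 flg=09",
    "   value=1",
]
print(parse_binds(sample))
```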

 

time unit of cpu/parse seconds in tkprof output (not raw trace file)

A reader, November 21, 2003 - 3:41 pm UTC

Tom
what is the time unit of the cpu and/or parse
seconds? Are they in seconds? If you can answer for
8i and 9i, it would be great!

thanx a bunch!!

Tom Kyte
November 21, 2003 - 5:34 pm UTC

in tkprof -- seconds.
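The raw trace file is a different story: the `tim=`/`ela=`/`c=`/`e=` values are not seconds. By the widely documented convention (an assumption here, not stated in this answer), they are centiseconds through 8i and microseconds from 9i onward:

```python
def raw_trace_seconds(value, version_major=9):
    """Convert a raw 10046 trace time (tim=, ela=, c=, e=) to seconds.

    Assumes the widely documented unit change: centiseconds (1/100 s)
    through 8i, microseconds (1/1,000,000 s) from 9i onward.
    """
    return value / 100.0 if version_major <= 8 else value / 1_000_000.0


# ela= 21890 from a 'log file sync' wait in a 9i-style trace:
print(raw_trace_seconds(21890))
```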

I Tried to Trigger a Trace on User System After Logon

Doug Wingate, February 25, 2004 - 8:33 pm UTC

Tom,

I wanted to trace the database activity done on behalf of an Import process. I wanted to trace the entirety of the session associated with the Import process rather than start the importation and then use Set_Ev to start the trace in the middle of the action. So I created an after-logon trigger on user System and I immediately disabled it until I was nearly ready to start the Import process.

CREATE OR REPLACE TRIGGER System.Start_Extended_Trace
AFTER LOGON
ON System.SCHEMA
BEGIN
IF USER = 'SYSTEM' THEN
EXECUTE IMMEDIATE
'ALTER SESSION SET max_dump_file_size = UNLIMITED';
EXECUTE IMMEDIATE
'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 8''';
END IF;
END;
/
ALTER TRIGGER System.Start_Extended_Trace DISABLE;

When I was ready, I enabled the trigger and started the Import process, which logged on as user System. After the process had finished, I disabled the trigger so that other administrators who were testing imports wouldn't also generate traces. When I went to get my trace from the dump destination, I discovered to my horror that every background process was generating a trace file, and had been from the moment I created and immediately disabled the trigger. The only way I could find to stop all those traces was to bounce the database. My questions are

(1) How did it happen that my after-logon trigger caused tracing in all of those background processes? I didn't realize that "logon" was a category applicable to background processes, for one thing, and for another, I think I would have supposed that if "logon" is a category applicable to background processes, those processes were already logged on and therefore wouldn't have triggered tracing.

(2) What could I have done to shut off tracing without bouncing the instance?

Thanks.


Tom Kyte
February 26, 2004 - 8:02 am UTC

1) how did you determine it was backgrounds -- they write to a different directory than where you would have gone.

Were these sessions using a username OTHER THAN system?

I could not reproduce your findings at all in 9204.

2) dbms_system.set_ev to turn it back off.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6793026818923#7781074432964



#0

Vinnie, March 02, 2004 - 2:13 pm UTC

What does the #0 refer to in the RAW trace file?

Tom Kyte
March 02, 2004 - 7:04 pm UTC

show me one in context.

#0 * Trace

Vinnie, March 03, 2004 - 3:22 pm UTC

Tom,

Here is my Trace file:

UPDATE EVENT_10 SET STATUS = :1 WHERE ID = :2 AND CLASS_NAME = :3 AND TYPE = :4 AND TIME_ID = :5 AND ELAPSED_TIME = :6
END OF STMT
PARSE #14:c=0,e=400,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=2411694204303
BINDS #14:
bind 0: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5cdf0 bln=4000 avl=01 flg=05
value="P"
bind 1: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5be10 bln=4000 avl=34 flg=05
value="TESTID"
bind 2: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5ae30 bln=4000 avl=35 flg=05
value="id.test"
bind 3: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac59e50 bln=4000 avl=01 flg=05
value="U"
bind 4: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=48 offset=0
bfp=ffffffff7ac5e5f8 bln=22 avl=02 flg=05
value=10
bind 5: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5e610 bln=22 avl=03 flg=01
value=8227
EXEC #14:c=0,e=2179,p=0,cr=3,cu=1,mis=0,r=1,dep=0,og=4,tim=2411694206811
WAIT #14: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #14: nam='SQL*Net message from client' ela= 1015 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 21890 p1=2460 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 9924 p1=1413697536 p2=1 p3=0

1. What is a #0, system level cursor?
...
...
BINDS #14:
bind 0: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5cdf0 bln=4000 avl=01 flg=05
value="P"
bind 1: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5be10 bln=4000 avl=34 flg=05
value="TESTID2"
bind 2: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5ae30 bln=4000 avl=35 flg=05
value="id.test"
bind 3: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac59e50 bln=4000 avl=01 flg=05
value="U"
bind 4: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=48 offset=0
bfp=ffffffff7ac5e5f8 bln=22 avl=02 flg=05
value=10
bind 5: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5e610 bln=22 avl=03 flg=01
value=8227
EXEC #14:c=0,e=131592,p=0,cr=3,cu=2,mis=0,r=1,dep=0,og=4,tim=2411697170266
WAIT #14: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #14: nam='SQL*Net message from client' ela= 537 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0

2. Why is the e-time so high with no WAITs?
..
..
bind 0: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5cdf0 bln=4000 avl=01 flg=05
value="P"
bind 1: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5be10 bln=4000 avl=34 flg=05
value="TESTID3"
bind 2: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac5ae30 bln=4000 avl=35 flg=05
value="id.test"
bind 3: dty=1 mxl=4000(4000) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=4000 offset=0
bfp=ffffffff7ac59e50 bln=4000 avl=01 flg=05
value="U"
bind 4: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=48 offset=0
bfp=ffffffff7ac5e5f8 bln=22 avl=02 flg=05
value=10
bind 5: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=0 offset=24
bfp=ffffffff7ac5e610 bln=22 avl=03 flg=01
value=8297
WAIT #14: nam='db file sequential read' ela= 1319396 p1=77 p2=14780 p3=1
EXEC #14:c=0,e=1321771,p=1,cr=3,cu=1,mis=0,r=1,dep=0,og=4,tim=2411728135779
WAIT #14: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #14: nam='SQL*Net message from client' ela= 725 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0

You can see from the last update we had to wait on a sequential read for 1319396. We have other waits for db file sequential reads that are very fast! Why would some waits for db file sequential reads be very fast and others very slow?

Thanks again.



Tom Kyte
March 03, 2004 - 4:30 pm UTC

1) the wait is a wait not associated with any cursor.

log file sync = wait during "commit". it is associated with your

XCTEND rlbk=0, rd_only=0

record (commit) right above. The next two waits are also not associated with a cursor; this one, for example:

WAIT #0: nam='SQL*Net message from client' ela= 9924 p1=1413697536 p2=1 p3=0

just means you "sat there after you committed and did nothing for a while"

2) different loads = different waits. sometimes disk is fast, sometimes not. Biggest differences could be due to this file being on a buffered OS file system. You could be comparing:

a) physical IO the OS provided you the answer with from its cache versus
b) physical IO the OS actually truly had to goto disk to read versus
c) physical IO under little or no contention versus
d) physical IO under heavy contention
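The first point -- cursor #0 waits belong to the session, not to any cursor -- can be sketched as a simple split over the trace lines (same assumed WAIT-line layout as shown above):

```python
import re

WAIT_RE = re.compile(r"\s*WAIT #(\d+): nam='([^']+)' ela=\s*(\d+)")


def split_waits_by_cursor(trace_lines):
    """Separate per-cursor waits from cursor #0 (session-level) waits."""
    per_cursor, session_level = {}, []
    for line in trace_lines:
        m = WAIT_RE.match(line)
        if not m:
            continue
        cursor, event, ela = int(m.group(1)), m.group(2), int(m.group(3))
        if cursor == 0:
            # e.g. the 'log file sync' that follows the XCTEND (commit)
            session_level.append((event, ela))
        else:
            per_cursor.setdefault(cursor, []).append((event, ela))
    return per_cursor, session_level


sample = [
    "WAIT #14: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0",
    "XCTEND rlbk=0, rd_only=0",
    "WAIT #0: nam='log file sync' ela= 21890 p1=2460 p2=0 p3=0",
    "WAIT #0: nam='SQL*Net message from client' ela= 9924 p1=1413697536 p2=1 p3=0",
]
print(split_waits_by_cursor(sample))
```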



even in a SAN the location of datasets counts

idai, March 04, 2004 - 4:30 am UTC


I have experienced that even in a SAN storage system, the location of some specific Oracle datasets counts ...

We figured out that on our FAStT 900 storage subsystem, the controlfile and redo logs were put on the same spindle in a multivolume LUN. That caused our performance to decrease enormously.


more WAIT event but very little "ela=" time

Sami, March 11, 2004 - 5:14 pm UTC

Dear Tom,

Thanks for your help & support in the past.

One of our SQL queries is timing out in the application. We generated an EXTENDED trace; it has almost 9000+ WAIT events, but most of them have ela= 0.

Could you kindly give some hint as to what could be wrong here, or where I should look for more detail?
The wait event sequence is mostly

1)rdbms ipc reply
2)global cache freelist wait
3)global cache cr request
4)db file sequential read


1)TRACE file content
===================
WAIT #3: nam='global cache freelist wait' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='db file sequential read' ela= 0 p1=7 p2=1178443 p3=1
WAIT #3: nam='rdbms ipc reply' ela= 0 p1=33 p2=120 p3=0
WAIT #3: nam='global cache freelist wait' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='db file sequential read' ela= 1 p1=7 p2=1178453 p3=1
WAIT #3: nam='rdbms ipc reply' ela= 0 p1=33 p2=120 p3=0
WAIT #3: nam='global cache freelist wait' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='global cache cr request' ela= 0 p1=7 p2=1178474 p3=18591
WAIT #3: nam='db file sequential read' ela= 0 p1=7 p2=1178474 p3=1
WAIT #3: nam='rdbms ipc reply' ela= 0 p1=33 p2=120 p3=0
WAIT #3: nam='global cache freelist wait' ela= 0 p1=0 p2=0 p3=0
WAIT #3: nam='global cache cr request' ela= 0 p1=7 p2=1178544 p3=18592
WAIT #3: nam='db file sequential read' ela= 0 p1=7 p2=1178544 p3=1
WAIT #3: nam='rdbms ipc reply' ela= 0 p1=33 p2=120 p3=0


2)TOTAL WAIT count
===================

$ grep WAIT prod1_ora_16860_sql_2.trc|wc -l
9245



3)STAT
===================


STAT #3 id=1 cnt=1724 pid=0 pos=0 obj=0 op='SORT ORDER BY '
STAT #3 id=2 cnt=1724 pid=1 pos=1 obj=0 op='NESTED LOOPS '
STAT #3 id=3 cnt=1749 pid=2 pos=1 obj=0 op='MERGE JOIN CARTESIAN '
STAT #3 id=4 cnt=2 pid=3 pos=1 obj=951537 op='TABLE ACCESS BY INDEX ROWID COUNTRIES '
STAT #3 id=5 cnt=2 pid=4 pos=1 obj=1131957 op='INDEX RANGE SCAN '
STAT #3 id=6 cnt=1749 pid=3 pos=2 obj=0 op='SORT JOIN '
STAT #3 id=7 cnt=1748 pid=6 pos=1 obj=5839 op='TABLE ACCESS BY INDEX ROWID PRIMARYTABLE '
STAT #3 id=8 cnt=1750 pid=7 pos=1 obj=1010125 op='INDEX RANGE SCAN '
STAT #3 id=9 cnt=1724 pid=2 pos=2 obj=951553 op='TABLE ACCESS BY INDEX ROWID SECTABLE '
STAT #3 id=10 cnt=3496 pid=9 pos=1 obj=951554 op='INDEX UNIQUE SCAN '

4)QUERY
===========

SELECT PRIMARYTABLE.PRIMARYTABLE,
PRIMARYTABLE.USERID,
PRIMARYTABLE.LAST_NAME,
PRIMARYTABLE.FIRST_NAME,
PRIMARYTABLE.COMPANY_NAME,
PRIMARYTABLE.NAME,
PRIMARYTABLE.BUSINESS_COUNTRY_ID,
PRIMARYTABLE.CAMLEVELID,
PRIMARYTABLE.SEARCH_LAST_NAME,
PRIMARYTABLE.SEARCH_FIRST_NAME,
PRIMARYTABLE.SEARCH_COMPANY_NAME,
PRIMARYTABLE.REGION,PRIMARYTABLE.PROCESSED_BY,
SECTABLE.COMPANYINCCOUNTRY,
SECTABLE.USERSTATUSID,
SECTABLE.UPDATEDBY,
SECTABLE.CUSTOMERID,
SECTABLE.LASTUPDATEDATE,
countries.COUNTRYNAME
FROM PRIMARYTABLE,
SECTABLE,
countries
WHERE PRIMARYTABLE.PRIMARYTABLE=SECTABLE.PRIMARYTABLE
AND PRIMARYTABLE.BUSINESS_COUNTRY_ID=countries.COUNTRYABBREV
AND COMPANY_user_category in ('VALUE1','VALUE2')
AND SECTABLE.USERSTATUSID IN (1,2,3,4,5,6,7,8,9)
AND PRIMARYTABLE.BUSINESS_COUNTRY_ID IN ('USA')
ORDER BY SECTABLE.LASTUPDATEDATE desc
END OF STMT



Tom Kyte
March 12, 2004 - 9:26 am UTC

so, is this a query that stands a chance at all to "perform"? What is the application, what TIME does it currently take (in sqlplus -- get the app out of the loop for a minute), what time does it NEED to take, and what does the tkprof report look like? (tkprof will do the adding up of wait events for you; you don't need to grep.)

timed_statistics is turned on, right?

elapsed time greater than CPU time

A reader, April 02, 2004 - 7:57 am UTC

Tom,

I have cut a few similar lines from the tkprof output.
In this I see the cpu time is 0.33 and the elapsed time is 6.04. If I am correct, they should be nearly equal for a well-tuned application ... if that is the case, where could my problem be? Also: disk = 1172.

Please let me know how I can improve on this. The fetch count being 9, I feel I should use bulk fetch (as I read in your book). I will do that immediately; please let me know how I can still make this better.

How can I better write this part of the SQL?
And ( groupnames
like '%CF6-80C2_fn_rli_000106%' OR groupnames like
'%CF6-80C2_fn_rw_000050%' OR groupnames like '%CF6-80C2_fn_rw_000107%'
OR groupnames like '%CF6-80E_fn_rw_000089%' OR groupnames like
'%CF6-80E_fn_rwk_000090%' OR groupnames like '%GE90_fn_rwdwk_000133%' OR
groupnames like '%GE90_rw_000187%' OR groupnames like
'%GP7000_rw_000199%' OR groupnames like '%eng_rl_000100%' OR groupnames
like '%eng_rl_000156%' OR groupnames like '%eng_rl_000157%' OR
groupnames like '%npss_rlidwka_000121%' OR groupnames like
'%npss_rlidwka_000159%' OR groupnames like '%sys_rl_000079%' OR
groupnames like '%sys_rl_000098%' )

...
Thanks a lot for your time and considerations ...

select filename
from
NPSS_TEST_2 where filename in
('/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_d
ir3_dir3_dir3_dir3/4/filenames/100000000166610.java',
'/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_di
r3_dir3_dir3_dir3/4/filenames/100000000038827.java',
.
.
.
.
. ### 1000 files were searched
.
.
.
.

'/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_di
r3_dir3_dir3_dir3/4/filenames/100000000229069.java',
'/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_di
r3_dir3_dir3_dir3/4/filenames/100000000003230.java',
'/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_di
r3_dir3_dir3_dir3/4/filenames/100000000031842.java',
'/aef/.ae.ge.com/npss/npssSys/src/models/GE90/sub_dir1/sub_dir2/sub_dir3_di
r3_dir3_dir3_dir3/4/filenames/100000000106922.java') And ( groupnames
like '%CF6-80C2_fn_rli_000106%' OR groupnames like
'%CF6-80C2_fn_rw_000050%' OR groupnames like '%CF6-80C2_fn_rw_000107%'
OR groupnames like '%CF6-80E_fn_rw_000089%' OR groupnames like
'%CF6-80E_fn_rwk_000090%' OR groupnames like '%GE90_fn_rwdwk_000133%' OR
groupnames like '%GE90_rw_000187%' OR groupnames like
'%GP7000_rw_000199%' OR groupnames like '%eng_rl_000100%' OR groupnames
like '%eng_rl_000156%' OR groupnames like '%eng_rl_000157%' OR
groupnames like '%npss_rlidwka_000121%' OR groupnames like
'%npss_rlidwka_000159%' OR groupnames like '%sys_rl_000079%' OR
groupnames like '%sys_rl_000098%' )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.09       0.11          0          0          0           0
Execute      1      0.02       0.01          0          0          0           0
Fetch        9      0.22       5.92       1172       4980          0         996
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       11      0.33       6.04       1172       4980          0         996

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 5

Rows Row Source Operation
------- ---------------------------------------------------
996 CONCATENATION
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
.
.
.
.
.
.
. ## like this nearly 2*1000 (approx.) times

2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)
1 TABLE ACCESS BY INDEX ROWID NPSS_TEST_2
2 INDEX UNIQUE SCAN (object id 3245)





Tom Kyte
April 02, 2004 - 10:19 am UTC

the relationship of cpu to elapsed is not indicative of tuned vs not tuned.

you could have the best tuned SQL on the planet -- run it on a machine with too many other concurrent things and you'll wait for CPU

or run it without having the data cached and it'll wait for IO

or <infinite number of things possible here>



in your case it is probably the physical IO that was required to process your query (run it again and see what happens)

PLEASE, use bind variables!

A reader, April 04, 2004 - 1:12 am UTC

Tom,

We are using a Java program and, as you suggested in your book, we are using bulk fetch (rows per fetch) and prepared statements -- we are using bind variables. One thing I noticed is that after I restart my server this program runs for 15 secs, then the 2nd time it takes 8, then 7, 6, 5 ... it keeps decreasing.

Please let me know how to decrease the physical IO, and the reason why each time I see a reduction of about 1 sec.

Thanks a lot

Tom Kyte
April 04, 2004 - 8:37 am UTC

you mean it goes to zero and then negative?

there is some lower bound, what you are seeing is the natural "we got warmed up and everything is dandy" sort of effect one expects after shutting down and starting.

Just like your browser takes a long time to start after you reboot -- but goes faster the second time.

how can i reduce physical i/o

A reader, April 05, 2004 - 9:07 am UTC

Tom,

what could be the best way to decrease the physical IO? Given 1 million records, if I search 1000 records from that, randomly -- how can I reduce physical I/O in any system?

Please let me know this; sorry for my ignorance.

thanks

Tom Kyte
April 05, 2004 - 9:54 am UTC

for pure randomness -- you have to assume the entire table would be processed, and no indexing scheme or physical organization scheme would do anything for you.

indexing techniques are useful for solving some issues.

physical storage schemes (clustering, IOTs, hash tables) are useful as well

10046 can't reconstruct sql

bob, April 20, 2004 - 7:34 am UTC

Tom,

A third party app uses an Oracle object type in the where clause, and the constructor for the object takes another object. The inner object is bound in. The 10046 trace gives nothing intelligible for this inner object. It just does a memory dump for the bind variable value, but it isn't human readable.

So am I out of luck with regards to figuring out what bind variables/sql text they actually sent to the database when the bind variable was an object instead of a literal?


Tom Kyte
April 20, 2004 - 8:49 am UTC

yes, object types are not printable in a human readable/friendly format.

tkprof waiting events

pjp, August 09, 2004 - 2:50 am UTC

Hi Tom,

In this thread you have mentioned that :- "If you use 9i, it'll show you the wait events IN the tkprof report (very nice). You still have to mine the raw trace file to get the binds tho.".

So my questions are

1. Is it still necessary to read the raw trace file in Oracle 9i, now that it shows wait events?

2. Can you give us an example of how wait events appear in Oracle 9i tkprof output? (I am not able to simulate a wait-event situation.)

thanks & regards


Tom Kyte
August 09, 2004 - 7:54 am UTC

1) if you just want waits, probably not (although you can still see patterns in there that you cannot in the tkprof report, but 9999 times out of 10000, the tkprof will more than suffice).  

if you want to know the bind variable values -- yes. you need to mine the raw trace file.


Ok, in some session, I locked table emp in exclusive mode and then in a fresh session:

ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> update emp set ename = ename;


and waited a bit, then committed the LOCK TABLE transaction in the other session...



and tkprof says:

update emp set ename = ename
                                                                                                                  
                                                                                                                  
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00      22.14          1          3         17          14
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00      22.16          1          3         17          14
                                                                                                                  
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 200
                                                                                                                  
Rows     Row Source Operation
-------  ---------------------------------------------------
      0  UPDATE  (cr=25 r=5 w=0 time=61876 us)
     14   TABLE ACCESS FULL OBJ#(129759) (cr=3 r=1 w=0 time=421 us)
                                                                                                                  
                                                                                                                  
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  enqueue                                         8        2.99         22.14
  db file sequential read                         1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************


showing I was blocked on an enqueue wait.

*** TKPROF queries *****

pjp, August 10, 2004 - 5:53 am UTC

Hi Tom,

I have the following output, generated from a SQL trace with tkprof
by the following command in Oracle 8.1.7:

tkprof <filename> explain=oracle/oracle output=test.txt

07:43:54 ops$oracle@INFOD> show parameters timed

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
timed_os_statistics                  integer 0
timed_statistics                     boolean TRUE

select b.a2560, a.a0090, a.a1230a, a.a1010, a.a6000,a.a6010,a.a0230,
a.a6520,c.a2215||c.a2211, d.a9060,
d.a0110, d.a9030, d.a9200
from tdc31 a, tdt31 b, tdt36 c, tdz12 d
where a.a0090 = b.a0090
and a.a1010 = b.a1010
and b.a0090 = c.a0090
and b.a1010 = c.a1010
and b.a2200a = c.a2200
and a.a0090 = d.a0090
and a.a1010 = d.a1010
and a.a6000 = d.a6000
and a.a0230 = d.a0230
and d.a9000 = '20040809'
and b.a2560 != 'BNK'
order by b.a2560, a.a0090, a.a1230a

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.03          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1254      4.17       6.71      21040      21237        118       18789
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1256      4.21       6.74      21040      21237        118       18789

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 903 (ATLAS)

Rows     Row Source Operation
-------  ---------------------------------------------------
  18789  SORT ORDER BY
  18789  HASH JOIN
  18789  HASH JOIN
  44448  TABLE ACCESS FULL TDT31
  31508  HASH JOIN
  62587  TABLE ACCESS FULL TDT36


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
  18789  SORT (ORDER BY)
  18789  HASH JOIN
  18789  HASH JOIN
  44448  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDT31'
  31508  HASH JOIN
  43012  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDC31'
  31508  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDZ12'
  62587  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDT36'

********************************************************************************
select b.a2560, a.a0090, a.a1230a, a.a1010, a.a6000,a.a6010,a.a0230,
a.a6520,c.a2215||c.a2211, d.a9060,
d.a0110, d.a9030, d.a9200
from tdc31 a, tdt31 b, tdt36 c, tdz12 d
where a.a0090 = b.a0090
and a.a1010 = b.a1010
and b.a0090 = c.a0090
and b.a1010 = c.a1010
and b.a2200a = c.a2200
and b.a2560 != 'BNK'
and a.a0090 = d.a0090
and a.a1010 = d.a1010
and a.a6000 = d.a6000
and a.a0230 = d.a0230
and d.a9000 = '20040809'
order by b.a2560, a.a0090, a.a1230a

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.03       0.05          0          0          0           0
Execute      3      0.00       0.01          0          0          0           0
Fetch     1404      7.85      17.55      41547      42474        236       21025
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 903 (ATLAS)

Rows     Row Source Operation
-------  ---------------------------------------------------
   2236  SORT ORDER BY
  18789  HASH JOIN
  18789  HASH JOIN
  44448  TABLE ACCESS FULL TDT31
  31508  HASH JOIN
  43012  TABLE ACCESS FULL TDC31
  31508  TABLE ACCESS FULL TDZ12
  62587  TABLE ACCESS FULL TDT36


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
   2236  SORT (ORDER BY)
  18789  HASH JOIN
  18789  HASH JOIN
  44448  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDT31'
  31508  HASH JOIN
  43012  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDC31'
  31508  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDZ12'
  62587  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TDT36'

********************************************************************************

alter session set sql_trace=false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.01          0          0          0           0

My questions regarding this:

1. Why am I not able to see ID and Parent ID in the explain plan?

2. In the explain plan, why is it not showing the (cost, card, bytes) clause, e.g. HASH JOIN (Cost=3 Card=8 Bytes=248)?

3. Other than the full table scans, what conclusions can you arrive at? (I am new to the sql trace area, so your conclusions will help me clear up the concepts.)

thanks & regards
pjp

Tom Kyte
August 10, 2004 - 8:01 am UTC

1) you don't need to? they used it to format and indent the report???

2) because tkprof doesn't do that.

3) i would suggest you never use explain=u/p with tkprof. use the actual plan that was used and is supplied in the report "for free" already. explain plan can "lie" at times and explain plan says "this is what would happen right now, today, at this point", it doesn't tell you what would have happened when the query actually ran necessarily.

Questions for earlier questions

pjp, August 10, 2004 - 5:56 am UTC

Hi Tom,

Sorry, I would like to add one question to the earlier 3 questions I asked.

4. Why is the explain plan not showing the ID and Parent ID numbers?

thanks & regards
pjp

Tom Kyte
August 10, 2004 - 8:04 am UTC

that looks just like #1 doesn't it.

Missing execution plan

Peter Tran, November 02, 2004 - 2:04 pm UTC

Hi Tom,

I'm doing a level 12 10046 trace. I notice on some SQLs the execution plan is missing. Any reason why?

Thanks,
-Peter

Tom Kyte
November 03, 2004 - 6:26 am UTC

the cursor was not closed when you looked at the trace file.

if you are in sqlplus, exit sqlplus before tkprofing it.

if you are in an application, you may well have to "exit" it to close out all of the cursors.

8.1.7

A reader, November 16, 2004 - 2:03 pm UTC

Hi Tom. How can I make tkprof show the wait events formatted as you did above?

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  enqueue                                         8        2.99         22.14
  db file sequential read                         1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************


They are not even shown under 8.1.7. Am I missing something? Thank you!

Tom Kyte
November 16, 2004 - 10:57 pm UTC

by upgrading to 9ir2 :)

it is new with that now old release.

raw trace file - bind values all over

Venkat, December 20, 2004 - 5:14 pm UTC

Tom,

Here are some portions of a raw trace file (cut and pasted different segments):

Questions:
1. The binds for CURSOR #4 (and other cursors) show up all over the trace file: after CURSOR #5 and again after CURSOR #17 (towards the end of the file). Is this typical? If I am looking for a particular bind value, how can I link all the binds used in executions of CURSOR #4?

2. Do the cursor numbers (#3, #4, ... #17) have any relevance to their sequence? For example, does the statement corresponding to #3 always occur before #4 and cursor number #17? In the trace file, the binds for CURSOR #5 appear and then the CURSOR #5 statement appears. So, I guess not.

Thanks,
Venkat

==================
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /u01_c4/dev1db/9.2.0
System name: HP-UX
Node name: red
Release: B.11.11
Version: U
Machine: 9000/800
Instance name: dev1
Redo thread mounted by this instance: 1
Oracle process number: 184
Unix process pid: 4389, image: oracle@red (TNS V1-V3)

*** 2004-12-20 15:32:52.826
*** SESSION ID:(64.2462) 2004-12-20 15:32:52.821

...
...
=====================
PARSING IN CURSOR #4 len=185 dep=1 uid=651 oct=3 lid=651 tim=3199027268318 hv=2455344326 ad='b532dfd8'
SELECT PROFILE_OPTION_ID,APPLICATION_ID FROM FND_PROFILE_OPTIONS WHERE PROFILE_OPTION_NAME = UPPER(:b1)
AND START_DATE_ACTIVE <= SYSDATE AND NVL(END_DATE_ACTIVE,SYSDATE) >= SYSDATE
END OF STMT
EXEC #4:c=0,e=157,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027268310
FETCH #4:c=0,e=35,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=3199027268447
BINDS #5:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=96 offset=0
bfp=800003fa40082a90 bln=22 avl=03 flg=05
value=1991
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=24
bfp=800003fa40082aa8 bln=22 avl=01 flg=01
value=0
bind 2: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=48
bfp=800003fa40082ac0 bln=22 avl=04 flg=01
value=10004
bind 3: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=72
bfp=800003fa40082ad8 bln=22 avl=03 flg=01
value=9804
=====================
PARSING IN CURSOR #5 len=160 dep=1 uid=651 oct=3 lid=651 tim=3199027268862 hv=1523956159 ad='b61af640'
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE PROFILE_OPTION_ID = :b1 AND APPLICATIO
N_ID = :b2 AND LEVEL_ID = :b3 AND LEVEL_VALUE = :b4
END OF STMT
EXEC #5:c=0,e=336,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027268855
FETCH #5:c=0,e=36,p=0,cr=2,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027268978
BINDS #4:
bind 0: dty=1 mxl=128(06) mal=00 scl=00 pre=00 oacflg=13 oacfl2=c000000100000001 size=128 offset=0
bfp=800003fa40082cb0 bln=128 avl=06 flg=05
value="ORG_ID"
EXEC #4:c=0,e=77,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027269146
FETCH #4:c=0,e=9,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=3199027269188
BINDS #6:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=96 offset=0
bfp=800003fa40082ae0 bln=22 avl=03 flg=05
value=1991
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=24
bfp=800003fa40082af8 bln=22 avl=01 flg=01
value=0
bind 2: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=48
bfp=800003fa40082b10 bln=22 avl=04 flg=01
value=51002
bind 3: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=72
bfp=800003fa40082b28 bln=22 avl=04 flg=01
value=20140
=====================

......
PARSING IN CURSOR #17 len=85 dep=1 uid=651 oct=3 lid=651 tim=3199027306706 hv=1354864710 ad='b53b5db8'
SELECT NVL(MULTI_ORG_FLAG,'N'),NVL(MULTI_CURRENCY_FLAG,'N') FROM FND_PRODUCT_GROUPS
END OF STMT
PARSE #17:c=0,e=49,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027306698
BINDS #17:
EXEC #17:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027306833
FETCH #17:c=0,e=40,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=3199027306896
BINDS #4:
bind 0: dty=1 mxl=128(06) mal=00 scl=00 pre=00 oacflg=13 oacfl2=8000000100000001 size=128 offset=0
bfp=800003fa400b05f0 bln=128 avl=06 flg=05
value="ORG_ID"
EXEC #4:c=0,e=107,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027307167
FETCH #4:c=0,e=17,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=3199027307220
BINDS #5:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=96 offset=0
bfp=800003fa400b0100 bln=22 avl=03 flg=05
value=1991
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=24
bfp=800003fa400b0118 bln=22 avl=01 flg=01
value=0
bind 2: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=48
bfp=800003fa400b0130 bln=22 avl=04 flg=01
value=10004
bind 3: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=72
bfp=800003fa400b0148 bln=22 avl=03 flg=01
value=9804
EXEC #5:c=0,e=215,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027307486
FETCH #5:c=0,e=11,p=0,cr=2,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027307529
BINDS #4:
bind 0: dty=1 mxl=128(06) mal=00 scl=00 pre=00 oacflg=13 oacfl2=c000000100000001 size=128 offset=0
bfp=800003fa400b05f0 bln=128 avl=06 flg=05
value="ORG_ID"
EXEC #4:c=0,e=85,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3199027307726
FETCH #4:c=0,e=9,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=3199027307768
BINDS #6:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=96 offset=0
bfp=800003fa400b0410 bln=22 avl=03 flg=05
value=1991
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=24
bfp=800003fa400b0428 bln=22 avl=01 flg=01
value=0
bind 2: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=48
bfp=800003fa400b0440 bln=22 avl=04 flg=01
value=51002
bind 3: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=4000000000000001 size=0 offset=72
bfp=800003fa400b0458 bln=22 avl=04 flg=01
value=20140
....




Tom Kyte
December 20, 2004 - 5:22 pm UTC

a trace file is a linear thing -- as events happen they are written down.

the binds are noted as they are bound, so if you

open #4
bind #4
open #5
bind #5
exec #4
bind #4
exec #5
bind #5
exec #4


that is exactly what you would see in the trace file.


You would just search for "binds #4" and stop looking if you ever see #4 get closed....



The cursor numbers are just assigned as statements are opened, so you could have a program that


opens #3
opens #4
opens #5
opens #5
closes #4
opens #4


and so on -- think of them as "slots in an array" -- the lowest slot that is free is the next slot that will be used (they are really slots in an array)


Wait events in 8.1.6+

DaPi, December 21, 2004 - 3:17 am UTC

The trace analyser will show you wait events for 8.1.6 onwards:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=224270.1

Events

Rory, January 25, 2005 - 10:36 pm UTC

Hi Tom,

Just a quick question. Where do you get a list of the events that can be traced, like 10046 and 10053, and their descriptions too? Are the parameters the same when enabling them (alter session set events .....) or are there other combinations?
Thanks a bunch.

Tom Kyte
January 26, 2005 - 8:25 am UTC

google:

10046 oracle
10053 oracle


they are the only two I use. There are hundreds of others, settable when support says "set this"

A reader, March 09, 2005 - 3:59 pm UTC

Rows Row Source Operation
------- ---------------------------------------------------
0 SORT GROUP BY (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_TEAM_DEF (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL LOCATION (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_CUST_MDM_TAG (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_ACTIVITY_TYPE_DEF (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_TEAM_DEF (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 INDEX FAST FULL SCAN MKTG_TEAM_HIER_I0 (cr=0 r=0 w=0 time=0 us)(object id 1868021)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 INDEX FAST FULL SCAN MKTG_PERSON_TEAM_I0 (cr=0 r=0 w=0 time=0 us)(object id 1868013)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_PERSON (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 HASH JOIN (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL DATE_HIERARCHY (cr=0 r=0 w=0 time=0 us)
0 PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_ACTIVITY_DEF PARTITION: KEY KEY (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS FULL MKTG_ACTIVITY_EMPLOYEE (cr=0 r=0 w=0 time=0 us)
0 INDEX FAST FULL SCAN MKTG_ACTIVITY_CUST_I0 (cr=0 r=0 w=0 time=0 us)(object id 1735523)

Hi Tom,

A portion of the output after doing 10046 trace and tkprofing the resultant file is shown above
Why are the rows shown as 0 in the tkprof output?

This is true for all SQLs shown in the tkprof output.

Tom Kyte
March 09, 2005 - 6:17 pm UTC

you did not close the cursors

the stat records are written to the trace file when the cursors are closed


OR you turned trace on, then turned it off - then closed the cursors.

best to a) turn on trace, b) run sql, c) exit session, d) tkprof

A reader, March 09, 2005 - 4:01 pm UTC

Continuation of above post

Not only the rows, but other stats like physical reads, consistent reads, etc. are 0 as well.

Timed_statistics is set to ON



A reader, March 09, 2005 - 4:03 pm UTC

Is it due to the query being run in parallel?

A reader, March 09, 2005 - 5:00 pm UTC

Hi Tom,

When I disabled the parallel query, I got the stats.

Is there any way of getting similar tkprof stats for parallel queries?

thanks

Tom Kyte
March 09, 2005 - 6:31 pm UTC

the stats go into the pq slave traces.

Which tkprof options we should use ?

Parag J Patankar, March 31, 2005 - 10:26 am UTC

Hi Tom,

In my question in this thread you suggested that I should not use the explain plan option with tkprof, as it may be misleading. Can you guide me on which options I should use with tkprof in Oracle 9iR2? And secondly, which options are you using for generating your tkprof outputs?

regards & thanks
pjp

Tom Kyte
March 31, 2005 - 10:51 am UTC

I use

tkprof input_trace_name output_report_name


in general, and when needed:

sys=no to get rid of the recursive sql when I find it uninteresting or in the way.
agg=no to see reports by executing of sql, rather than summed up.

10046

Yogesh, April 05, 2005 - 9:47 am UTC

I'm facing a performance issue. To pinpoint the waits I used the following:

alter session set timed_statistics=true;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever, level 12';

In my raw file, I found 270 waits. All of them were nam='write complete waits'. When I analyzed the values of P1, I found 2 files (#5,#450) facing this problem.

#5 belongs to RBS, #450 belongs to DEV TBS.

sum(ela) for #5 = 2073
sum(ela) for #450 = 1169

How can I avoid this WAIT?

When I formatted the file using TKPROF, I found one weird thing.

SELECT COUNT(*) FROM CHK WHERE CNO = :b1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  62986     17.07      16.27          0          0          0  1983649591
Fetch    62986      3.21       3.00          0     125826          0       62986
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   125973     20.28      19.27          0     125826          0  1983712577

Table CHK does not have more than 10000 rows. Can't understand the figure 1983649591...


Tom Kyte
April 05, 2005 - 12:13 pm UTC

rows in an execute would be a fluke -- ignore it.

you wait on the write complete waits when you are trying to get a buffer in the cache but dbwr has to flush it to make room.

you either need to checkpoint more aggressively (keeping the buffer cache cleaner) or look at the overall size of the cache, is it right.
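The per-file sums the poster computed by hand (sum of ela grouped by the P1 value of the 'write complete waits' lines) can be sketched like this, assuming the 9i raw WAIT line layout, e.g. `WAIT #7: nam='write complete waits' ela= 103 p1=5 p2=210 p3=0`, where p1 is the file#:

```python
import re
from collections import defaultdict

# Matches 9i-style WAIT lines; the regex is a sketch for that layout only.
WAIT_RE = re.compile(
    r"WAIT #\d+: nam='(?P<nam>[^']*)' ela=\s*(?P<ela>\d+) p1=(?P<p1>\d+)"
)

def wait_totals_by_file(trace_lines, event="write complete waits"):
    """Total ela per P1 (file#) for one wait event, from raw WAIT lines."""
    totals = defaultdict(int)
    for line in trace_lines:
        m = WAIT_RE.search(line)
        if m and m.group("nam") == event:
            totals[int(m.group("p1"))] += int(m.group("ela"))
    return dict(totals)
```

Note that the meaning of p1/p2/p3 differs per event (for the I/O events p1 is a file#, for enqueue waits it is not), so check the wait event reference before grouping on it.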

Wait listing

Yogesh, April 05, 2005 - 12:44 pm UTC

Where can I find more details about waits? Any URL? I can't find one on metalink except 39817.1.

Tom Kyte
April 05, 2005 - 6:31 pm UTC

have you checked out the reference guide?

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96536/apa.htm#968373

wait even summary for 8i traces

Andrew Fraser, May 05, 2005 - 12:28 pm UTC

For people (like me) still stuck with some 8i databases, 9i tkprof works against trace files produced from 8i databases, and displays the level 8 wait event summary. For example:

$ /ora/9ir2/bin/tkprof 8i.trc 8i_waits.prf

stats in tkprof

daniel, May 15, 2005 - 9:59 am UTC

I'm executing this pl/sql:

DECLARE
CURSOR cdd IS
SELECT sor_account_number,changed_column_name,common_account_id,cdm.PERSON_SEQUENCE ,MAX(sor_timestamp) m_ts
FROM v_cis_change_detail cdm
WHERE cdm.fk_system_info_id = 11
AND cdm.common_account_id IS NOT NULL
AND cdm.is_processed_ind IS NULL
GROUP BY sor_account_number,cdm.changed_column_name,cdm.common_account_id,cdm.PERSON_SEQUENCE
HAVING COUNT(1) > 1;
BEGIN
---
FOR cdd_rec IN cdd LOOP
UPDATE v_cis_change_detail cd
SET cd.bypassed_ind =1,is_processed_ind = 1
WHERE cd.is_processed_ind IS NULL
AND cd.changed_column_name = cdd_rec.changed_column_name
AND cd.fk_system_info_id = 11
AND cd.common_account_id = cdd_rec.common_account_id
AND cd.PERSON_SEQUENCE = cdd_rec.PERSON_SEQUENCE
AND cd.SOR_TIMESTAMP < cdd_rec.m_ts;
END LOOP;
---
END;

And here is the output from tkprof.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.16          0          0          0           0
Execute      1      3.51       3.79          1          0          9           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      3.55       3.96          1          0          9           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 ()

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     34332        0.23        343.89
  log file sync                                   1        0.00          0.00
  SQL*Net break/reset to client                   2        0.02          0.02
  SQL*Net message to client                       1        0.00          0.00
********************************************************************************

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID VIEW$

********************************************************************************

SELECT SOR_ACCOUNT_NUMBER,CHANGED_COLUMN_NAME,COMMON_ACCOUNT_ID,
CDM.PERSON_SEQUENCE ,MAX(SOR_TIMESTAMP) M_TS
FROM
V_CIS_CHANGE_DETAIL CDM WHERE CDM.FK_SYSTEM_INFO_ID = 11 AND
CDM.COMMON_ACCOUNT_ID IS NOT NULL AND CDM.IS_PROCESSED_IND IS NULL GROUP BY
SOR_ACCOUNT_NUMBER,CDM.CHANGED_COLUMN_NAME,CDM.COMMON_ACCOUNT_ID,
CDM.PERSON_SEQUENCE HAVING COUNT(1) > 1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      2      0.15       0.14          0          0          0           0
Fetch     8399    810.98     980.35    1889459    1265101       1343        8399
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8402    811.14     980.51    1889459    1265101       1343        8399

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 () (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0  FILTER
      0  SORT (GROUP BY)
      0  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'CIS_CHANGE_DETAIL'


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                     159076        0.19        212.47
  db file sequential read                       957        0.06          0.35
  direct path write                            5580        0.17         28.82
  direct path read                            77933        0.22        138.59
********************************************************************************

UPDATE V_CIS_CHANGE_DETAIL CD SET CD.BYPASSED_IND =1,IS_PROCESSED_IND = 1
WHERE
CD.IS_PROCESSED_IND IS NULL AND CD.CHANGED_COLUMN_NAME = :B4 AND
CD.FK_SYSTEM_INFO_ID = 11 AND CD.COMMON_ACCOUNT_ID = :B3 AND
CD.PERSON_SEQUENCE = :B2 AND CD.SOR_TIMESTAMP < :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute   8399   4334.83   13185.97      37240   98249311  165094851        9360
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8400   4334.84   13185.97      37240   98249311  165094851        9360

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 () (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT   GOAL: CHOOSE
      0  UPDATE OF 'CIS_CHANGE_DETAIL'
      0  TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'CIS_CHANGE_DETAIL'
      0  INDEX (RANGE SCAN) OF 'IDX_CISCHGDTL_CCN_FSII_CAI_IPI' (NON-UNIQUE)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     37207        0.17        332.11
  log file sync                             9547891        0.35       8552.20
  db file parallel read                          12        0.06          0.18
  log file switch completion                   2298        0.11         28.64
  buffer busy waits                              63        0.09          0.85
  undo segment extension                        178        0.00          0.00
  latch free                                    793        0.02          3.12
  db file scattered read                          5        0.00          0.00
  rdbms ipc reply                               221        0.00          0.28
********************************************************************************
Q1. Why do I see stats for 3 queries? I would expect one for the cursor and one for the updates.
Q2. In the wait events for the update, it looks like the biggest wait is log file sync. I thought that was caused by a commit,
but we're not committing in the loop. Is there a "hidden" commit in there?

Tom Kyte
May 15, 2005 - 10:47 am UTC

q1) you don't share with us the first query, so no guesses here. You've sliced and diced this together.

q2) there is something hugely strange going on there -- do you see those IO's? how could that be? what is really going on there - triggers, indexes, what features of the database are you using. something else is very much happening here. are you missing the sys recursive SQL? (recursive SQL commits, just wrote about this on a different site yesterday:

http://www.phpbbserver.com/phpbb/viewtopic.php?t=44&mforum=dizwellforum#468

I'd be much much MUCH more concerned about the logical IO there, that is *huge* -- why????

tkprof

daniel, May 15, 2005 - 11:53 am UTC

q2. there are 7 indexes on that table, no triggers.
q1. there was no first query... that's why I'm confused.
I had the developer enable the trace and then run the pl/sql. Here is the full tkprof file.


TKPROF: Release 9.2.0.6.0 - Production on Fri May 13 12:08:44 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: dcmt_ora_3376.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

alter session set events '10046 trace name context forever, level 12'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 29 ()

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       50.17         50.17
********************************************************************************

DECLARE
CURSOR cdd IS
SELECT sor_account_number,changed_column_name,common_account_id,cdm.PERSON_SEQUENCE ,MAX(sor_timestamp) m_ts
FROM v_cis_change_detail cdm
WHERE cdm.fk_system_info_id = 11
AND cdm.common_account_id IS NOT NULL
AND cdm.is_processed_ind IS NULL
GROUP BY sor_account_number,cdm.changed_column_name,cdm.common_account_id,cdm.PERSON_SEQUENCE
HAVING COUNT(1) > 1;
BEGIN
---
FOR cdd_rec IN cdd LOOP
UPDATE v_cis_change_detail cd
SET cd.bypassed_ind =1,is_processed_ind = 1
WHERE cd.is_processed_ind IS NULL
AND cd.changed_column_name = cdd_rec.changed_column_name
AND cd.fk_system_info_id = 11
AND cd.common_account_id = cdd_rec.common_account_id
AND cd.PERSON_SEQUENCE = cdd_rec.PERSON_SEQUENCE
AND cd.SOR_TIMESTAMP < cdd_rec.m_ts;
END LOOP;
---
END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.16          0          0          0           0
Execute      1      3.51       3.79          1          0          9           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      3.55       3.96          1          0          9           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 ()

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     34332        0.23        343.89
  log file sync                                   1        0.00          0.00
  SQL*Net break/reset to client                   2        0.02          0.02
  SQL*Net message to client                       1        0.00          0.00
********************************************************************************

select user#
from
sys.user$ where name = 'OUTLN'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID USER$
      1  INDEX UNIQUE SCAN I_USER1 (object id 44)

********************************************************************************

select text
from
view$ where rowid=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          4          0           2

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID VIEW$

********************************************************************************

SELECT SOR_ACCOUNT_NUMBER,CHANGED_COLUMN_NAME,COMMON_ACCOUNT_ID,
CDM.PERSON_SEQUENCE ,MAX(SOR_TIMESTAMP) M_TS
FROM
V_CIS_CHANGE_DETAIL CDM WHERE CDM.FK_SYSTEM_INFO_ID = 11 AND
CDM.COMMON_ACCOUNT_ID IS NOT NULL AND CDM.IS_PROCESSED_IND IS NULL GROUP BY
SOR_ACCOUNT_NUMBER,CDM.CHANGED_COLUMN_NAME,CDM.COMMON_ACCOUNT_ID,
CDM.PERSON_SEQUENCE HAVING COUNT(1) > 1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      2      0.15       0.14          0          0          0           0
Fetch     8399    810.98     980.35    1889459    1265101       1343        8399
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8402    811.14     980.51    1889459    1265101       1343        8399

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 () (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0  FILTER
      0  SORT (GROUP BY)
      0  TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'CIS_CHANGE_DETAIL'


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                     159076        0.19        212.47
  db file sequential read                       957        0.06          0.35
  direct path write                            5580        0.17         28.82
  direct path read                            77933        0.22        138.59
********************************************************************************

UPDATE V_CIS_CHANGE_DETAIL CD SET CD.BYPASSED_IND =1,IS_PROCESSED_IND = 1
WHERE
CD.IS_PROCESSED_IND IS NULL AND CD.CHANGED_COLUMN_NAME = :B4 AND
CD.FK_SYSTEM_INFO_ID = 11 AND CD.COMMON_ACCOUNT_ID = :B3 AND
CD.PERSON_SEQUENCE = :B2 AND CD.SOR_TIMESTAMP < :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute   8399   4334.83   13185.97      37240   98249311  165094851        9360
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8400   4334.84   13185.97      37240   98249311  165094851        9360

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29 () (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT   GOAL: CHOOSE
      0  UPDATE OF 'CIS_CHANGE_DETAIL'
      0  TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'CIS_CHANGE_DETAIL'
      0  INDEX (RANGE SCAN) OF 'IDX_CISCHGDTL_CCN_FSII_CAI_IPI' (NON-UNIQUE)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     37207        0.17        332.11
  log file sync                             9547891        0.35       8552.20
  db file parallel read                          12        0.06          0.18
  log file switch completion                   2298        0.11         28.64
  buffer busy waits                              63        0.09          0.85
  undo segment extension                        178        0.00          0.00
  latch free                                    793        0.02          3.12
  db file scattered read                          5        0.00          0.00
  rdbms ipc reply                               221        0.00          0.28
********************************************************************************

select file#
from
file$ where ts#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     1881      0.44       0.39          0          0          0           0
Execute   1881      0.57       0.56          0          0          0           0
Fetch     7524      0.39       0.41          0      13167          0        5643
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    11286      1.40       1.37          0      13167          0        5643

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
3 TABLE ACCESS BY INDEX ROWID FILE$
3 INDEX RANGE SCAN I_FILE2 (object id 42)

********************************************************************************

select name
from
undo$ where file#=:1 and block#=:2 and ts#=:3 and status$ != 1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 223 0.04 0.01 0 0 0 0
Execute 223 0.05 0.07 0 0 0 0
Fetch 223 0.07 0.02 4 669 0 223
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 669 0.16 0.11 4 669 0 223

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL UNDO$


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.00 0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.04 0.16 0 0 0 0
Execute 2 3.51 3.80 1 0 9 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 3.55 3.96 1 0 9 0

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 1 50.17 50.17
db file sequential read 34332 0.23 343.89
log file sync 1 0.00 0.00
SQL*Net break/reset to client 2 0.02 0.02


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2109 0.50 0.42 0 0 0 0
Execute 10508 4335.60 13186.76 37240 98249311 165094851 9360
Fetch 16149 811.44 980.79 1889463 1278943 1343 14268
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 28766 5147.54 14167.98 1926703 99528254 165096194 23628

Misses in library cache during parse: 5

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 159081 0.19 212.48
db file sequential read 38168 0.17 332.46
direct path write 5580 0.17 28.82
direct path read 77933 0.22 138.59
log file sync 9547891 0.35 8552.20
db file parallel read 12 0.06 0.18
log file switch completion 2298 0.11 28.64
buffer busy waits 63 0.09 0.85
undo segment extension 178 0.00 0.00
latch free 793 0.02 3.12
rdbms ipc reply 221 0.00 0.28

4 user SQL statements in session.
2107 internal SQL statements in session.
2111 SQL statements in session.
2 statements EXPLAINed in this session.
********************************************************************************
Trace file: dcmt_ora_3376.trc
Trace file compatibility: 9.02.00
Sort options: default

1 session in tracefile.
4 user SQL statements in trace file.
2107 internal SQL statements in trace file.
2111 SQL statements in trace file.
8 unique SQL statements in trace file.
2 SQL statements EXPLAINed using schema:
OPS$ORACLE.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
10026753 lines in trace file.




Tom Kyte
May 15, 2005 - 12:15 pm UTC

so there is recursive sql in there (sql we execute to execute your sql). This happens all of the time.

it would account for part of the log file syncs, but there is still something fishy happening here.

the numbers are way too high to be accounted for, and we don't see any rows associated with our row source operations -- meaning this trace was tkprof'ed before the session closed all cursors.

you should

a) enable trace
b) run things to be traced
c) EXIT session
d) then run tkprof.
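
The steps above can be sketched as follows (a minimal sketch; the trace file name is taken from the report further down, and the tkprof options shown are just one reasonable choice):

```sql
-- a) enable extended SQL trace (10046 level 12: binds + waits)
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- b) run the things to be traced
-- ... your application SQL here ...

-- c) EXIT the session, so all cursors are closed and the final
--    STAT (row source) lines are written to the trace file
exit

-- d) then, from the OS prompt, format the trace, e.g.:
--    $ tkprof dcmt_ora_3376.trc report.txt sys=no sort=exeela
```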


I can say your sort area size:

direct path write 5580 0.17 28.82
direct path read 77933 0.22 138.59

or pga_aggregate_target looks to be set far too low -- you spent a lot of time doing reads and writes to sort data.
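
A quick way to see how the work areas are currently configured and how often sorts are spilling to disk (a sketch; the parameter and statistic names are standard in 9i, but verify on your release):

```sql
-- how much memory is available for sorts and hashes
select name, value
  from v$parameter
 where name in ('workarea_size_policy', 'pga_aggregate_target', 'sort_area_size');

-- how many work area executions completed in memory (optimal)
-- versus spilled to temp (onepass / multipass)
select name, value
  from v$sysstat
 where name like 'workarea executions%';
```

Lots of onepass/multipass executions relative to optimal ones would confirm the direct path read/write waits seen above.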

How to interpret this

Mo, May 16, 2005 - 11:14 pm UTC

Hi Tom,

I have gathered the following info based on your advice above.
Thanks for your valuable time.

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 1.21 1.22 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 857 3849.77 3964.05 18820 10367618 0 12830
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 859 3850.98 3965.28 18820 10367618 0 12830


Rows Row Source Operation
------- ---------------------------------------------------
12830 SORT GROUP BY (cr=10367618 r=18820 w=0 time=922823764 us)
16488 NESTED LOOPS (cr=10367618 r=18820 w=0 time=3961895689 us)
16488 NESTED LOOPS (cr=276960 r=18819 w=0 time=203754220 us)
16488 NESTED LOOPS (cr=243982 r=18819 w=0 time=203353162 us)
21432 NESTED LOOPS (cr=179684 r=15356 w=0 time=165868828 us)
21432 NESTED LOOPS OUTER (cr=179682 r=15355 w=0 time=165627439 us)
21432 NESTED LOOPS OUTER (cr=179680 r=15354 w=0 time=165350958 us)
21432 NESTED LOOPS (cr=179680 r=15354 w=0 time=165214807 us)
21432 NESTED LOOPS OUTER (cr=179678 r=15353 w=0 time=164816695 us)
21432 NESTED LOOPS OUTER (cr=179678 r=15353 w=0 time=164722274 us)
21432 NESTED LOOPS (cr=179678 r=15353 w=0 time=164409199 us)
21432 NESTED LOOPS OUTER (cr=158244 r=15353 w=0 time=163455504 us)
21432 NESTED LOOPS (cr=158242 r=15352 w=0 time=162840664 us)
3068 NESTED LOOPS (cr=53503 r=847 w=0 time=9561402 us)
3068 NESTED LOOPS (cr=47365 r=847 w=0 time=9457819 us)
3068 NESTED LOOPS (cr=41227 r=830 w=0 time=9115754 us)
3068 NESTED LOOPS (cr=38157 r=830 w=0 time=9026327 us)
8591 NESTED LOOPS (cr=16676 r=829 w=0 time=8500419 us)
3068 NESTED LOOPS OUTER (cr=7338 r=824 w=0 time=8178670 us)
3068 HASH JOIN (cr=1200 r=824 w=0 time=7790053 us)
722 TABLE ACCESS BY INDEX ROWID OBJ#(33438) (cr=97 r=71 w=0 time=1023156 us)
724 NESTED LOOPS (cr=35 r=28 w=0 time=358721 us)
1 NESTED LOOPS (cr=30 r=23 w=0 time=259841 us)
1 TABLE ACCESS FULL OBJ#(33662) (cr=28 r=23 w=0 time=259722 us)
1 INDEX UNIQUE SCAN OBJ#(34078) (cr=2 r=0 w=0 time=70 us)(object id 34078)
722 INDEX RANGE SCAN OBJ#(33494) (cr=5 r=5 w=0 time=98175 us)(object id 33494)
3068 INLIST ITERATOR (cr=1103 r=753 w=0 time=6680501 us)
3068 TABLE ACCESS BY INDEX ROWID OBJ#(33834) (cr=1103 r=753 w=0 time=6671612 us)
3068 INDEX RANGE SCAN OBJ#(661539) (cr=24 r=14 w=0 time=175022 us)(object id 661539)
3068 INDEX RANGE SCAN OBJ#(32878) (cr=6138 r=0 w=0 time=345864 us)(object id 32878)
8591 TABLE ACCESS BY INDEX ROWID OBJ#(33438) (cr=9338 r=5 w=0 time=282363 us)
8591 INDEX RANGE SCAN OBJ#(33469) (cr=3070 r=4 w=0 time=154274 us)(object id 33469)
3068 TABLE ACCESS BY INDEX ROWID OBJ#(33662) (cr=21481 r=1 w=0 time=448287 us)
12888 INDEX RANGE SCAN OBJ#(33712) (cr=8593 r=1 w=0 time=252370 us)(object id 33712)
3068 INDEX UNIQUE SCAN OBJ#(34078) (cr=3070 r=0 w=0 time=48881 us)(object id 34078)
3068 TABLE ACCESS BY INDEX ROWID OBJ#(33845) (cr=6138 r=17 w=0 time=314275 us)
3068 INDEX UNIQUE SCAN OBJ#(33908) (cr=3070 r=2 w=0 time=61582 us)(object id 33908)
3068 TABLE ACCESS BY INDEX ROWID OBJ#(33845) (cr=6138 r=0 w=0 time=74700 us)
3068 INDEX UNIQUE SCAN OBJ#(33908) (cr=3070 r=0 w=0 time=33565 us)(object id 33908)
21432 TABLE ACCESS BY INDEX ROWID OBJ#(34028) (cr=104739 r=14505 w=0 time=153194216 us)
112891 INDEX RANGE SCAN OBJ#(2284050) (cr=6672 r=2196 w=0 time=24675213 us)(object id 2284050)
0 INDEX RANGE SCAN OBJ#(34966) (cr=2 r=1 w=0 time=239758 us)(object id 34966)
21432 TABLE ACCESS BY INDEX ROWID OBJ#(34290) (cr=21434 r=0 w=0 time=752365 us)
21432 INDEX UNIQUE SCAN OBJ#(34341) (cr=2 r=0 w=0 time=250519 us)(object id 34341)
0 TABLE ACCESS BY INDEX ROWID OBJ#(34363) (cr=0 r=0 w=0 time=41654 us)
0 INDEX RANGE SCAN OBJ#(34421) (cr=0 r=0 w=0 time=9338 us)(object id 34421)
0 INDEX UNIQUE SCAN OBJ#(34607) (cr=0 r=0 w=0 time=8596 us)(object id 34607)
21432 INDEX UNIQUE SCAN OBJ#(34122) (cr=2 r=1 w=0 time=169574 us)(object id 34122)
0 INDEX UNIQUE SCAN OBJ#(34731) (cr=0 r=0 w=0 time=7224 us)(object id 34731)
1561 INDEX UNIQUE SCAN OBJ#(34360) (cr=2 r=1 w=0 time=58531 us)(object id 34360)
21432 INDEX UNIQUE SCAN OBJ#(34535) (cr=2 r=1 w=0 time=101785 us)(object id 34535)
16488 TABLE ACCESS BY INDEX ROWID OBJ#(33979) (cr=64298 r=3463 w=0 time=37304862 us)
21432 INDEX UNIQUE SCAN OBJ#(34017) (cr=42866 r=741 w=0 time=8499650 us)(object id 34017)
16488 INDEX UNIQUE SCAN OBJ#(34017) (cr=32978 r=0 w=0 time=239011 us)(object id 34017)
16488 INDEX FULL SCAN OBJ#(33939) (cr=10090658 r=1 w=0 time=3758024395 us)(object id 33939)

Q1) How do I read the explain plan ? Top-down or Bottom-Up.
Q2) From your experience, what should be the first thing to look at in the plan? The highest time consumed per execution, the highest cr, or FTS?
Q3) What is the bottleneck from the above explain plan ?
INDEX FULL SCAN OBJ#(33939) (cr=10090658 ..time=3758024395) and 12830 SORT GROUP BY (cr=10367618..time=922823764) consumed the highest time. Is this the problematic area we should look at?
Q4) What is the relationship between the following items? How do I read them?
16488 TABLE ACCESS BY INDEX ROWID OBJ#(33979) (cr=64298 r=3463 w=0 time=37304862 us)
21432 INDEX UNIQUE SCAN OBJ#(34017) (cr=42866 r=741 w=0 time=8499650 us)(object id 34017)
16488 INDEX UNIQUE SCAN OBJ#(34017) (cr=32978 r=0 w=0 time=239011 us)(object id 34017)
16488 INDEX FULL SCAN OBJ#(33939) (cr=10090658 r=1 w=0 time=3758024395 us)(object id 33939)

Thanks. Do you have any plan to come out another book mainly for SQL tuning?

Rgds
Mo




Tom Kyte
May 17, 2005 - 8:20 am UTC

q1) inside out :) if you have Effective Oracle by Design, I walk through "reading a plan".

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:231814117467#7344298017927 <code>

q2) depends.

I would compare the actuals to the estimated row counts (the autotrace traceonly output to the tkprof row source).

I would look for things like:

3068 TABLE ACCESS BY INDEX ROWID OBJ#(33662) (cr=21481 r=1
12888 INDEX RANGE SCAN OBJ#(33712) (cr=8593 r=1 w=0

that says we found 12,888 rows in the index, but when we went to the table and applied the rest of the predicate, we only needed 3,068 of those rows. Perhaps we should have added one more column to the end of our index to avoid those extra 9,000 table accesses.
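
As a sketch of that fix (the table, index, and column names here are hypothetical, not from the poster's schema):

```sql
-- before: the index covers only part of the predicate, so we visit
-- the table 12,888 times and then throw most of those rows away
create index t_idx on t (status);

-- after: adding the filtering column to the end of the index lets us
-- discard the non-matching rows in the index itself, before the
-- (much more expensive) table access by rowid
drop index t_idx;
create index t_idx on t (status, region);
```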


q3) there is a lot of logical IO happening. I'd verify the actuals against the estimates and make sure we should really be doing all of those nested loops with indexes.

q4) 16488 TABLE ACCESS BY INDEX ROWID OBJ#(33979) (cr=64298 r=3463 w=0 ...

    16488                        -- rows flowing out of that step
    TABLE ACCESS BY INDEX ROWID  -- the operation performed
    cr=64298                     -- logical IOs done on this step
    r=3463                       -- physical IOs done on this step
    w=0                          -- physical writes (to temp, for example) done on this step

Where are the Parallel Slave trace files?

Script Kid, May 25, 2005 - 3:41 pm UTC

Tom,

You have replied to "A Reader" in the Mar 09 2005 thread that "Followup: the stats go into the pq slave traces." Do they also go to USER_DUMP_DEST? Do they have any different extension or any such characteristic that makes them different?

Because, I do the trace (with parallel) and see no other files being generated, except the main(server processes)which does not contain any parallel slave information.

Please note, my access to the UNIX box is restricted and I can see the files on a webpage and download them as necessary. Maybe the PQ slave traces are different and hence missing from my webpage?



Tom Kyte
May 25, 2005 - 7:42 pm UTC

pq slaves are background processes -- they trace to background_dump_dest, not user_dump_dest.
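
To find the directory where those slave traces land (a minimal sketch; background_dump_dest is a standard instance parameter):

```sql
-- PQ slave trace files are written here, not to user_dump_dest
select value
  from v$parameter
 where name = 'background_dump_dest';
```

If your web page only lists the user dump destination, that would explain why the slave traces appear to be missing.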



tracing db job

A reader, June 08, 2005 - 7:05 am UTC

Hi Tom,
I need to trace a program which runs as a database job. The job is created dynamically.
job_queue_processes is set to 6.
The thing is, the background process is not running permanently, so I won't know the process pid until the job runs.
Is there any way to do this trace apart from making the change in the program?

version is 9.2
thanks

Tom Kyte
June 08, 2005 - 9:05 am UTC

have the job turn trace on and off itself.
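
For example, the procedure the job runs could bracket its own work (a sketch; the procedure name is hypothetical):

```sql
create or replace procedure my_nightly_job
as
begin
    -- turn extended tracing on in this job's own session
    execute immediate
      'alter session set events ''10046 trace name context forever, level 12''';

    -- ... the work to be traced goes here ...

    -- and turn it off again before the job finishes
    execute immediate
      'alter session set events ''10046 trace name context off''';
end;
/
```

Since the job process is the session doing the work, no pid hunting is needed.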


And one more Extended Trace for your expert review

Arul, August 10, 2005 - 10:42 am UTC

Hi Tom,
Please go through this and let me know your expert comments.

We are testing the performance of an SQL in test & production, it is giving different timings.

In test, it completes in 1 second; in production, around 5 to 8 seconds. What is surprising to me is: the execution plan is the same, the test environment is actually a copy (clone!) of the production database, and both are running on almost identical hardware / software.

In fact, the production instance has MORE resources in terms of DB buffer cache / SGA / etc.

Of course, I do agree that the physical arrangement of the data may cause this delay in time, but I would like to know your views.

Execution plan:

Test:

SELECT DISTINCT A.EMPLID , A.ROLEUSER , B.SOLD_TO_CUST_ID ,
( SELECT C.NAME1 FROM PS_CUSTOMER C
WHERE C.CUST_ID = B.SOLD_TO_CUST_ID) Name1
FROM
( SELECT A.EMPLID , A.ROLEUSER , A.BUSINESS_UNIT
FROM PS_E1_SECROLUSERBU A
WHERE A.E1_SEC_COMPANY = 'N'
AND A.E1_SEC_ALL = 'N'
UNION
SELECT B.EMPLID , B.ROLEUSER , C.E1_FROM_BU
FROM PS_E1_SECROLUSERBU B , PS_E1_COMP_REGION C
WHERE B.E1_SEC_COMPANY = 'Y'
AND B.E1_SEC_ALL = 'N'
AND
EXISTS ( SELECT 'X' FROM PS_JOB D , PS_E1_COMP_REGION E
WHERE D.EFFDT = ( SELECT MAX(D_ED.EFFDT)
FROM PS_JOB D_ED
WHERE D.EMPLID = D_ED.EMPLID
AND D.EMPL_RCD = D_ED.EMPL_RCD
AND D_ED.EFFDT <= SYSDATE)
AND D.EFFSEQ = ( SELECT MAX(D_ES.EFFSEQ)
FROM PS_JOB D_ES
WHERE D.EMPLID = D_ES.EMPLID
AND D.EMPL_RCD = D_ES.EMPL_RCD
AND D.EFFDT = D_ES.EFFDT)
AND D.EMPLID = B.EMPLID
AND D.BUSINESS_UNIT = E.E1_FROM_BU
AND E.E1_COMP_REGION = C.E1_COMP_REGION)
UNION
SELECT F.EMPLID , F.ROLEUSER , G.E1_FROM_BU
FROM PS_E1_SECROLUSERBU F , PS_E1_COMP_REGION G
WHERE F.E1_SEC_ALL = 'Y') A
, PS_CA_CONTR_HDR B
WHERE A.BUSINESS_UNIT =B.BUSINESS_UNIT
AND A.EMPLID='02167'
AND A.ROLEUSER= 'E1_DEPT_MGR';

First Time: Elapsed: 00:00:01.21
Second Time: Elapsed: 00:00:01.54

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1041 Card=1860 Bytes
=76260)

1 0 SORT (UNIQUE) (Cost=1041 Card=1860 Bytes=76260)
2 1 HASH JOIN (Cost=25 Card=143196 Bytes=5871036)
3 2 VIEW (Cost=15 Card=482 Bytes=13496)
4 3 SORT (UNIQUE) (Cost=15 Card=482 Bytes=15526)
5 4 UNION-ALL
6 5 INDEX (RANGE SCAN) OF 'PS_E1_SECROLUSERBU' (UNIQ
UE) (Cost=2 Card=1 Bytes=27)

7 5 NESTED LOOPS (Cost=4 Card=41 Bytes=4059)
8 7 MERGE JOIN (CARTESIAN) (Cost=3 Card=440 Bytes=
32120)

9 8 MERGE JOIN (CARTESIAN) (Cost=2 Card=1 Bytes=
47)

10 9 TABLE ACCESS (BY INDEX ROWID) OF 'PS_JOB'
(Cost=1 Card=1 Bytes=25)

11 10 INDEX (RANGE SCAN) OF 'PSAJOB' (NON-UNIQ
UE) (Cost=2 Card=1)

12 11 SORT (AGGREGATE)
13 12 FIRST ROW (Cost=2 Card=11 Bytes=176)
14 13 INDEX (RANGE SCAN (MIN/MAX)) OF 'P
SAJOB' (NON-UNIQUE) (Cost=2 Card=11)

15 11 SORT (AGGREGATE)
16 15 FIRST ROW (Cost=2 Card=1 Bytes=19)
17 16 INDEX (RANGE SCAN (MIN/MAX)) OF 'P
SAJOB' (NON-UNIQUE) (Cost=2 Card=1)

18 9 SORT (JOIN) (Cost=1 Card=1 Bytes=22)
19 18 INDEX (RANGE SCAN) OF 'PS_E1_SECROLUSERB
U' (UNIQUE) (Cost=2 Card=1 Bytes=22)

20 8 SORT (JOIN) (Cost=2 Card=440 Bytes=11440)
21 20 INDEX (FAST FULL SCAN) OF 'PS_E1_COMP_REGI
ON' (UNIQUE) (Cost=1 Card=440 Bytes=11440)

22 7 INDEX (UNIQUE SCAN) OF 'PS_E1_COMP_REGION' (UN
IQUE)

23 5 MERGE JOIN (CARTESIAN) (Cost=2 Card=440 Bytes=11
440)

24 23 INDEX (RANGE SCAN) OF 'PS_E1_SECROLUSERBU' (UN
IQUE) (Cost=2 Card=1 Bytes=20)

25 23 SORT (JOIN) (Cost=1 Card=440 Bytes=2640)
26 25 INDEX (FULL SCAN) OF 'PSBE1_COMP_REGION' (NO
N-UNIQUE) (Cost=3 Card=440 Bytes=2640)

27 2 INDEX (FAST FULL SCAN) OF 'PSBCA_CONTR_HDR' (UNIQUE) (
Cost=9 Card=30600 Bytes=397800)


Now, in production the same above query with the same execution plan!!:

Elapsed: 00:00:06.57
Elapsed: 00:00:05.99


I will be happy to send you the detailed trace (extended trace level 12), please let me know your email id.

Thanks a lot!!

Tom Kyte
August 10, 2005 - 12:46 pm UTC

You can analyze this just as well as I can. My suggestion is to review the 10046 level 12 trace using tkprof and see what is different (don't post them here, you can look at them and see "what is different")



tim in Raw Trace File

cg, August 10, 2005 - 4:25 pm UTC

This is a snippet of a raw trace file

PARSING IN CURSOR #1 len=130 dep=0 uid=104 oct=2 lid=104 tim=110724287504 hv=3717670596 ad='6befee10'
Insert into SAMS.LAB_TYPE2
(ACTIVE_IND, LAB_TYPE_NAME, LAB_TYPE_SEQ, PRELOADED_IND)
Values
('Y', '24 HOUR URINE', 236, 'Y')
END OF STMT
PARSE #1:c=0,e=609,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=110724287494
EXEC #1:c=0,e=326,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=4,tim=110724305704

What is 11072430 in seconds?

Is it 0.11072430?

I'm trying to figure out the total time from the start of one insert statement to the execution of the last.

110724978592 - 110724305704 = 0.672888 seconds ??

There is some seed data that is only 14 rows. I made one version an external pipe-delimited table and the other 14 insert statements, just to see what the difference really was. The stats are from tracing the insert statements.

Tom Kyte
August 11, 2005 - 9:12 am UTC

it is time measured from some arbitrary epoch -- a fancy way of saying "just a ticker; subtract later ticks from earlier ticks and you have elapsed time"

You would have to go to the head of the trace to see when tracing started and relate that to the first tim value.

and it depends on the version of the database whether tim is measured in 1/100ths (8i and before) or 1/1,000,000ths (9i and above) of a second.

in 8i, a delta of 672888 would have been 6,728.88 seconds
in 9i and above, it works out to 0.672888 seconds


(Note: tkprof will make this file readable by you and me very nicely)
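
The subtraction the poster did can be checked directly (a sketch using the poster's own tim values; in 9i and up, tim is in microseconds, so the delta divides by 1,000,000):

```sql
-- difference between two tim values from a 9i trace, in seconds
select (110724978592 - 110724305704) / 1000000 as elapsed_seconds
  from dual;
-- delta is 672888 microseconds, i.e. 0.672888 seconds
```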

Thanks Tom

cg, August 11, 2005 - 9:45 am UTC

And I used tkprof on the raw trace file, but I ASSUME the elapsed times were too small, because the resulting tkprof file showed all the elapsed times as 0.00.

Except for the statements where I set sql_trace to true and set transaction to read write.

All insert statements were 0.00. See below
<snippet>
Insert into SAMS.LAB_TYPE2
(ACTIVE_IND, LAB_TYPE_NAME, LAB_TYPE_SEQ, PRELOADED_IND)
Values
('Y', '24 HOUR URINE', 236, 'Y')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 4 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 4 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 104
********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 14 0.00 0.00 0 0 0 0
Execute 14 0.00 0.00 0 0 0 14
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 28 0.00 0.00 0 0 0 14

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 104
********************************************************************************

Insert into SAMS.LAB_TYPE2
(ACTIVE_IND, LAB_TYPE_NAME, LAB_TYPE_SEQ, PRELOADED_IND)
Values
('Y', 'CHEM 12', 237, 'Y')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 3 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 3 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 104
<snippet>



Tom Kyte
August 11, 2005 - 6:05 pm UTC

that just means they happened so fast that a human would not notice.

I pray that you don't really have code that builds inserts like that, that would be *a really hugely bad idea*

My goal was .....

CG, August 19, 2005 - 3:59 pm UTC

.... to see if I was doing something "really hugely bad"

by not transforming those insert statements to SQL*Loader, an external table, etc.

This trace file showed me that the difference in time between an external table of this data and a .sql file with these insert statements is " ... not noticeable by the human eye..."

Are there any other reasons, other than performance, that I would not want to have individual insert statements in a file?

Thank you

Tom Kyte
August 20, 2005 - 4:41 pm UTC

performance.
scalability.
wipes out my shared pool.
I don't like it.

I wouldn't let you do it more than once during an install at best. If you tried to do it every day, I would stop you.
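
The external-table alternative the poster mentioned can be sketched like this, using the column names visible in the trace above (the directory path and data file name are assumptions):

```sql
-- one statement, one parse -- no shared pool churn from 14 literal inserts
create directory load_dir as '/tmp';

create table lab_type2_ext (
    active_ind     varchar2(1),
    lab_type_name  varchar2(50),
    lab_type_seq   number,
    preloaded_ind  varchar2(1)
)
organization external (
    type oracle_loader
    default directory load_dir
    access parameters (
        records delimited by newline
        fields terminated by '|'
    )
    location ('lab_types.dat')
);

insert into sams.lab_type2
select * from lab_type2_ext;
```

For 14 rows the timing difference is invisible, as the trace showed; the shared pool and scalability arguments are what make this the better habit.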


A reader, August 24, 2005 - 9:43 am UTC

Tom,

This is puzzling. I've had a script run the same query twice; it runs very slowly on the first round but, more importantly, it seems to do a lot of work doing the index scan. The stats are the same in the tkprof. The table being queried is an OLTP one which is constantly in use. Is it right to suspect that the index is being re-organised, and hence the huge difference?

Cheers

Ravi

Tkprof output:

FAST:
--------------------------------------------------------------------------------

SELECT NVL(SDN_SCPS_RES,0)
FROM
STOCK_DENSITY WHERE SDN_ID = (SELECT MAX(SDN_ID) FROM STOCK_DENSITY WHERE
SDN_YEAR = :B2 AND SDN_PN_ID = :B1)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 7 0.02 0.02 0 1456 0 0
Fetch 7 0.00 0.00 0 28 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 15 0.02 0.02 0 1484 0 7

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 562 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
7 TABLE ACCESS BY INDEX ROWID STOCK_DENSITY
7 INDEX UNIQUE SCAN SDN_PK (object id 373235)


SLOW tkprof output:
--------------------------------------------------------------------------------

SELECT NVL(SDN_SCPS_RES,0)
FROM
STOCK_DENSITY WHERE SDN_ID = (SELECT MAX(SDN_ID) FROM STOCK_DENSITY WHERE
SDN_YEAR = :B2 AND SDN_PN_ID = :B1)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 7 0.14 2.96 208 1456 0 0
Fetch 7 0.00 1.55 8 28 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 15 0.14 4.51 216 1484 0 7

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 562 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
7 TABLE ACCESS BY INDEX ROWID STOCK_DENSITY
7 INDEX UNIQUE SCAN SDN_PK (object id 373235)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 216 0.97 4.46
------------------------------------------------------------------------------
--------------

RAW TRACE OUTPUT:
=====================
PARSING IN CURSOR #11 len=141 dep=1 uid=562 oct=3 lid=562 tim=2617712548795 hv=3
842628271 ad='91959910'
SELECT NVL(SDN_SCPS_RES,0) FROM STOCK_DENSITY WHERE SDN_ID = (SELECT MAX(SDN_ID)
FROM STOCK_DENSITY WHERE SDN_YEAR = :B2 AND SDN_PN_ID = :B1)
END OF STMT
PARSE #11:c=0,e=89,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=2617712548788
BINDS #11:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=48 offset
=0
bfp=ffffffff7cf74ee0 bln=22 avl=02 flg=05
value=2000
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=0 offset=
24
bfp=ffffffff7cf74ef8 bln=22 avl=04 flg=01
value=117853
EXEC #11:c=0,e=2812,p=0,cr=208,cu=0,mis=0,r=0,dep=1,og=3,tim=2617712551841
FETCH #11:c=0,e=56,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=3,tim=2617712552016
=====================
STAT #11 id=1 cnt=7 pid=0 pos=1 obj=373234 op='TABLE ACCESS BY INDEX ROWID STOCK
_DENSITY '
STAT #11 id=2 cnt=7 pid=1 pos=1 obj=373235 op='INDEX UNIQUE SCAN SDN_PK '

=====================
PARSING IN CURSOR #11 len=141 dep=1 uid=562 oct=3 lid=562 tim=2617685098932 hv=3
842628271 ad='91959910'
SELECT NVL(SDN_SCPS_RES,0) FROM STOCK_DENSITY WHERE SDN_ID = (SELECT MAX(SDN_ID)
FROM STOCK_DENSITY WHERE SDN_YEAR = :B2 AND SDN_PN_ID = :B1)
END OF STMT
PARSE #11:c=0,e=93,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=2617685098926
BINDS #11:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=48 offset
=0
bfp=ffffffff7cf74ee0 bln=22 avl=02 flg=05
value=2000
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=0 offset=
24
bfp=ffffffff7cf74ef8 bln=22 avl=04 flg=01
value=117853
WAIT #11: nam='db file sequential read' ela= 9572 p1=90 p2=79353 p3=1
WAIT #11: nam='db file sequential read' ela= 10998 p1=90 p2=80470 p3=1
WAIT #11: nam='db file sequential read' ela= 10037 p1=90 p2=138236 p3=1
WAIT #11: nam='db file sequential read' ela= 11295 p1=83 p2=23538 p3=1
WAIT #11: nam='db file sequential read' ela= 1839 p1=83 p2=23875 p3=1
WAIT #11: nam='db file sequential read' ela= 215088 p1=83 p2=24257 p3=1
WAIT #11: nam='db file sequential read' ela= 114602 p1=83 p2=24298 p3=1
WAIT #11: nam='db file sequential read' ela= 92144 p1=83 p2=27347 p3=1
WAIT #11: nam='db file sequential read' ela= 20808 p1=83 p2=27348 p3=1
WAIT #11: nam='db file sequential read' ela= 32513 p1=83 p2=27694 p3=1
WAIT #11: nam='db file sequential read' ela= 1382 p1=83 p2=27697 p3=1
WAIT #11: nam='db file sequential read' ela= 13775 p1=83 p2=27946 p3=1
WAIT #11: nam='db file sequential read' ela= 11388 p1=83 p2=27953 p3=1

And so on....
Until
FETCH #11:c=0,e=1516374,p=3,cr=4,cu=0,mis=0,r=1,dep=1,og=3,tim=2617689558618
=====================

STAT #11 id=1 cnt=7 pid=0 pos=1 obj=373234 op='TABLE ACCESS BY INDEX ROWID STOCK
_DENSITY '
STAT #11 id=2 cnt=7 pid=1 pos=1 obj=373235 op='INDEX UNIQUE SCAN SDN_PK '


Tom Kyte
August 24, 2005 - 2:21 pm UTC

you had physical IO the time it was slow. Your average wait per IO was 0.02 seconds.

that seem right to you?

A reader, August 24, 2005 - 2:33 pm UTC

Two Questions

1) Why physical I/O on the first run, but there was very little on the second run, is it that the blocks of data are read to the Block buffer the first time and then accessed from there the second time?

2) How did you calculate the average wait I/O?

Thanks

Tom Kyte
August 25, 2005 - 3:15 am UTC

1) yes

2) I divided:

216 0.97 4.46
^^^ ^^^^

4.46/216


A reader, August 24, 2005 - 3:22 pm UTC

If you can bear with me a couple of more questions

3) The execution plan shows the "SDN_ID =" bit, which seems to use the primary key SDN_PK, but is the rest of the query (the part doing the SELECT MAX(SDN_ID)) missing from the plan?

4) The stats show 7 rows, but the output is only 1 row in the tkprof output. Is this sometimes expected, i.e. an inconsistency between what the stats show and what the actual output is?

Tom Kyte
August 25, 2005 - 3:23 am UTC

3) it depends on the release whether the scalar subquery will appear there or not; in earlier releases it did not.

in current releases you will see it:

select *
from
dual where dummy = ( select :x from dual )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 6 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 6 0 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 61

Rows Row Source Operation
------- ---------------------------------------------------
0 FILTER
1 TABLE ACCESS FULL DUAL
1 TABLE ACCESS FULL DUAL


4) that isn't "the stats think", those numbers are what really happened.

Fetch 7 0.00 1.55 8 28 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 15 0.14 4.51 216 1484 0 7

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 562 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
7 TABLE ACCESS BY INDEX ROWID STOCK_DENSITY
7 INDEX UNIQUE SCAN SDN_PK (object id 373235)

You got 7 rows??

A reader, August 25, 2005 - 9:40 am UTC

Had a good look, and it appears it's 7 rows because the statement was executed 7 times with 1 row output each time.

Also, you are right in that the first time the query was executed it brought in all the data, because the SECOND runs were quicker. Here's an example:

BINDS #7:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=48 offset
=0
bfp=ffffffff7cf73420 bln=22 avl=04 flg=05
value=117853
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=0 offset=
24
bfp=ffffffff7cf73438 bln=22 avl=02 flg=01
value=2000
EXEC #7:c=0,e=595,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=2623001402227
WAIT #7: nam='db file sequential read' ela= 4671 p1=92 p2=94494 p3=1
WAIT #7: nam='db file sequential read' ela= 9488 p1=80 p2=28394 p3=1

Repeat similar, until:


FETCH #7:c=70000,e=4668170,p=147,cr=288,cu=0,mis=0,r=1,dep=1,og=3,tim=2623006070
478
=====================

----- Now SECOND RUN in the same trace file, later on ----------
BINDS #7:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=48 offset
=0
bfp=ffffffff7cf9d9f0 bln=22 avl=04 flg=05
value=117853
bind 1: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=0 offset=
24
bfp=ffffffff7cf9da08 bln=22 avl=02 flg=01
value=2000
EXEC #7:c=0,e=496,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=2623008309729
FETCH #7:c=10000,e=3594,p=0,cr=288,cu=0,mis=0,r=1,dep=1,og=3,tim=2623008313402



The second fetch was pretty quick, and NO db file sequential reads!

Thanks


Ravi, August 30, 2005 - 6:42 am UTC

From Posting above:

WAIT #11: nam='db file sequential read' ela= 215088 p1=83 p2=24257 p3=1
WAIT #11: nam='db file sequential read' ela= 114602 p1=83 p2=24298 p3=1
WAIT #11: nam='db file sequential read' ela= 92144 p1=83 p2=27347 p3=1
WAIT #11: nam='db file sequential read' ela= 20808 p1=83 p2=27348 p3=1
WAIT #11: nam='db file sequential read' ela= 32513 p1=83 p2=27694 p3=1
WAIT #11: nam='db file sequential read' ela= 1382 p1=83 p2=27697 p3=1
WAIT #11: nam='db file sequential read' ela= 13775 p1=83 p2=27946 p3=1


The time to read ONE block of data (p3=1) seems to vary wildly. Why?

We have a development environment with severe contention for OS resources; could it be that the reader was waiting for a serialized resource to read from the hard disk?

2) Can I interpret one such line as:

WAIT #11: nam='db file sequential read' ela= 215088 p1=83 p2=24257 p3=1


I asked for block 24257 of file 83 to be read, and then I waited about 0.215 seconds before I got a reply?

Thanks

Ravi


Tom Kyte
August 30, 2005 - 7:23 am UTC

physics. (disk access times)
contention.
competition.

The elapsed time is measured with different granularity in different releases. Use tkprof and it'll sum them all up for you in 9i.

A reader, August 30, 2005 - 7:34 am UTC

Not sure what you mean. Are you saying that the disk read access time indeed changed in the example due to contention/competition?

A simple yes or no would be just about enough for my simple mind.

We have a 9.2.0.5 and 8192 block size.

Tom Kyte
August 30, 2005 - 9:25 am UTC

could be physics (seek time, someone moved the heads and we have to move them back)

could be contention/competition for this resource, you had to WAIT in line for your turn to move the heads.

could be contention/competition for the wire from the computer to the disk.

could be the OS (some of the IO's were not REAL IO, but IO to the file system cache)

could be the disk (some of the IO's were not REAL IO, but IO to the cache on the disk/disk array)

and so on........

So the answer is a simple "maybe", cannot say yes or no :)

A reader, August 30, 2005 - 3:54 pm UTC

1) It's hard for me to define what "abnormal" is, but is the elapsed time HIGH for a SINGLE block read in the example?

2) Is it something we could raise with our Sysdbas? In other words, is this a hardware-related issue (contention/competition included - do we need a quicker box?) and not an Oracle-related one?

Tom Kyte
August 31, 2005 - 1:28 am UTC

1) use tkprof to get it readable.

2) look at the tkprof first, not the raw trace, aggregate it up and see what you see.

A reader, August 31, 2005 - 3:36 am UTC

Oh I see, thanks. In that case, you had said "Your average wait for IO was 0.02". Would a 0.02-second average for a physical I/O be on the higher side?

Tom Kyte
August 31, 2005 - 1:37 pm UTC

I would say "yes, I would not consider that fast"

100 IO's would be taking 2 seconds.
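The arithmetic behind that judgment, as a sketch (the second pair of figures is borrowed from a tkprof wait summary quoted further down this page):

```python
# Average wait = Total Waited / Times Waited, read straight off a
# tkprof wait-event summary.
def avg_wait(times_waited, total_waited_s):
    return total_waited_s / times_waited

# 0.02 s per read means 100 single-block reads already cost 2 seconds:
print(100 * 0.02)                           # 2.0
# For comparison, 114908 reads taking 231.76 s is about 2 ms per read:
print(round(avg_wait(114908, 231.76), 4))   # 0.002
```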

Further question on TKPROF output ...

Gary Wicke, September 13, 2005 - 6:29 pm UTC

Hi Tom

Greetings from Indiana and the INOUG.

Here is a portion of my TKPROF output. I had to kill the process that was running the query as it was taking way too long.

I set the trace via the dbms_system.set_sql_trace_in_session package as it was a spawned database session.

My question is really about the number under the ROWS column. My table has about 18 million rows in it and is being read across a DB link but it does NOT have 3672886260 rows in it!!

What gives?

Many thanks for your tireless efforts, they are indeed greatly appreciated by us, your groupies.

-gary

TKPROF output:
********************************************************************************

SELECT
/*+ NO_MERGE */
"PRODDTA_F42119_pse1.472798"."SDKCOO" "SDKCOO",
"PRODDTA_F42119_pse1.472798"."SDDOCO" "SDDOCO",
"PRODDTA_F42119_pse1.472798"."SDDCTO" "SDDCTO",
"PRODDTA_F42119_pse1.472798"."SDLNID" "SDLNID",
"PRODDTA_F42119_pse1.472798"."SDSFXO" "SDSFXO",
"PRODDTA_F42119_pse1.472798"."SDMCU" "SDMCU",
"PRODDTA_F42119_pse1.472798"."SDAN8" "SDAN8",
"PRODDTA_F42119_pse1.472798"."SDSHAN" "SDSHAN",
"PRODDTA_F42119_pse1.472798"."SDDRQJ" "SDDRQJ",
"PRODDTA_F42119_pse1.472798"."SDTRDJ" "SDTRDJ",
"PRODDTA_F42119_pse1.472798"."SDADDJ" "SDADDJ",
"PRODDTA_F42119_pse1.472798"."SDIVD" "SDIVD",
"PRODDTA_F42119_pse1.472798"."SDDGL" "SDDGL",
"PRODDTA_F42119_pse1.472798"."SDITM" "SDITM",
"PRODDTA_F42119_pse1.472798"."SDLITM" "SDLITM",
"PRODDTA_F42119_pse1.472798"."SDLNTY" "SDLNTY",
"PRODDTA_F42119_pse1.472798"."SDNXTR" "SDNXTR",
"PRODDTA_F42119_pse1.472798"."SDLTTR" "SDLTTR",
"PRODDTA_F42119_pse1.472798"."SDUOM" "SDUOM",
"PRODDTA_F42119_pse1.472798"."SDUORG" "SDUORG",
"PRODDTA_F42119_pse1.472798"."SDSOQS" "SDSOQS",
"PRODDTA_F42119_pse1.472798"."SDSOCN" "SDSOCN",
"PRODDTA_F42119_pse1.472798"."SDURRF" "SDURRF",
"PRODDTA_F42119_pse1.472798"."SDTORG" "SDTORG",
"PRODDTA_F42119_pse1.472798"."SDUSER" "SDUSER",
"PRODDTA_F42119_pse1.472798"."SDPID" "SDPID",
"PRODDTA_F42119_pse1.472798"."SDJOBN" "SDJOBN",
"PRODDTA_F42119_pse1.472798"."SDUPMJ" "SDUPMJ",
"PRODDTA_F42119_pse1.472798"."SDTDAY" "SDTDAY",
"PRODDTA_F42119_pse1.472798"."SDCO" "SDCO",
"PRODDTA_F42119_pse1.472798"."SDVR01" "SDVR01",
"PRODDTA_F42119_pse1.472798"."SDURAB" "SDURAB",
"PRODDTA_F42119_pse1.472798"."SDSOBK" "SDSOBK",
"PRODDTA_F42119_pse1.472798"."SDPSN" "SDPSN"
FROM "PRODDTA"."F42119"@"pse1.deltafaucet.com" "PRODDTA_F42119_pse1.472798"

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 4753 352.46 999.83 0 1 4789 3672805408
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4753 352.46 999.83 0 1 4789 3672805408

Misses in library cache during parse: 0
Parsing user id: 104 (???) (recursive depth: 1)


Tom Kyte
September 13, 2005 - 6:59 pm UTC

looks like bad trace data. If you take chapter 10 from Expert One-on-One Oracle - I detail the trace file records in there - you can use that to do a sanity check and see where it might be going bad.
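One such sanity check can be automated: sum the r= field of every FETCH record for the cursor and compare it with the ROWS figure tkprof printed. A minimal sketch (my own helper, not the book's procedure):

```python
import re

# Match FETCH records and capture the cursor number and the r= (rows) field.
FETCH_RE = re.compile(r'^FETCH #(\d+):.*?,r=(\d+),')

def total_fetched_rows(trace_lines, cursor_no):
    """Sum the r= field of every FETCH record for one cursor - the
    same figure tkprof reports in its ROWS column for the Fetch call."""
    total = 0
    for line in trace_lines:
        m = FETCH_RE.match(line)
        if m and int(m.group(1)) == cursor_no:
            total += int(m.group(2))
    return total

trace = [
    "FETCH #7:c=10000,e=3594,p=0,cr=288,cu=0,mis=0,r=1,dep=1,og=3,tim=2623008313402",
    "FETCH #7:c=0,e=40,p=0,cr=3,cu=0,mis=0,r=100,dep=1,og=3,tim=2623008313500",
]
print(total_fetched_rows(trace, 7))   # 101
```

If the sum over the raw FETCH records is nowhere near 3.6 billion, the trace records themselves are where the number went bad.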

Huge elapsed time in 10046 tracefile

Kobus Theron, November 03, 2005 - 2:37 pm UTC

Tom,

Any idea why this large elapsed time would be reported in a 10046 tracefile?
See raw tracefile info below where
EXEC #25:c=0,e=1104515301999379,

This is a snippet from the tkprof report:
********************************************************************************

insert into AccAgeAnalysisDay ( AccAgeAnalysisDay.kAccAgeAnalysisDayID ,
AccAgeAnalysisDay.fAccAgeAnalysisID ,
AccAgeAnalysisDay.fSubAccAgeAnalysisID ,
AccAgeAnalysisDay.fAgeAnalysisDayCodeID , AccAgeAnalysisDay.Balance ,
AccAgeAnalysisDay.LockStamp , AccAgeAnalysisDay.AuditDateTime ,
AccAgeAnalysisDay.fAuditSystemFunctionID , AccAgeAnalysisDay.fAuditUserCode
)
values
( :p1 , :p2 , :p3 , :p4 , :p5 , :p6 , :p7 , :p8 , :p9 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 45450 10.20 1104515313.83 272 1006 739365 45450
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 45450 10.20 1104515313.83 272 1006 739365 45450

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 34

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 45450 0.00 0.03
SQL*Net message from client 45450 49.03 108.08
db file sequential read 274 0.03 1.28
log file sync 5 0.00 0.00
control file sequential read 12 0.00 0.00
async disk IO 93 0.01 0.13
db file single write 1 0.00 0.00
control file parallel write 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
********************************************************

Parts of the raw trace file:

/oraj/oradata/oplei/oplei_ora_5738712.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /ora01/app/oracle/product/9.2.0
System name: AIX
Node name: adm-unx3
Release: 2
Version: 5
Machine: 0000B4B4C00
Instance name: oplei
Redo thread mounted by this instance: 1
Oracle process number: 20
Unix process pid: 5738712, image: oracle@adm-unx3 (TNS V1-V3)

*** 2005-11-03 15:02:14.826
*** SESSION ID:(20.4) 2005-11-03 15:02:14.821
WAIT #18: nam='SQL*Net message to client' ela= 2 p1=675562835 p2=1 p3=0
WAIT #18: nam='SQL*Net message from client' ela= 21867 p1=675562835 p2=1 p3=0
APPNAME mod=' ? @admdev-unx (TNS V1-V3)' mh=0 act='' ah=0
=====================
PARSING IN CURSOR #18 len=1560 dep=0 uid=34 oct=3 lid=34 tim=1104514584813244 hv=1316610820 ad='3f51d030'
SELECT EnrolPresSite.kEnrolmentPresentationID, EnrolPresSite.EnrolPresSiteType, EnrolPresSite.fAccDefermentID, EnrolPresSite.fAgreem
....snip....
END OF STMT
EXEC #18:c=0,e=182,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1104514584813230
WAIT #18: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
...snip....
=====================
PARSING IN CURSOR #25 len=435 dep=0 uid=34 oct=2 lid=34 tim=1104514585267648 hv=90086482 ad='3f39bc88'
insert into AccAgeAnalysisDay ( AccAgeAnalysisDay.kAccAgeAnalysisDayID , AccAgeAnalysisDay.fAccAgeAnalysisID , AccAgeAnalysisDay.fSubAcc
AgeAnalysisID , AccAgeAnalysisDay.fAgeAnalysisDayCodeID , AccAgeAnalysisDay.Balance , AccAgeAnalysisDay.LockStamp , AccAgeAnalysisDay.Au
ditDateTime , AccAgeAnalysisDay.fAuditSystemFunctionID , AccAgeAnalysisDay.fAuditUserCode ) values ( :p1 , :p2 , :p3 , :p4 , :p5 ,
:p6 , :p7 , :p8 , :p9 )
END OF STMT
EXEC #25:c=0,e=189,p=0,cr=0,cu=16,mis=0,r=1,dep=0,og=4,tim=1104514585267645
WAIT #25: nam='SQL*Net message to client' ela= 0 p1=675562835 p2=1 p3=0
WAIT #25: nam='SQL*Net message from client' ela= 509 p1=675562835 p2=1 p3=0
EXEC #21:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1104514585268251

........ The cursor is at line 287 of 500000
..... from here until line 481035 in the file, cursor #25 has 8362 EXEC entries where the
max elapsed time = 74596 and the average = 232
...snip...
.......The cursor is at line 481035 of 500000

WAIT #22: nam='SQL*Net message from client' ela= 952 p1=675562835 p2=1 p3=0
EXEC #25:c=0,e=159,p=0,cr=0,cu=16,mis=0,r=1,dep=0,og=4,tim=1104515301860344
WAIT #25: nam='SQL*Net message to client' ela= 0 p1=675562835 p2=1 p3=0
WAIT #25: nam='SQL*Net message from client' ela= 473 p1=675562835 p2=1 p3=0
EXEC #21:c=0,e=24,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1104515301860896
WAIT #21: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
FETCH #21:c=0,e=30,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=4,tim=1104515301860939
WAIT #21: nam='SQL*Net message from client' ela= 31977 p1=675562835 p2=1 p3=0
EXEC #22:c=0,e=80,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=4,tim=1104515301893034
WAIT #22: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
WAIT #22: nam='SQL*Net message from client' ela= 101108 p1=675562835 p2=1 p3=0
EXEC #21:c=0,e=102,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1104515301994353
WAIT #21: nam='SQL*Net message to client' ela= 2 p1=675562835 p2=1 p3=0
FETCH #21:c=0,e=57,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=4,tim=1104515301994428
WAIT #21: nam='SQL*Net message from client' ela= 1219 p1=675562835 p2=1 p3=0
EXEC #22:c=0,e=174,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=4,tim=1104515301995893
WAIT #22: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
WAIT #22: nam='SQL*Net message from client' ela= 4051 p1=675562835 p2=1 p3=0
EXEC #25:c=0,e=1104515301999379,p=0,cr=0,cu=16,mis=0,r=1,dep=0,og=4,tim=1104515301999379
WAIT #25: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
WAIT #25: nam='SQL*Net message from client' ela= 853 p1=675562835 p2=1 p3=0
EXEC #21:c=0,e=39,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1104515302000349
WAIT #21: nam='SQL*Net message to client' ela= 0 p1=675562835 p2=1 p3=0
FETCH #21:c=0,e=54,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=4,tim=1104515302000417
WAIT #21: nam='SQL*Net message from client' ela= 418 p1=675562835 p2=1 p3=0
EXEC #22:c=0,e=112,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=4,tim=1104515302000991
...snip....



Tom Kyte
November 04, 2005 - 2:48 am UTC

looks like a bug - an overflow of some sort. Please utilize support (yes, I've heard of this sporadically and have seen it)

RE: Huge elapsed time

Frank, November 04, 2005 - 8:09 am UTC

@Kobus Theron

I have seen this often when someone used a version 8 TKPROF on a version 9 trace file.
The tkprof output shows the version of tkprof that produced it.
Thought this might help you.

Frank

Tom Kyte
November 04, 2005 - 8:54 am UTC

the number in the trace file itself is botched:

EXEC #22:c=0,e=174,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=4,tim=1104515301995893
WAIT #22: nam='SQL*Net message to client' ela= 1 p1=675562835 p2=1 p3=0
WAIT #22: nam='SQL*Net message from client' ela= 4051 p1=675562835 p2=1 p3=0
EXEC #25:c=0,e=1104515301999379,p=0,cr=0,cu=16,mis=0,r=1,dep=0,og=4,tim=1104515301999379


see the big e= value...
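A quick way to spot such botched records in a raw trace is to flag e= values that are impossibly large or, as in this case, equal to the tim= timestamp itself. A hedged sketch (the one-hour cutoff is an arbitrary choice of mine):

```python
import re

# Match PARSE/EXEC/FETCH records, capturing the e= and tim= fields.
REC_RE = re.compile(r'^(PARSE|EXEC|FETCH) #\d+:.*?e=(\d+),.*?tim=(\d+)')

def suspicious_elapsed(line, max_us=3_600_000_000):
    """Flag records whose e= field looks corrupt: absurdly large
    (over an hour here - an arbitrary cutoff) or, as in the bug
    discussed above, equal to the tim= timestamp."""
    m = REC_RE.match(line)
    if not m:
        return False
    e, tim = int(m.group(2)), int(m.group(3))
    return e > max_us or e == tim

print(suspicious_elapsed(
    "EXEC #25:c=0,e=1104515301999379,p=0,cr=0,cu=16,mis=0,r=1,dep=0,og=4,"
    "tim=1104515301999379"))   # True
print(suspicious_elapsed(
    "EXEC #22:c=0,e=174,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=4,"
    "tim=1104515301995893"))   # False
```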

RE: Huge elapsed time

Frank, November 04, 2005 - 12:08 pm UTC

Ah, ok. Didn't see that one.

Tom Kyte
November 04, 2005 - 5:31 pm UTC

no worries, happens to me all of the time :)

trace file size limit and tkprof

Menon, November 05, 2005 - 5:05 pm UTC

Hi Tom
When I run a query and tkprof it, the trace file is getting curtailed due to a system limit. I still get the query stats as follows:

---
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 140497 16.85 242.00 114908 799768 0 140497
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 140499 16.85 242.01 114908 799768 0 140497

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 32 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 114908 0.11 231.76
********************************************************************************

---

When I look at the end of the raw trace file, it shows that the file was cut off while waiting on "db file sequential read", as follows:

--------

FETCH #4:c=0,e=39,p=0,cr=5,cu=0,mis=0,r=1,dep=1,og=4,tim=1104712802199066
WAIT #4: nam='db file sequential read' ela= 412 p1=513 p2=26900 p3=1
FETC

*** DUMP FILE SIZE IS LIMITED TO 52428800 BYTES ***

----

I have two questions:

1. If the query was still waiting on an event when the trace was curtailed, that means the query was still running at that point. How does tkprof get the stats for the query from the trace file, since the stats are available only when the query is complete?

2. What can one do about the "db file sequential read" wait :)? I know it is related to an index block read, but what are the alternatives for dealing with it?

Have a good one!




Tom Kyte
November 06, 2005 - 8:22 am UTC

you have an incomplete trace, tkprof is just showing you what is available. There is missing data here, you have an incomplete picture.

You cannot obviously get from the trace file what is not placed there - so the "stat" records - which come pretty much dead last (cannot give you the execution stats until they have "finished") won't be available, they were never recorded.


You'd have to look at the plan and the execution profile of this query - if it is getting hundreds of thousands of rows, there is a smaller and smaller chance you even want it using indexes - indexes could be the problem here. Insufficient data to comment further; you can peek at this, however:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154


yes but....

Menon, November 06, 2005 - 2:12 pm UTC

"You cannot obviously get from the trace file what is not placed there - so the
"stat" records - which come pretty much dead last (cannot give you the execution
stats until they have "finished") won't be available, they were never recorded."

agreed - that is what makes sense to me - but my question was: how did tkprof show the stats if the query was still going on when the trace file got curtailed? The stats in tkprof come right after the query, so I am wondering how that happened? When I say stats I mean the following info:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 140497 16.85 242.00 114908 799768 0 140497
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 140499 16.85 242.01 114908 799768 0 140497


And in terms of "db file sequential read" wait event, I guess I will have to debug why the CBO is going for it if it is not the right alternative...

Thanx!



Tom Kyte
November 06, 2005 - 3:48 pm UTC

those are logged as they happen - I thought you meant the STAT records that have the plan used and the number of rows flowing out of each step of the plan. The STAT records are not recorded until the end.

But, every time you FETCH, the FETCH record is written with the amount of work done on that fetch. TKPROF simply aggregates the ones it sees.
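That aggregation is easy to mimic: every FETCH record carries its own per-call statistics, so even a truncated trace rolls up into call-level totals - only the end-of-cursor STAT records are lost. A sketch (my own, loosely imitating what tkprof does; it is not tkprof's actual algorithm):

```python
import re
from collections import Counter

# Capture every key=value pair on a trace record line.
FIELD_RE = re.compile(r'(\w+)=(\d+)')

def aggregate_fetches(trace_lines, cursor_no):
    """Roll up FETCH records for one cursor: sum c (cpu), e (elapsed),
    p (disk), cr (query), cu (current) and r (rows) across calls.
    (tim also gets summed here; real tkprof ignores it.)"""
    totals = Counter()
    prefix = f"FETCH #{cursor_no}:"
    for line in trace_lines:
        if line.startswith(prefix):
            totals['count'] += 1
            for key, value in FIELD_RE.findall(line):
                totals[key] += int(value)
    return totals

t = aggregate_fetches([
    "FETCH #4:c=0,e=39,p=0,cr=5,cu=0,mis=0,r=1,dep=1,og=4,tim=1104712802199066",
    "FETCH #4:c=0,e=412,p=1,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=1104712802199500",
], 4)
print(t['count'], t['cr'], t['r'])   # 2 8 2
```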

Ah!

Menon, November 06, 2005 - 7:16 pm UTC

Thanx - that makes sense! Have a great day!

A reader, November 22, 2005 - 9:42 am UTC

Tom

While doing a 10046 trace, I noticed that if I have really large queries, the trace files get truncated on my Unix box. The vi editor says "Line Too Long" and stops at the point of truncation. Is there an ALTER SESSION setting that could sort this?

Ravi

Tom Kyte
November 22, 2005 - 10:26 am UTC

that is your editor, not the trace file itself in this case.

Unless you see a message to the effect "file exceeds max_dump_file_size, truncated", we didn't truncate it.

Use "more"
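The line-length limit lives in the editor, not the file, which is easy to demonstrate: plain line-by-line reads (as done by more, grep, or tkprof itself) handle trace lines of any length. A sketch ("trace.trc" is a stand-in file name, not from the thread):

```python
# Old vi builds reject lines beyond an internal limit; the trace file
# itself is intact. Build a stand-in trace with one very long SQL line.
long_sql = "SELECT " + ", ".join(f"col{i}" for i in range(5000))
with open("trace.trc", "w") as f:
    f.write(long_sql + "\n")
    f.write("WAIT #11: nam='db file sequential read' ela= 412 "
            "p1=513 p2=26900 p3=1\n")

# A plain line-by-line read has no line-length limit:
with open("trace.trc") as f:
    lengths = [len(line.rstrip("\n")) for line in f]
print(len(lengths), max(lengths) > 30000)   # 2 True
```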

db file sequential read

Gerald Koerkenmeier, December 03, 2005 - 10:31 pm UTC

Tom, I am having trouble with an insert statement within a PL/SQL package that runs in about 1 minute on my test server, but takes roughly 78 minutes on production (both servers are identical in every respect). What makes the issue more troubling is that when I pull the insert out of the PL/SQL code and run it in SQL*Plus directly it runs in about 1 minute as well. Here is the query and the comparative TKPROF results.

Using SQL*Plus to run the insert directly:

insert into tr_ckt
(acna, btn, cac, ckt_sts, ckt_frmt, ckt_frmt_origl, cca_id,
cca_id_rule, cca_rptg_addr, ccna, ckt_cnt, ckt_id,
ckt_id_origl, cust_addr, cust_nm, cust_nm_origl, data_src,
file_id, lata_nbr, lata_origl, maint_ctr, mcn, moyr,
p2_addr, sbc_aflt, sbc_rgn, st_cd, svc_cat, svc_cd,
svc_priority, cca_crtd_dt, cca_crtd_by, cca_mdfd_dt,
cca_mdfd_by, cca_comments)
(SELECT acna, btn, cac, ckt_sts, ckt_frmt, ckt_frmt_origl, cca_id,
cca_id_rule, cca_rptg_addr, ccna, ckt_cnt, ckt_id,
ckt_id_origl, cust_addr, cust_nm, cust_nm_origl, data_src,
file_id, lata_nbr, lata_origl, maint_ctr, mcn, moyr, p2_addr,
sbc_aflt, sbc_rgn, st_cd, svc_cat, svc_cd, svc_priority,
SYSDATE, 'GK7692', NULL, NULL, NULL
FROM ts_ckt
WHERE cca_id IS NOT NULL)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 4 0 0
Execute 1 23.23 50.79 144 66755 286809 2512275
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 23.23 50.79 144 66759 286809 2512275

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 78

Rows Row Source Operation
------- ---------------------------------------------------
0 PARTITION HASH ALL PARTITION: 1 16
0 TABLE ACCESS FULL TS1000_CKT PARTITION: 1 16


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
rdbms ipc reply 2 0.00 0.00
enqueue 6 3.07 6.87
PX Deq: Join ACK 6 0.00 0.00
PX Deq: Parse Reply 11 0.00 0.00
PX Deq: Execute Reply 277 0.57 7.88
db file sequential read 144 1.19 9.67
log file switch completion 155 0.06 1.22
log file switch (checkpoint incomplete) 18 1.02 1.07
log buffer space 2 0.00 0.00
PX Deq: Signal ACK 10 0.10 0.10
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00


And from within the PL/SQL:

INSERT INTO TR_CKT (ACNA, BTN, CAC, CKT_STS, CKT_FRMT, CKT_FRMT_ORIGL, CCA_ID,
CCA_ID_RULE, CCA_RPTG_ADDR, CCNA, CKT_CNT, CKT_ID, CKT_ID_ORIGL, CUST_ADDR,
CUST_NM, CUST_NM_ORIGL, DATA_SRC, FILE_ID, LATA_NBR, LATA_ORIGL, MAINT_CTR,
MCN, MOYR, P2_ADDR, SBC_AFLT, SBC_RGN, ST_CD, SVC_CAT, SVC_CD,
SVC_PRIORITY, CCA_CRTD_DT, CCA_CRTD_BY, CCA_MDFD_DT, CCA_MDFD_BY,
CCA_COMMENTS) (SELECT ACNA, BTN, CAC, CKT_STS, CKT_FRMT, CKT_FRMT_ORIGL,
CCA_ID, CCA_ID_RULE, CCA_RPTG_ADDR, CCNA, CKT_CNT, CKT_ID, CKT_ID_ORIGL,
CUST_ADDR, CUST_NM, CUST_NM_ORIGL, DATA_SRC, FILE_ID, LATA_NBR, LATA_ORIGL,
MAINT_CTR, MCN, MOYR, P2_ADDR, SBC_AFLT, SBC_RGN, ST_CD, SVC_CAT, SVC_CD,
SVC_PRIORITY, SYSDATE, :B1 , NULL, NULL, NULL FROM TS_CKT WHERE CCA_ID IS
NOT NULL)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 6088.96 4713.14 19683 741559938 892667775 2512275
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6088.96 4713.14 19683 741559938 892667775 2512275

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 78 (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
0 PARTITION HASH ALL PARTITION: 1 16
0 TABLE ACCESS FULL TS1000_CKT PARTITION: 1 16


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
rdbms ipc reply 4 0.00 0.00
enqueue 2 0.22 0.22
PX Deq: Join ACK 10 0.00 0.00
PX Deq: Parse Reply 3 0.00 0.00
PX Deq: Execute Reply 4 0.07 0.07
db file sequential read 19683 1.26 73.80
log file switch completion 91 0.26 3.23
latch free 6 0.00 0.00
PX Deq: Signal ACK 1 0.00 0.00

Obviously the issue is related to the db file sequential read wait event. I just can't figure out why it would be so significantly larger in the PL/SQL code, especially since the code is exactly the same as the code on the test server, where it runs in about 1 minute as well.

Tom Kyte
December 04, 2005 - 6:19 am UTC

.... (both servers are identical in every respect) ....

almost impossible to say that.



Looks like the table in one of the databases being inserted into has lots of indexes and the other might not?


let's see:

select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;
select dbms_metadata.get_dependent_ddl( 'INDEX', 'T' ) from dual;
select dbms_metadata.get_dependent_ddl( 'CONSTRAINT', 'T' ) from dual;


where 'T' is your inserted into table name.

db file sequential read

Gerald Koerkenmeier, December 04, 2005 - 9:33 am UTC

I have included below the output from the functions you gave. However, the 2 TKPROF results I gave before were both for the same database (Production). One was within PL/SQL and one was issuing the insert directly from SQL*Plus. So table differences should not affect the differences there, right? Both were done one after another on the system after-hours with no other DB users.

Production:

gmcca@GMCC01P> select dbms_metadata.get_ddl( 'TABLE', 'TR1000_CKT' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','TR1000_CKT')
--------------------------------------------------------------------------------

CREATE TABLE "GMCCA"."TR1000_CKT"
( "ACNA" VARCHAR2(5),
"BTN" VARCHAR2(10),
"CAC" VARCHAR2(8),
"CKT_STS" VARCHAR2(2),
"CKT_FRMT" VARCHAR2(1),
"CKT_FRMT_ORIGL" VARCHAR2(1),
"CCA_ID" NUMBER(7,0),
"CCA_ID_RULE" VARCHAR2(10),
"CCA_RPTG_ADDR" VARCHAR2(80),
"CCNA" VARCHAR2(5),
"CKT_CNT" NUMBER(38,0),
"CKT_ID" VARCHAR2(46),
"CKT_ID_ORIGL" VARCHAR2(46),
"CUST_ADDR" VARCHAR2(80),
"CUST_NM" VARCHAR2(50),
"CUST_NM_ORIGL" VARCHAR2(50),
"DATA_SRC" VARCHAR2(4),
"FILE_ID" NUMBER(3,0),
"LATA_NBR" NUMBER(3,0),
"LATA_ORIGL" VARCHAR2(5),
"MAINT_CTR" VARCHAR2(11),
"MCN" VARCHAR2(20),
"MOYR" DATE,
"P2_ADDR" VARCHAR2(50),
"SBC_AFLT" NUMBER(3,0),
"SBC_RGN" VARCHAR2(1),
"ST_CD" VARCHAR2(2),
"SVC_CAT" VARCHAR2(9),
"SVC_CD" VARCHAR2(2),
"SVC_PRIORITY" NUMBER(2,0),
"CCA_CRTD_DT" DATE NOT NULL ENABLE,
"CCA_CRTD_BY" VARCHAR2(6) NOT NULL ENABLE,
"CCA_MDFD_DT" DATE,
"CCA_MDFD_BY" VARCHAR2(6),
"CCA_COMMENTS" VARCHAR2(255)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "GMCC_01_DAT"
MONITORING PARALLEL

gmcca@GMCC01P> select dbms_metadata.get_dependent_ddl( 'INDEX', 'TR1000_CKT' ) from dual;

DBMS_METADATA.GET_DEPENDENT_DDL('INDEX','TR1000_CKT')
--------------------------------------------------------------------------------

CREATE BITMAP INDEX "GMCCA"."TR1000_CKT_B1" ON "GMCCA"."TR1000_CKT" ("MOYR", "
ST_CD", "SBC_AFLT", "SVC_CAT", "CCA_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "GMCC_01_IDX"
PARALLEL

gmcca@GMCC01P> select dbms_metadata.get_dependent_ddl( 'CONSTRAINT', 'TR1000_CKT' ) from dual;
ERROR:
ORA-31608: specified object of type CONSTRAINT not found
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 631
ORA-06512: at "SYS.DBMS_METADATA", line 1282
ORA-06512: at line 1



Test:


gmcca@GMCC01T> select dbms_metadata.get_ddl( 'TABLE', 'TR1000_CKT' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','TR1000_CKT')
--------------------------------------------------------------------------------

CREATE TABLE "GMCCA"."TR1000_CKT"
( "ACNA" VARCHAR2(5),
"BTN" VARCHAR2(10),
"CAC" VARCHAR2(8),
"CKT_STS" VARCHAR2(2),
"CKT_FRMT" VARCHAR2(1),
"CKT_FRMT_ORIGL" VARCHAR2(1),
"CCA_ID" NUMBER(7,0),
"CCA_ID_RULE" VARCHAR2(10),
"CCA_RPTG_ADDR" VARCHAR2(80),
"CCNA" VARCHAR2(5),
"CKT_CNT" NUMBER(38,0),
"CKT_ID" VARCHAR2(46),
"CKT_ID_ORIGL" VARCHAR2(46),
"CUST_ADDR" VARCHAR2(80),
"CUST_NM" VARCHAR2(50),
"CUST_NM_ORIGL" VARCHAR2(50),
"DATA_SRC" VARCHAR2(4),
"FILE_ID" NUMBER(3,0),
"LATA_NBR" NUMBER(3,0),
"LATA_ORIGL" VARCHAR2(5),
"MAINT_CTR" VARCHAR2(11),
"MCN" VARCHAR2(20),
"MOYR" DATE,
"P2_ADDR" VARCHAR2(50),
"SBC_AFLT" NUMBER(3,0),
"SBC_RGN" VARCHAR2(1),
"ST_CD" VARCHAR2(2),
"SVC_CAT" VARCHAR2(9),
"SVC_CD" VARCHAR2(2),
"SVC_PRIORITY" NUMBER(2,0),
"CCA_CRTD_DT" DATE NOT NULL ENABLE,
"CCA_CRTD_BY" VARCHAR2(6) NOT NULL ENABLE,
"CCA_MDFD_DT" DATE,
"CCA_MDFD_BY" VARCHAR2(6),
"CCA_COMMENTS" VARCHAR2(255)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "GMCC_01_DAT"
MONITORING PARALLEL

gmcca@GMCC01T> select dbms_metadata.get_dependent_ddl( 'INDEX', 'TR1000_CKT' ) from dual;

DBMS_METADATA.GET_DEPENDENT_DDL('INDEX','TR1000_CKT')
--------------------------------------------------------------------------------

CREATE BITMAP INDEX "GMCCA"."TR1000_CKT_B1" ON "GMCCA"."TR1000_CKT" ("MOYR", "
ST_CD", "SBC_AFLT", "SVC_CAT", "CCA_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "GMCC_01_IDX"
PARALLEL

gmcca@GMCC01T> select dbms_metadata.get_dependent_ddl( 'CONSTRAINT', 'TR1000_CKT' ) from dual;
ERROR:
ORA-31608: specified object of type CONSTRAINT not found
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 631
ORA-06512: at "SYS.DBMS_METADATA", line 1282
ORA-06512: at line 1

Tom Kyte
December 04, 2005 - 11:50 am UTC

give me the entire sequence of events here, from cradle to grave, explain what is in the tables and all before/during/after. Some crucial piece of information is "missing" here.

can we see all of the sql involved here - in order - all of the conditions?

can you run this with statistics_level set to all, and make sure to exit SQL*Plus so the STAT records are complete? (There are no row counts or anything in the row source operations above.)

versions?


you said before

"Obviously the issue is related to the db file sequential read wait event."

i would say definitely "not", look at the query consistent mode gets - I cannot explain them yet.

db file sequential read

Gerald Koerkenmeier, December 04, 2005 - 1:00 pm UTC

Tom,

The version is 9.2.0.5

This is one step in an ETL load package (with around 3500 lines). If you want the full code, I can give it, but here is the way it works:

Extract data from a whole bunch of external tables into a staging table which is truncated before it is loaded. The total number of rows in the staging table for the month in question is 2,512,275. The staging table is hash partitioned.

Validate and transform the data in the staging table using several stages of single SQL statements to update the staging table with lookup values.

Load the data into the reporting tables (one detail and one summary table). In this case (circuit data) the detail table stores only one month of data, while the summary table stores 24 months. The detail table cannot be truncated because although the data will be replaced with data from the current load, the load can be incremental, reloading files that did not exist during the first load of the month, or have been updated since then. So if it contains data from a previous month it is all deleted, otherwise it deletes only the data that will be replaced.

The SQL in question is the insert into the reporting detail table (TR_CKT) from the staging table (TS_CKT). The entire load step is done in one transaction and consists of the following procedure. Global variables are used for the load month (g_load_moyr), current data type and name (g_curr_data_type - 'C', g_curr_data_type_nm = 'Circuit'), current load id (g_load_id), and the userid running the load (g_sbcuid). Log_actvty is an autonomous logging procedure.




PROCEDURE load_ckt
AS
BEGIN
/* Deleting from reporting table. */
log_actvty (-20110, g_curr_data_type_nm);

DELETE FROM tr_ckt
WHERE (moyr != g_load_moyr)
OR ( moyr = g_load_moyr
AND (sbc_aflt, sbc_rgn) IN (
SELECT DISTINCT sbc_aflt, sbc_rgn
FROM t_file_load_hist
WHERE load_id = g_load_id
AND data_type = g_curr_data_type
AND error_cd IS NULL)
);

DELETE FROM tr_ckt_sumy
WHERE (sbc_aflt, sbc_rgn) IN (
SELECT DISTINCT sbc_aflt, sbc_rgn
FROM t_file_load_hist
WHERE load_id = g_load_id
AND data_type = g_curr_data_type
AND error_cd IS NULL)
AND moyr = g_load_moyr;

/* Loading reporting table. */
log_actvty (-20111, 'tr_ckt');

INSERT INTO tr_ckt
(acna, btn, cac, ckt_sts, ckt_frmt, ckt_frmt_origl, cca_id,
cca_id_rule, cca_rptg_addr, ccna, ckt_cnt, ckt_id,
ckt_id_origl, cust_addr, cust_nm, cust_nm_origl, data_src,
file_id, lata_nbr, lata_origl, maint_ctr, mcn, moyr,
p2_addr, sbc_aflt, sbc_rgn, st_cd, svc_cat, svc_cd,
svc_priority, cca_crtd_dt, cca_crtd_by, cca_mdfd_dt,
cca_mdfd_by, cca_comments)
(SELECT acna, btn, cac, ckt_sts, ckt_frmt, ckt_frmt_origl, cca_id,
cca_id_rule, cca_rptg_addr, ccna, ckt_cnt, ckt_id,
ckt_id_origl, cust_addr, cust_nm, cust_nm_origl, data_src,
file_id, lata_nbr, lata_origl, maint_ctr, mcn, moyr, p2_addr,
sbc_aflt, sbc_rgn, st_cd, svc_cat, svc_cd, svc_priority,
SYSDATE, g_sbcuid, NULL, NULL, NULL
FROM ts_ckt
WHERE cca_id IS NOT NULL);

/* Loading reporting table. */
log_actvty (-20111, 'tr_ckt_sumy');

INSERT INTO tr_ckt_sumy
(moyr, cca_id, sbc_aflt, sbc_rgn, st_cd, svc_cat, ckt_cnt,
cca_crtd_dt, cca_crtd_by, cca_mdfd_dt, cca_mdfd_by,
cca_comments)
(SELECT moyr, cca_id, sbc_aflt, sbc_rgn, st_cd, svc_cat,
SUM (ckt_cnt), SYSDATE, g_sbcuid, NULL, NULL, NULL
FROM ts_ckt
WHERE cca_id IS NOT NULL
GROUP BY moyr, cca_id, sbc_aflt, sbc_rgn, st_cd, svc_cat);

/* Updating loaded_rec_cnt in t_file_load_hist table. */
log_actvty (-20112);
update_loaded_cnt;
COMMIT;
END load_ckt;




You mentioned, "can you run this with statistics level set to all and make sure to exit sqlplus so the stat records are complete (no row counts or anything in the row source operations?)"

Since the process is loaded via a job in a separate session (and because I didn't want to trace all the other, unnecessary procedures), I created the trace using sys.dbms_system.set_ev with a level of 12. Do you mean something different by 'statistics level set to all'?

Is it possible that all the query consistent mode gets are related to the fact that I am deleting 2.5 million records (all from the TR_CKT table) and then loading 2.5 million new records (from the TS_CKT table) without committing in between? I want to avoid a commit there if at all possible, because if any section of the load step fails, the old data will remain. Maybe I don't have enough memory allocated to get this huge operation done without a commit?

Unexpected logical I/O

Jonathan Lewis, December 04, 2005 - 2:49 pm UTC

As Tom points out, the extra time is spent in logical I/O (hence the excessive CPU), not the db file sequential reads.

You are suffering from a combination of events. But your thought about the absence of a commit is correct; it is one of the contributing factors. I am prepared to guess, however, that you are using ASSM (automatic segment space management) and that this is the other half of the problem.

Consider the example - 9.2.0.6 (I haven't tried it on 10g yet):

drop table t1;
create table t1 as select * from all_objects;

begin
    delete from t1;
    -- commit;
    insert into t1 select * from all_objects;
    commit;
end;
/

I've done 4 variants on this:
t1 in ASSM, with commit,
t1 in ASSM, no commit,
t1 in freelist management, with commit,
t1 in freelist management, no commit.

Three of the four showed results like:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.03          0          0          0           0
Execute      1      0.96       1.01          0      97989       1617       29188
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.00       1.04          0      97989       1617       29188


The example in ASSM, with NO commit showed:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.03          0          0          0           0
Execute      1      1.23       1.38          0     154588      71133       29188
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.26       1.41          0     154588      71133       29188


Most of the extra logical I/O was due to acquiring, searching, and updating bitmap space management blocks (which I could see by checking x$kcbsw).
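The size of that overhead can be checked directly from the two call tables above. A small sketch of the arithmetic, taking logical I/O as the sum of the "query" (consistent) and "current" mode gets:

```python
# Figures from the two tkprof call tables above: ASSM with no commit
# versus the other three variants, which all looked alike.
baseline      = {"query": 97989,  "current": 1617}    # with commit / freelists
assm_nocommit = {"query": 154588, "current": 71133}   # ASSM, no commit

def logical_io(stats):
    """Logical I/O = consistent gets + current mode gets."""
    return stats["query"] + stats["current"]

extra = logical_io(assm_nocommit) - logical_io(baseline)
print(extra)                                           # 126115 extra block gets
print(round(logical_io(assm_nocommit) / logical_io(baseline), 2))  # 2.27x
```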



Tom Kyte
December 04, 2005 - 3:41 pm UTC

Jonathan - thanks, you beat me to the punch.


10gr2, 100,000 rows, ASSM

t1 = delete then insert then commit;
t2 = delete then commit then insert then commit;


INSERT INTO T T1 SELECT * FROM BIG_TABLE.BIG_TABLE WHERE ROWNUM <= 100000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0         28          0           0
Execute      1      5.60       8.44       1074     985719    1195124      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      5.62       8.45       1074     985747    1195124      100000
********************************************************************************
INSERT INTO T T2 SELECT * FROM BIG_TABLE.BIG_TABLE WHERE ROWNUM <= 100000


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.50       4.99       1489       3408      11060      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.51       5.00       1489       3408      11060      100000



same thing with manual segment space management:

INSERT INTO T T1 SELECT * FROM BIG_TABLE.BIG_TABLE WHERE ROWNUM <= 100000


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0         28          0           0
Execute      1      0.58       4.79       1440       4358       5909      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.59       4.81       1440       4386       5909      100000

********************************************************************************

INSERT INTO T T2 SELECT * FROM BIG_TABLE.BIG_TABLE WHERE ROWNUM <= 100000


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0         28          0           0
Execute      1      0.34       1.33          1       3325       5720      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.35       1.34          1       3353       5720      100000


This is why I wanted to see the entire process - to be able to simulate it myself...


the delete was crucial to the question.

Resolution

Gerald Koerkenmeier, December 04, 2005 - 4:02 pm UTC

So what is the resolution here? Use a delete? Do not use ASSM? Or some other combination?

Resolution

Gerald Koerkenmeier, December 04, 2005 - 10:49 pm UTC

Sorry, that should be commit, not delete. Issuing the commit solved the problem; it is now down to 2-3 minutes as opposed to 70+. I think I need to use manual segment space management, since I do not want a commit in the code to break the transaction.

Are there any guidelines about when to use ASSM, and its potential downsides?

Also - do you think that a reporting table that holds 2.5 - 3 million rows is a good candidate for hash partitioning?

Tom Kyte
December 05, 2005 - 12:32 am UTC

This would be a case whereby you might not want to be using ASSM (for obvious reasons in this case). In most "normal" circumstances, ASSM can work rather nicely and obviates the need to set freelists, freelist groups and pctused (something most people don't do anyway).

ASSM tends to "waste space" in order to achieve "higher concurrency for modifications". In a warehousing/reporting system - this is not necessarily what you are after. Also, ASSM tends to "scatter data" whereas manual segment space management will not. So, if you try to load data "sorted", you might find that ASSM thwarts that to an extent as well.


3 million rows isn't very large. You use partitioning not because a "segment is large" but because you want to gain something from it. For example:

o rolling windows (data purge)
o ease of admin (the segment is so large and needs to be managed in smaller chunks)
o you need partition elimination (but hash partitioning is dubious there in many cases)

and so on. 3 million rows is pretty small. hash partitioning is not likely to achieve anything for you (beyond perhaps slowing down the loads since we have to hash each row to figure out where to put it)

Where are the other waits?

Reader, December 05, 2005 - 4:07 am UTC

From 10046 trace file:

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       86      0.03       0.09          0          0          0           0
Execute 124428     34.02      68.41          0          2          0           0
Fetch   175502     51.78     154.58      14351    5326811          0      135484
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   300016     85.83     223.09      14351    5326813          0      135484

Misses in library cache during parse: 9

Elapsed times include waiting on following events:
Event waited on                            Times  Max. Wait  Total Waited
----------------------------------------  ------  ---------  ------------
db file sequential read                    14351       0.08         68.45
latch free                                  1519       0.15          6.30

223.09 seconds elapsed, 85.83 CPU.

We waited on something for 223.09 - 85.83 = 137.26 seconds.

68.45 -- db file sequential read
6.30 -- latch free

for a total of 74.75 seconds.

Where are the other 137.26 - 74.75 = 62.51 seconds ???

We spent 62.51 seconds waiting for .... what ?

Tom Kyte
December 06, 2005 - 4:10 am UTC

cpu perhaps.
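The per-event totals in that question come from summing the ela= values on the WAIT lines of the raw trace - the same mining of the raw file described above for binds applies to waits. A minimal sketch (the regex and the sample lines are illustrative, not a complete 10046 parser):

```python
import re
from collections import defaultdict

# Sum the ela= values per wait event from raw 10046 trace text.
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela= *(\d+)")

def sum_waits(trace_text):
    """Return {event_name: total_ela} summed over all WAIT lines."""
    totals = defaultdict(int)
    for name, ela in WAIT_RE.findall(trace_text):
        totals[name] += int(ela)
    return dict(totals)

sample = """\
WAIT #1: nam='db file sequential read' ela= 120 p1=4 p2=100 p3=1
WAIT #1: nam='db file sequential read' ela= 80 p1=4 p2=101 p3=1
WAIT #2: nam='latch free' ela= 15 p1=1 p2=2 p3=0
"""
print(sum_waits(sample))
# {'db file sequential read': 200, 'latch free': 15}
```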

To Gerald Koerkenmeier

A Reader, December 05, 2005 - 8:29 am UTC

Was the tablespace of the table in question created with auto SSM in production and with manual SSM in test?

Tom/Jonathan,

That was an excellent analysis. Are there any do's/don'ts to be considered while creating tablespaces? When ASSM should be considered, etc. I am personally creating every tablespace with ASSM.

Tom Kyte
December 06, 2005 - 4:35 am UTC

the do's and don'ts are distilled into one word:

o test


I've a feeling the systems were not the same, else they would observe in test what they observed in production.

I hate to come up with rules of thumb - but - if you look at what ASSM was designed to deal with, that will give you a hint when it might or might not apply. It was designed to improve concurrency (at the cost of space). It was predominantly designed for transactional type of systems.

10046 degrading performance - why?

IK, December 14, 2005 - 5:26 am UTC

Tom,

I have a strange problem here. I hit upon this rather accidentally. For pure learning purposes, I want to understand what's really happening.

These are the steps I took:

Connect to Oracle
Alter system flush shared_pool;
alter system flush buffer_cache;
alter session set timed_statistics=true
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
TRUNCATE TABLE runstats;

The flushing of the shared pool and buffer cache was just to see how much latching and how many physical reads get incurred. I hadn't even started my actual query, but noticed that the truncate of this 'runstats' table with 300 rows took 19 secs.

If I log in again and execute only the following (without the SQL trace):


Alter system flush shared_pool;
alter system flush buffer_cache;
TRUNCATE TABLE runstats;

It gets done in 0.15 secs.
Significant waits are on SQL*Net message from client.

My questions
------------
(1) Why does enabling tracing degrade performance? It's not just the truncate... every statement gets slower (much, much slower), and all the time it waits for 'SQL*Net message from client'. There are 'db file sequential read' waits also, but they are minor compared to the message-from-client waits.

What could be happening here?

(2) When I tested the performance of queries after disabling sql trace, I could see (from v$session_wait) that it still waits on 'SQL*Net message from client'.

My query returns 3000 records, of which one field is a BLOB (in-line, < 2000 bytes).

PL/SQL Developer is the tool I use. Records (in arrays of 10) start coming in, but it takes 35 minutes to display all 3000 records. Most of the time it waits for 'message from client'.

Is this a problem with the client application, or is it to do with the TCP settings on the server? Is there anything I can do (apart from changing the tool - sqlplus does not display BLOBs)?

PS:- I know that flushing the shared_pool and buffer_cache isn't a good thing to do - but I really want to know what is going on here ;)

Many thanks

Tom Kyte
December 14, 2005 - 8:40 am UTC

1) I cannot reproduce. I can hypothesize - perhaps table was "big", first truncate had lots of extents to free up. Second one - nothing.

I used this script:

disconnect
connect /
set timing on
drop table t;
create table t as select * from all_objects;
Alter system flush shared_pool;
alter system flush buffer_cache;
alter session set timed_statistics=true;
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
TRUNCATE TABLE t;
disconnect
connect /
set timing on
drop table t;
create table t as select * from all_objects;
Alter system flush shared_pool;
alter system flush buffer_cache;
TRUNCATE TABLE t;


They had minimal differences in response times. I would expect most things to run (much) slower with tracing on, however - you are, after all, generating the trace.


2) that means the client is taking a while to process the results.

Get a "faster" client, write a "faster" client. LOBs are not array fetched. That is really row by row - LOBs are "pointers" to the real data.

But ...

Ik, December 14, 2005 - 9:40 am UTC

Thank you very much.

(1) I thought tracing happened in the background and so the client application should not be affected by tracing at all. Shouldn't that be the case?

(2) Tom, I really understand that the LOB issue isn't related to this post. But could you please expand a little bit? Any URLs would be helpful. I never saw any documentation that mentioned the row-by-row reading of LOBs. Is this the case even if the LOB is stored in-line?

Thanks,

Tom Kyte
December 14, 2005 - 10:05 am UTC

1) tracing is done by your FOREGROUND process - your dedicated or shared server. It'll definitely impact the client, measurably.


2) lobs are pointers. You get a lob - you have to "dereference" it using dbms_lob.read or some other API available in your language. You have to go back to the database to "get it" (unless you use dbms_lob.substr() or substr to retrieve some piece of it).

A reader, December 15, 2005 - 2:48 pm UTC

Hi Tom,

Here is the tkprof output of the trace file of one of the sessions which was experiencing very bad performance when querying some of the dictionary views. This database was recently refreshed from production, and the tables were all analyzed after the refresh. We have had similar poor response quite often in this dev database ever since it was created. Any idea what could be the reason for this library cache latch wait (child latch 2)?

There are only 2 instances running on this particular box. Both were refreshed using the same method from prod and have similar init.ora memory settings. One database is quite fast, whereas the other one (whose session tkprof is given below) is having performance issues.

thanks
Anto
TKPROF: Release 9.2.0.1.0 - Production on Thu Dec 15 14:23:13 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: getlog.txt
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

select text
from
view$ where rowid=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6 0.02 0.64 0 0 0 0
Execute 6 0.00 0.22 0 0 0 0
Fetch 6 0.00 0.11 2 12 0 6
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18 0.02 0.98 2 12 0 6

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY USER ROWID VIEW$ (cr=1 r=1 w=0 time=25046 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 2 0.03 0.03
latch free 1 0.26 0.26
********************************************************************************

SELECT SQL_TEXT FROM V$SQLTEXT_WITH_NEWLINES WHERE
HASH_VALUE=TO_NUMBER(:hash) ORDER BY PIECE

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.01 0.44 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 5 4.73 369.95 0 0 0 77
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 4.74 370.40 0 0 0 77

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26

Rows Row Source Operation
------- ---------------------------------------------------
0 SORT ORDER BY (cr=0 r=0 w=0 time=369918375 us)
0 FIXED TABLE FIXED INDEX X$KGLNA1 (ind:1) (cr=0 r=0 w=0 time=369910733 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 178 2.96 8.83
SQL*Net message to client 13 0.00 0.00
SQL*Net message from client 13 248.40 343.10
********************************************************************************

select sql_text
from V$sqltext_with_newlines
where address = (select prev_sql_addr
from V$session
where username = :uname
and sid = :snum)
ORDER BY piece

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 1.52 0 4 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 4.39 339.15 0 0 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 4.41 340.68 0 4 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT ORDER BY (cr=0 r=0 w=0 time=339120039 us)
1 FILTER (cr=0 r=0 w=0 time=339064370 us)
130663 FIXED TABLE FULL X$KGLNA1 (cr=0 r=0 w=0 time=338998092 us)
1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 r=0 w=0 time=63 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 4 0.00 0.00
SQL*Net message from client 4 110.50 110.76
latch free 250 1.26 10.81
********************************************************************************

select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,
nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,
scale,183,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,
rowid,col#,property, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,
nvl(spare3,0)
from
col$ where obj#=:1 order by intcol#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.81 0 0 0 0
Execute 1 0.00 0.71 0 0 0 0
Fetch 7 0.00 0.04 0 3 0 6
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.00 1.57 0 3 0 6

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.08 0.08
********************************************************************************

select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1,
spare2
from
obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null
and linkname is null and subname is null


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.71 0 0 0 0
Execute 1 0.02 0.54 0 0 0 0
Fetch 1 0.00 0.08 0 2 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.02 1.33 0 2 0 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 1 0.00 0.00
********************************************************************************

SELECT
S.STATUS "Status",
S.SERIAL# "Serial#",
S.TYPE "Type",
S.USERNAME "DB User",
S.OSUSER "Client User",
S.SERVER "Server",
S.MACHINE "Machine",
S.MODULE "Module",
S.CLIENT_INFO "Client Info",
S.TERMINAL "Terminal",
S.PROGRAM "Program",
P.PROGRAM "O.S. Program",
s.logon_time "Connect Time",
lockwait "Lock Wait",
si.physical_reads "Physical Reads",
si.block_gets "Block Gets",
si.consistent_gets "Consistent Gets",
si.block_changes "Block Changes",
si.consistent_changes "Consistent Changes",
s.process "Process",
p.spid, p.pid, si.sid, s.audsid,
s.sql_address "Address", s.sql_hash_value "Sql Hash", s.Action,
sysdate - (s.LAST_CALL_ET / 86400) "Last Call"
FROM
V$SESSION S,
V$PROCESS P,
sys.V_$SESS_IO si
WHERE
S.paddr = P.addr(+)
and si.sid(+)=s.sid
and (s.USERNAME is not null) and (NVL(s.osuser,'x')<>'SYSTEM') and (s.type<>'BACKGROUND')
ORDER BY 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 5.48 1 11 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 1 0.03 0.06 0 0 0 8
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.07 5.55 1 11 0 8

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 3 0.00 0.00
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.08 0.16
SQL*Net more data to client 1 0.00 0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.06 7.45 1 15 0 0
Execute 4 0.01 0.00 0 0 0 0
Fetch 7 9.15 709.17 0 0 0 86
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 15 9.22 716.63 1 15 0 86

Misses in library cache during parse: 3

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 431 2.96 19.64
SQL*Net message to client 19 0.00 0.01
SQL*Net message from client 19 248.40 454.03
SQL*Net more data to client 1 0.00 0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 8 0.02 2.16 0 0 0 0
Execute 8 0.02 1.48 0 0 0 0
Fetch 14 0.00 0.24 2 17 0 12
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 30 0.04 3.89 2 17 0 12

Misses in library cache during parse: 3

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch free 2 0.26 0.26
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.08 0.08
db file sequential read 2 0.03 0.03

4 user SQL statements in session.
8 internal SQL statements in session.
12 SQL statements in session.
********************************************************************************
Trace file: getlog.txt
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
4 user SQL statements in trace file.
8 internal SQL statements in trace file.
12 SQL statements in trace file.
6 unique SQL statements in trace file.
728 lines in trace file.


Tom Kyte
December 15, 2005 - 3:02 pm UTC

may I ask why you are running these queries? You do know that queries against v$ tables are typically really querying in-memory data structures, and these require latched access (we cannot modify a data structure someone is walking...).

A sure way to lock up your system is to over-query these views. Are you sure you want to do this? We may need stats on the fixed views to get proper plans - but I'd like to understand more...

Anto, December 15, 2005 - 2:56 pm UTC

For the above , Database version is 9204

A reader, December 16, 2005 - 9:55 am UTC

Hi Tom,

These are the traces generated by a session while using the TOAD ---> DBA ---> Kill/Trace session option - wherein we can see each session, their current SQLs, and their stats so far.

These used to work nicely for all the other instances so far. As you have mentioned, we tried gathering stats on the system tables, but it did not make any difference.

We did have some excessive paging/swapping on the server due to memory constraints - but in spite of reducing the instance SGA memory dynamically, we are still facing the issue. And the surprising part is that we have 2 dev instances on this box, with the same settings and refreshed using the same method at about the same time, out of which one instance is working perfectly and only the other is having issues like this.

thanks
Anto

Tom Kyte
December 16, 2005 - 12:47 pm UTC

on *fixed* views.


suggest you:

create table t as select * from v$parameter;

export it, import it to the other database and report out the parameters that are different - I'm sure you'll find some ;)

Oracle 9i trace file explain plan against 8i database

Krishan Jaglan, December 20, 2005 - 7:40 am UTC

Hi Tom,

What I am trying to do is:
1. trace a session in 9i using event 10046 with level 12.
2. run tkprof trace.file out.txt explain=u/p@8idatabase.

What I want to achieve is to get the explain plan for all the queries in the trace file against the 8i database.

So, is it possible to get the explain plan for a 9i trace file against an 8i database?
If it's not possible, what is another way to get the explain plan? I don't have the SQL queries; I only have the trace file from the 9i database.

thanks in advance.



Tom Kyte
December 20, 2005 - 9:33 am UTC

well, you do "have" the SQL query, it is in the trace file!


but I just did a quick test, seemed to work OK for me.

Oracle 9i trace file explain plan against 8i database

Krishan Jaglan, December 21, 2005 - 6:56 am UTC

Hi Tom,

Thanks for your help.

Further to my previous query,

My question is: does it really run the query against the 8i database to create the execution plan, or does it just use the 8i database's PLAN_TABLE?

Thanks in advance


Tom Kyte
December 21, 2005 - 7:38 am UTC

it would have to use the 8i database - it has no clue what or where your 9i database is. It connects to the database you told it to connect to and runs the explain plan command.

Matching the timing of the extract with the timings in tkprof output file

A reader, March 01, 2006 - 10:04 am UTC

Hi Tom,

Here is the last portion of our tkprof output file run on our Dev environment

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.10       0.08          0          0          0           0
Execute     11     31.99      73.06      38823     128960       1679          11
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       21     32.09      73.15      38823     128960       1679          11

Misses in library cache during parse: 10
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 361 0.39 15.26
db file sequential read 122 0.08 0.71
direct path write 2 0.00 0.00
log file sync 9 0.01 0.03
SQL*Net message to client 13 0.00 0.00
SQL*Net message from client 13 0.01 0.04


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     3749      1.43       1.69          4        190          5           0
Execute   5373   7791.44   10073.47    2281799  302465763   23049461    29128198
Fetch     5923    107.17     120.27      43971    8468446          1        4101
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    15045   7900.04   10195.44    2325774  310934399   23049467    29132299

Misses in library cache during parse: 301
Misses in library cache during execute: 2

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 147364 24.21 1151.11
db file sequential read 111965 20.11 690.97
direct path write 3082 0.95 198.92
control file sequential read 66 0.00 0.00
direct path read 2425 0.31 23.62
log buffer space 1745 0.98 347.98
latch free 14 0.03 0.08
free buffer waits 189 0.98 142.92
log file switch completion 68 0.61 7.82
log file sync 29 0.73 6.70
rdbms ipc reply 48 0.23 0.59

440 user SQL statements in session.
3324 internal SQL statements in session.
3764 SQL statements in session.

Now my questions - rather I will say what I think ,please confirm whether what I assume is true or not

a) Our total extract (whose tkprof summary is shown above) ran for around 170 minutes, which comes to 10200 seconds.

From the tkprof output :

Total elapsed time for non recursive SQLs : 73.15
Total elapsed time for recursive SQLs : 10195.44

Sum of total wait time for db file scattered read, db file sequential read etc : around 2500 seconds

10200 should be roughly equal to 73.15+ 10195.44+2500

In other words : Total extract time should be roughly equal to the elapsed times for recursive and non-recursive SQLs plus sum of total waited time ?

b) Is CPU wait included in the wait events for a 9i tkprof ?

thanks
Anto



Tom Kyte
March 01, 2006 - 10:34 am UTC

a) no, 10200 should be roughly equal to the sum of:

Total elapsed time for non recursive SQLs : 73.15
Total elapsed time for recursive SQLs : 10195.44


which it is. Wait events are already accounted for in the elapsed time.


b) no, that is an "unrecordable event", not something you can really capture.
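The arithmetic in part (a) can be checked quickly - since the waits are already inside "elapsed", the two elapsed totals alone should roughly cover the 170-minute run:

```python
# Figures from the tkprof OVERALL TOTALS above; waits are already
# included in the elapsed columns, so they are not added separately.
non_recursive_elapsed = 73.15
recursive_elapsed = 10195.44
run_seconds = 170 * 60          # ~10200 seconds of wall-clock time

accounted = non_recursive_elapsed + recursive_elapsed
print(round(accounted, 2))      # 10268.59 - roughly the whole run
```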


explain shown only for SELECTs

A reader, March 01, 2006 - 10:31 am UTC

Hi,

While using tkprof, the output file shows the actual explain plan only for the SELECTs. No explain plans are shown for INSERTs/UPDATEs in the tkprof output. Is there any way we can get the actual explain plan used for INSERTs/UPDATEs in the tkprof output?

thanks
Apraim

Tom Kyte
March 01, 2006 - 10:35 am UTC

make sure the cursors are closed. I suspect that the insert/update cursors are not closed, hence the stat records do not appear in the trace (or you stopped tracing BEFORE closing them).

Easiest thing - close the session that is tracing and then everything will be there.

total waits,

A reader, March 01, 2006 - 12:08 pm UTC

I thought the sum of the total waits of all the wait events was almost equal to the elapsed time (in the tkprof report).

But it doesn't look like that. What does the time which is the difference between the elapsed time and the sum of total waits represent?

Thanks,

Tom Kyte
March 01, 2006 - 1:49 pm UTC

no, that is wrong.

elapsed time is already inclusive of all CPU + waits

in theory

cpu time + wait time + (time spent waiting for cpu) = elapsed time.
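Applied to the 10046 summary posted earlier in this thread (223.09 s elapsed, 85.83 s CPU, 74.75 s of recorded waits), the leftover term - roughly the time spent queued for the CPU, plus measurement error - works out as:

```python
# elapsed = cpu + recorded waits + (time spent waiting for the cpu)
elapsed = 223.09
cpu = 85.83
waits = 68.45 + 6.30   # db file sequential read + latch free

unaccounted = elapsed - cpu - waits
print(round(unaccounted, 2))   # 62.51 seconds not attributed to CPU or waits
```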



elapsed time,

A reader, March 01, 2006 - 2:32 pm UTC

That was clear. thanks very much.

If time spent for CPU is a big number, is there a way to reduce that at the query level or at the application/design level?

Tom Kyte
March 02, 2006 - 8:34 am UTC

yes there probably is.

Is there a single answer to that?

No, of course there is not.

Matching the timing of the extract with the timings in tkprof output file

Anto, March 01, 2006 - 2:52 pm UTC

Thanks for the clarification


To the previous "reader"

A reader, March 01, 2006 - 2:54 pm UTC

Add more CPUs and execute in parallel ;-)



Tom Kyte
March 02, 2006 - 8:40 am UTC

but that would be a way to likely INCREASE the total cpu used ;)

CPU usage

A reader, March 02, 2006 - 12:44 pm UTC

But by having multiple CPUs and executing large operations in parallel (assuming most of the operations are ones that can be parallelized, like full table scans), the total execution time is most likely to come down - right?

Tom Kyte
March 02, 2006 - 1:18 pm UTC

but it would not decrease the time spent on CPU (was my point)

it might reduce the elapsed time
it might increase the elapsed time
it might not have any effect on the elapsed time

and we don't know that they have a large operation :)

In short - not really enough information to give a terse answer, you would need pages of explanation to get it all across.

Interpretation of "WAIT #0:"

Mathew Butler, March 02, 2006 - 12:59 pm UTC

I'm just investigating the performance of a client-server application that pulls a lot of data across the network in order to cache it locally in the client. I'm encountering a significant wait against 'SQL*Net message from client'.

Above you say:

1) the wait is wait not associated with any cursor.

log file sync = wait during "commit". it is associated with your

XCTEND rlbk=0, rd_only=0

record (commit) right above. The next two waits are also not associated with a
cursor, this for example:

WAIT #0: nam='SQL*Net message from client' ela= 9924 p1=1413697536 p2=1 p3=0

just means you "sat there after you committed and did nothing for a while"

My comments:

Here is a 10046 cut-and-paste of the section that I am interested in. The process takes a total of approx 25 minutes to pull across the data and manipulate it ready for display.

XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 26253 p1=2001 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
*** 2006-03-02 17:28:10.000
WAIT #0: nam='SQL*Net message from client' ela= 1117511861 p1=1413697536 p2=1 p3=0

I see a few commit lines like the above with small (acceptable) 'SQL*Net message from client' ela= values. I then see the one above.

So, if I understand correctly, the session commits and then waits (doing some client processing) for a considerable amount of time. This time is associated with the commit marker (XCTEND rlbk=0, rd_only=0)?

Best Regards,

Tom Kyte
March 02, 2006 - 1:21 pm UTC

the session commits (xctend)

the client who was committing waited a bit of time for the log file sync

then the server waited a tiny bit of time sending the data (the commit was OK data) to the client (ela=3)

then the server sat and waited for ela=1117511861 whilst the client fiddled with whatever it was doing.

difference between

A reader, March 07, 2006 - 9:39 am UTC

Hi Tom,

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 3 0.03 0.67 6 8 9 3
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.03 0.67 6 8 9 3

Misses in library cache during parse: 0
Misses in library cache during execute: 2

what is the difference between -
Misses in library cache during parse and
Misses in library cache during execute ?


I know "Misses in library cache during parse " means hard parse but what about "Misses in library cache during execute" ?


Tom Kyte
March 08, 2006 - 4:33 pm UTC

means the sql statement was aged out or otherwise invalidated after the parse. it was "auto-recompiled"


drop table t;
create table t ( x int );
create or replace procedure p
as
begin
    for x in ( select * from t )
    loop
        null;
    end loop;
end;
/
alter session set sql_trace=true;
exec p
exec p
alter system flush shared_pool;
exec p


will show:


Misses in library cache during parse: 1 <<<=== first parse after creating table
Misses in library cache during execute: 1 <<== the 3rd execute (after the flush) did that




automate tkprof,

A reader, March 09, 2006 - 8:24 am UTC

I have a need to automate the tkprof process whenever I trace a session on both Windows and Unix.

Example: I use dbms_system to enable the trace for some SID (e.g. 22). Under the UDUMP folder, a .trc file is generated.
At the end of session 22, the trace is turned off.

What I need is some job that runs tkprof on that .trc file and mails us the output.

Is it possible?

Thanks,

Tom Kyte
March 09, 2006 - 2:49 pm UTC

do you have access to my book Effective Oracle by Design?

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7115831027951 <code>
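For the Unix side, a rough sketch of the lookup half of such a job follows. The trace file name format and the mail command are platform- and version-specific assumptions, so verify both on your own system:

```sql
-- Sketch only: derive the probable trace file name for session 22.
-- File name formats differ across platforms/versions; verify on yours.
select p.value || '/' || lower(i.instance_name) || '_ora_' ||
       pr.spid || '.trc' as trace_file
  from v$parameter p, v$instance i, v$session s, v$process pr
 where p.name  = 'user_dump_dest'
   and s.sid   = 22
   and pr.addr = s.paddr;

-- a cron job (or dbms_scheduler external job) could then run, e.g.:
--   tkprof <trace_file> report.prf sys=no sort=prsela,exeela,fchela
--   mailx -s "tkprof report" dba@example.com < report.prf
```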

How to check if a trace event is set

Jay, May 31, 2006 - 10:53 am UTC

Hi, Tom,

Is there any way to find out whether we dynamically set a trace event or not, at the instance and session level, other than going through trace files in user_dump_dest? Sometimes we set trace events like '1652 trace ...' or '1427 trace ...' at the system level to capture certain SQL activity, but we may later forget whether we set it or not.

Thanks!

Tom Kyte
May 31, 2006 - 3:10 pm UTC

no, that is not exposed in the v$ views.

trace event set?

Mark A. Williams, May 31, 2006 - 3:45 pm UTC

Perhaps this little script on Jonathan's site might be helpful if you just want to determine if a trace event is set:

</code> http://www.jlcomp.demon.co.uk/tracetst.html <code>

The "read_event.sql" script is near the bottom.

- Mark

Tom Kyte
May 31, 2006 - 3:51 pm UTC

indeed, that'll do it for the *current* session - in re-reading the above, it might work for them.

Get events

Michel Cadot, May 31, 2006 - 3:50 pm UTC

Hi,

You can find out which events are set in your session with dbms_system.read_ev.

I use the following PL/SQL block:

set serveroutput on size 100000 format wrapped
declare
    event_level  number;
    cnt          pls_integer := 0;
begin
    dbms_output.put_line (' ');
    for i in 10000 .. 10999 loop
        sys.dbms_system.read_ev (i, event_level);
        if ( event_level > 0 ) then
            dbms_output.put_line ('Event '||to_char(i)||' set at level '||
                                  to_char(event_level));
            cnt := cnt + 1;
        end if;
    end loop;
    if cnt = 0 then
        dbms_output.put_line ('No event set');
    end if;
    dbms_output.put_line (' ');
end;
/

Regards
Michel

Re: Trace Event Set?

Jay, June 01, 2006 - 10:37 am UTC

Thanks Mark and Michel, but as Tom pointed out, dbms_system.read_ev only works for the current session.

I also tested setting sql_trace = true in the same session; it does give me level 1 for 10046 in 9i, but it returns level 0 in 10gR2.

event 10046 level 12

A reader, July 05, 2006 - 6:35 pm UTC

Tom,
I have a query that fails with ORA-03113 on the client and ORA-07445 on the server, every time after 8950 rows are selected. I have had a TAR open with Oracle Support for about a weekend and have not had any luck yet.

They wanted me to set event 10046 level 12, and I did.
I asked them about finding the exact row (fields) it is failing on.
They told me the event does not do that.

Question: how can I find the row (record) the query failed on?

Sean

Tom Kyte
July 08, 2006 - 8:46 am UTC

They told you....

the 10046 level 12 trace will give them the query text, the binds and everything that is traceable up to the point of failure.

Follow up about event 10046

Richard Tan, July 06, 2006 - 5:28 pm UTC

Hi Mr. Kyte,

Thank you very much for your previous help, which I appreciate a lot.

Recently we ran a moderately simple query and it does not work in production.

I remember that, due to nesting level or something similar, there was an

alter session set events '10046... that made the query run successfully. 10046 may not be the right event.

I cannot remember the rest.

Is there any set events that would make the query work?

The error is:
ERROR at line 1:
ORA-03113: end-of-file on communication channel

In the alert log:
ORA-07445: exception encountered: core dump [kkogtp()+5893] [SIGSEGV] [Address not mapped to object] [0x40] [] []

I thank you very much for your help.

Richard Tan

Tom Kyte
July 08, 2006 - 10:32 am UTC

eh? please utilize support for a 7445.

Interpreting raw trace file

Rahul, July 12, 2006 - 12:50 pm UTC

Tom,

We noticed that the session hung when trying to execute a packaged procedure. I enabled a 10046 trace and here is the relevant portion:

...
FETCH #16:c=0,e=22,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=188383669329
=====================
PARSING IN CURSOR #20 len=96 dep=1 uid=39 oct=3 lid=39 tim=188383669578 hv=13305
8167 ad='165bfbb0'
SELECT FN_VAL_LCK_MSG_CD FROM GTEMP_TCSLKVL WHERE CUS_SYS_NR = :B2 AND SLS_PLN_VER_TYP_CD = :B1
END OF STMT
PARSE #20:c=0,e=119,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=188383669569
BINDS #20:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=c000000300000001
size=56 offset=0
bfp=800000010016d6e0 bln=22 avl=05 flg=05
value=13298515
bind 1: dty=1 mxl=32(01) mal=00 scl=00 pre=00 oacflg=03 oacfl2=c000000300000001
size=0 offset=24
bfp=800000010016d6f8 bln=32 avl=01 flg=01
value="C"
EXEC #20:c=0,e=206,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=188383669902
FETCH #20:c=0,e=59,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=4,tim=188383669994
FETCH #13:c=0,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=188383670112
WAIT #1: nam='library cache pin' ela= 2941460 p1=-4611686005146523688 p2=-461168
6005186726192 p3=302
WAIT #1: nam='library cache pin' ela= 2947388 p1=-4611686005146523688 p2=-461168
6005186726192 p3=302
WAIT #1: nam='library cache pin' ela= 2939247 p1=-4611686005146523688 p2=-461168
6005186726192 p3=302
WAIT #1: nam='library cache pin' ela= 2943607 p1=-4611686005146523688 p2=-461168
6005186726192 p3=302
...

The table above (GTEMP_TCSLKVL) is a global temporary table with on commit preserve rows.

This is what I am trying to understand:

1. The WAIT events are for the select stmt displayed above, right? Or are they for the next thing being executed?

2. This metalink note
</code> https://metalink.oracle.com/metalink/plsql/f?p=130:14:7810075377222815638::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,34579.1,1,1,1,helvetica <code>
describes P3 as
...
In Oracle 9.0 - 9.2 inclusive the value is 100 * Mode + Namespace.

Where:

Mode is the mode in which the pin is wanted. This is a number thus:

* 2 - Share mode
* 3 - Exclusive mode


Namespace is just the namespace number of the namespace in the library cache in which the required object lives:

* 0 SQL Area
* 1 Table / Procedure / Function / Package Header
* 2 Package Body
* 3 Trigger
* 4 Index
* 5 Cluster
* 6 Object
* 7 Pipe
* 13 Java Source
* 14 Java Resource
* 32 Java Data
...

So p3=302 means EXCLUSIVE MODE + PACKAGE BODY. Is this the package body of the package this stmt is called from? Or is it the next one?

Tom Kyte
July 12, 2006 - 3:54 pm UTC

the waits are associated with #1, look up in the trace file, what is the last parsed statement for #1.

Re: Interpreting raw trace file

Rahul, July 12, 2006 - 4:44 pm UTC

Thanks for the quick response Tom.

The last #1 I see in the file is the anonymous block we used to call the packaged procedure:
...
PARSING IN CURSOR #1 len=671 dep=0 uid=39 oct=47 lid=39 tim=188381000658 hv=1864
538309 ad='18a91690'
DECLARE
<...snipped...>
BEGIN
my_package.my_procedure(<...some_parameters..>);
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255
);
RAISE;
END;
END OF STMT
PARSE #1:c=0,e=674,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=188381000640
...

So is the WAIT 'library cache pin' occurring on the "my_package" package above? I tried using P1 and P2 from the WAIT info to query the X$ views, but I do not have access to them.

Any ideas on how I can simulate this situation?

Thanks,
Rahul

Tom Kyte
July 12, 2006 - 5:44 pm UTC

can you get access to them?

Re: Interpreting raw trace file

Rahul, July 12, 2006 - 9:31 pm UTC

Thanks Tom.

I have requested the DBAs to run the query:
SELECT kglnaown "Owner", kglnaobj "Object"
FROM x$kglob
WHERE kglhdadr='&P1RAW'

I neglected to mention above that the issue was resolved after a few developers logged off and a batch process ended. I am trying to reproduce it again, so that I can get support involved.

1. Will the result of the query still be relevant for the post-mortem analysis?
2. Is P1RAW the same as P1 in the trace file?



Tom Kyte
July 13, 2006 - 7:44 am UTC

what were the developers doing - compiling stuff?

Re: Interpreting raw trace file

Rahul, July 13, 2006 - 10:02 am UTC

Yes. But they basically stopped when we hit this issue.

Tom Kyte
July 13, 2006 - 12:57 pm UTC

good, never compile in an active production database - think about it.

Re: Interpreting raw trace file

Rahul, July 13, 2006 - 10:27 am UTC

The query against the x$ view was run by the DBAs:

SELECT kglnaown "Owner", kglnaobj "Object"
FROM x$kglob
WHERE kglhdadr= '-4611686005146523688'
2 3 4 ;

no rows selected


Tom Kyte
July 13, 2006 - 12:58 pm UTC

that is not an address in hex.

Re: Interpreting raw trace file

Rahul, July 13, 2006 - 2:21 pm UTC

Thanks Tom.

This was not a Production issue. This was on the development database. If it was only due to developers compiling, I don't need to investigate further since that won't happen in Production. I just wanted to ensure that it was not due to some other process running (e.g. the background batch process).

The metalink note says P1RAW should be used. But I don't have that in the trace file. Do I just convert the P1 in the trace file to hex to get P1RAW?

Tom Kyte
July 13, 2006 - 5:10 pm UTC

yup, that is the decimal representation
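As a sketch using the value from the trace excerpt above: the trace prints the 64-bit address as a signed decimal, so add 2^64 before converting to hex with TO_CHAR, then feed the result back into the x$kglob query:

```sql
-- Convert the signed decimal p1 from the trace file to the hex string
-- that x$kglob.kglhdadr expects (assumes a 64-bit address).
select to_char( -4611686005146523688
                + 18446744073709551616,      -- + 2^64
                'FMXXXXXXXXXXXXXXXX' ) as p1raw
  from dual;

-- then re-run the lookup with that hex value:
--   select kglnaown "Owner", kglnaobj "Object"
--     from x$kglob
--    where kglhdadr = '&P1RAW';
```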

Explain Plan

CT VELU, July 13, 2006 - 4:48 pm UTC

Hi Tom
Following is the query. I ran it for two different values. The first one came back much faster than the second. I did a trace and the tkprof output is here.
Version: Oracle 10.1.0
SELECT count(*) cnt
FROM VP_PRE_POST_DTL a
WHERE a.vp_load_nbr = :nbr
AND EXISTS (
SELECT 'x'
FROM VP_SALES_DTL d
WHERE d.vp_cst_nbr = a.vp_cst_nbr
AND d.dst_cst_code = a.dst_cst_code
AND d.inv_no = a.inv_no
AND d.inv_dt = a.inv_dt
AND d.ship_dt = a.ship_dt
AND d.vp_item_nbr = a.vp_item_nbr
AND d.posted_cases = a.qty
AND d.ext_net_amt = a.ext_net_amt
AND d.vp_dst_nbr = a.vp_dst_nbr)

case 1:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.36 1.88 2154 5406 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.36 1.88 2154 5406 0 1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 82

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=5406 pr=2154 pw=0 time=1880064 us)
69 NESTED LOOPS SEMI (cr=5406 pr=2154 pw=0 time=602112 us)
211 TABLE ACCESS FULL VP_PRE_POST_DTL (cr=2162 pr=1937 pw=0 time=239616 us)
69 TABLE ACCESS BY GLOBAL INDEX ROWID VP_SALES_DTL PARTITION: ROW LOCATION ROW LOCATION (cr=3244 pr=217 pw=0 time=1009664 us)
4389 INDEX RANGE SCAN VP_SALES_DTL_NDX1 (cr=1103 pr=47 pw=0 time=353052 us)(object id 75923)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 126 0.02 0.77
db file sequential read 217 0.01 0.79
SQL*Net message from client 2 0.00 0.00
********************************************************************************

case2:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.01 0.02 0 0 0 0
Fetch 2 162.70 544.15 82638 2815753 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 162.73 544.18 82638 2815753 0 1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 82

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=2815753 pr=82638 pw=0 time=544152576 us)
41590 NESTED LOOPS SEMI (cr=2815753 pr=82638 pw=0 time=496831176 us)
41614 TABLE ACCESS FULL VP_PRE_POST_DTL (cr=2162 pr=1501 pw=0 time=1499834 us)
41578 TABLE ACCESS BY GLOBAL INDEX ROWID VP_SALES_DTL PARTITION: ROW LOCATION ROW LOCATION (cr=2813591 pr=81137 pw=0 time=542641152 us)
4753655 INDEX RANGE SCAN VP_SALES_DTL_NDX1 (cr=831862 pr=20599 pw=0 time=192169732 us)(object id 75923)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 99 0.04 1.01
db file sequential read 81142 0.24 377.14
SQL*Net message from client 2 0.01 0.01
********************************************************************************


From looking at the plan, I have the following questions:

1. I infer that in case 2 we are spending more time on db file sequential read. I believe this is on the table in the subquery. Is there any way I can reduce it? The inner table is partitioned on the yr column and has seven partitions.
2. Why am I seeing a time difference on the FTS of the first table? I thought it should take the same time. The cr (I believe this is consistent reads) and pr columns for this FTS are almost the same in both cases.
3. In this type of query, is it common to expect this kind of time difference (9 minutes versus less than a minute) because of the difference in the number of rows from the outer table and because of nested loops?
4. If the answer to question 3 is no, how can I approach this problem to find an optimal solution?

Thank you very much for your time.
--CT


Tom Kyte
July 13, 2006 - 6:23 pm UTC

how can we compare these? they look to be returning different things and since I cannot see the second query at all....

it isn't just the time spent doing IO, look at the cpu times, apples and oranges here all together.



Same query

CT VELU, July 13, 2006 - 8:43 pm UTC

Hi Tom
Thanks for your response. I ran the same query a second time with a different value for the bind variable :nbr.
For load number 1 I got a count of 69 records (first tkprof plan) and for load number 2 I got a count of 41590 records (second tkprof).

SELECT count(*) cnt
FROM VP_PRE_POST_DTL a
WHERE a.vp_load_nbr = :nbr
AND EXISTS (
SELECT 'x'
FROM VP_SALES_DTL d
WHERE d.vp_cst_nbr = a.vp_cst_nbr
AND d.dst_cst_code = a.dst_cst_code
AND d.inv_no = a.inv_no
AND d.inv_dt = a.inv_dt
AND d.ship_dt = a.ship_dt
AND d.vp_item_nbr = a.vp_item_nbr
AND d.posted_cases = a.qty
AND d.ext_net_amt = a.ext_net_amt
AND d.vp_dst_nbr = a.vp_dst_nbr)

The same query, run twice for two different load numbers. I hope this time I have communicated properly (I didn't paste the query for the second tkprof because it was the same; next time I will make sure to do so).
I would be happy if you can clarify my questions.
Thank you very much
CT


Tom Kyte
July 14, 2006 - 8:12 am UTC

it would seem to indicate that for the first bind value, there were very few records such that vp_load_nbr = :nbr (borne out by the row source plan:

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=5406 pr=2154 pw=0 time=1880064 us)
69 NESTED LOOPS SEMI (cr=5406 pr=2154 pw=0 time=602112 us)
211 TABLE ACCESS FULL VP_PRE_POST_DTL (cr=2162 pr=1937 pw=0 time=239616
69 TABLE ACCESS BY GLOBAL INDEX ROWID VP_SALES_DTL PARTITION: ROW
4389 INDEX RANGE SCAN VP_SALES_DTL_NDX1 (cr=1103 pr=47 pw=0 time=353052

211 rows from vp_pre_post_dtl...


and in the second, there was, well, more:

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=2815753 pr=82638 pw=0 time=544152576 us)
41590 NESTED LOOPS SEMI (cr=2815753 pr=82638 pw=0 time=496831176 us)
41614 TABLE ACCESS FULL VP_PRE_POST_DTL (cr=2162 pr=1501 pw=0 time=1499834
41578 TABLE ACCESS BY GLOBAL INDEX ROWID VP_SALES_DTL PARTITION: ROW
4753655 INDEX RANGE SCAN VP_SALES_DTL_NDX1 (cr=831862 pr=20599 pw=0


but the real problem seems to be that the vp_sales_dtl_ndx1 index being used is not discriminating enough - you don't say what columns it is on - but if the goal is "make this query better", having an index that covers many more of the columns in vp_sales_dtl would make sense


Index

CT VELU, July 14, 2006 - 8:47 am UTC

Hi Tom
Thank you very much for the response. From your answer, I assume that under the given conditions it is normal to see this kind of time difference. I thought it should not be.

The index is on the columns VP_CST_NBR, YR, MTH, VP_DST_NBR, VP_ITEM_NBR. I am looking forward to hearing your suggestion.

Thank you very much
CT

Tom Kyte
July 14, 2006 - 8:58 am UTC

"add more columns to the index so that you can avoid going to the table so often to discover the row doesn't match what you want"

see this tells me:

41578 TABLE ACCESS BY GLOBAL INDEX ROWID VP_SALES_DTL PARTITION: ROW
4753655 INDEX RANGE SCAN VP_SALES_DTL_NDX1 (cr=831862 pr=20599 pw=0

you found 4,753,655 rows in the index - using the columns you listed above. But when we went to the table - a mere 41,578 of those 4.7 MILLION hits were found to really satisfy the entire predicate of:

WHERE d.vp_cst_nbr = a.vp_cst_nbr
AND d.dst_cst_code = a.dst_cst_code
AND d.inv_no = a.inv_no
AND d.inv_dt = a.inv_dt
AND d.ship_dt = a.ship_dt
AND d.vp_item_nbr = a.vp_item_nbr
AND d.posted_cases = a.qty
AND d.ext_net_amt = a.ext_net_amt
AND d.vp_dst_nbr = a.vp_dst_nbr)

so - either

a) index something a tad more selective for your predicate - could be an entirely different set of columns for all I know (you know your data patterns, I don't)

b) add additional columns to your existing 4 column index in order to reduce the 4.7MILLION to some lower number... a variation on a) really
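For instance, option (b) might look like the following. The column choice is purely illustrative - only you know which of the predicate's columns are actually selective for your data:

```sql
-- Hypothetical: extend the index with more of the predicate's columns
-- so most non-matching rows are eliminated in the index range scan
-- itself, instead of after 4.7 million table accesses by rowid.
create index vp_sales_dtl_ndx2
    on vp_sales_dtl ( vp_cst_nbr, vp_dst_nbr, vp_item_nbr,
                      inv_no, inv_dt, posted_cases, ext_net_amt );
```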

Thanks a lot. !!

CT VELU, July 14, 2006 - 9:03 am UTC


event 10046 level 12

Deba, July 17, 2006 - 11:47 am UTC

Hi,

I can see that a process (corresponding to a database session) is taking a lot of CPU. So I ran a 10046 trace against the process and got the output. Is it possible to find the name of the program unit that was running from this trace file? Is it possible to use any other technique to trap the name of the program unit?

Regards
Deba

Tom Kyte
July 17, 2006 - 3:13 pm UTC

do you mean "plsql program unit"? if so - what version of Oracle- 10gr2 - maybe.

event 10046 level 12

Deba, July 18, 2006 - 7:29 am UTC

Hi,

Thanks for the reply. Yes, I am talking about the PL/SQL program unit. I can see a module hash value, but I don't know what it actually is because I am not using dbms_application_info. Please help me, because I need to find out which PL/SQL program unit is causing so much load on the CPU.

Regards
Deba

Tom Kyte
July 18, 2006 - 8:45 am UTC

hmm, interesting, I see no version information. as I said - 10gr2 - maybe, before that - you have the sql - maybe a query or two against dba_source would turn up something useful (peek around look for the query in the code)

event 10046 level 12

Deba, July 18, 2006 - 7:30 am UTC

Hi,

Sorry I forgot to mention the version . I am using oracle 9i rel 2.

Thanks
Deba

Tom Kyte
July 18, 2006 - 8:45 am UTC

try querying dba_source.

in the trace file, you should see the parse of the procedure call as well

begin p; end;

for example - in order to run plsql, the client application has to parse the plsql call itself
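A sketch of that dba_source search - the fragment to search for is whatever distinctive text you captured from the trace file (the literal below is only a placeholder):

```sql
-- Find stored PL/SQL units containing a distinctive piece of the SQL
-- seen in the trace.  Replace the placeholder with your own fragment.
select owner, name, type, line, text
  from dba_source
 where upper(text) like '%SOME_DISTINCTIVE_TABLE_NAME%'
 order by owner, name, type, line;
```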

event 10046 level 12

deba, July 18, 2006 - 10:59 am UTC

Hi,

Is there any other alternative? In the trace file I can't see any procedure call.

Here is the probable scenario: there are two program units, say procedure x and procedure y. x calls y, and y is a huge program. I started the trace when y was already running, so I will not get any statement like that (for the call of y from x). In this case, how can I find out the name of the program unit?

Thanks
Deba

Tom Kyte
July 19, 2006 - 8:10 am UTC

look in all_source.

Row source operation

CT VELU, July 26, 2006 - 10:04 am UTC

Hi Tom
I am tracing a session with event 10046 level 8. In the tkprof output I am unable to find the row source operation for any of the statements. I did go through this page; you have mentioned that the cursor should be closed, or that after tracing the session one should exit SQL*Plus and then run tkprof. I followed the above by exiting SQL*Plus. Following is the script I used:

set time on
set timing on
alter session set max_dump_file_size=unlimited ;
alter session set tracefile_identifier='cttest';
alter session set events '10046 trace name context forever, level 8';
execute ct_test_2004(-1);
execute jim_test_2004(-1);
commit;
alter session set events '10046 trace name context off' ;
exit;

Oracle version: 10.1.0
Please let me know if there is anything I am missing. I never had this problem before while tracing like this.

Thank you very much for your time.

CT


Tom Kyte
July 26, 2006 - 11:35 am UTC

you turned off tracing before the cursors got closed

just exit (this has always been this way - it is not new), don't "off" the trace.
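So the script above would simply become the following - the same statements, minus the explicit 'context off'. Exiting SQL*Plus closes the cursors, which is what causes the STAT (row source operation) lines to be written to the trace:

```sql
set time on
set timing on
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier = 'cttest';
alter session set events '10046 trace name context forever, level 8';
execute ct_test_2004(-1)
execute jim_test_2004(-1)
commit;
exit
```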



Thank you

CT VELU, July 26, 2006 - 1:35 pm UTC

Yes, That helped me!!

inserts taking longer time

A reader, August 03, 2006 - 10:51 am UTC

Hi Tom,

any idea why the 9402 inserts took 304 seconds? Or is that normal? There is a rather high value of 3777431 in the query column for these 9402 inserts - does it mean Oracle has to do a rollback each time, or am I completely wrong here?

All the SQLs in the below output including insert are executed in a loop

TKPROF: Release 9.2.0.1.0 - Production on Thu Aug 3 10:37:04 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: getlog.htm
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

INSERT
into cmis_crm.mktg_activity_cust
( activity_id -- changed mktg_activity_id to activity_id
,cust_level_id
,contact_name_str
,contact_name_count
,ubs_person_str
,ubs_person_count
,ubs_person_team_str,dpz_code
)
values
( :b8
,:b7
,substr(:b6,1,length(:b6)-1)
,:b5
,substr(:b4,1,length(:b4)-1)
,:b3
,substr(:b2,1,length(:b2)-1),:b1
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 9402 0.00 304.51 70775 3777431 457638 9402
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9402 0.00 304.51 70775 3777431 457638 9402

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 125 (recursive depth: 2)
********************************************************************************

COMMIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 9401 0.00 0.62 0 0 9401 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9401 0.00 0.62 0 0 9401 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 125 (recursive depth: 2)
********************************************************************************

SELECT /*+ PARALLEL(ic,8) */
distinct
ic.activity_id -- changed mktg_activity_id to activity_id for br
,ic.cust_level_id,ic.dpz_code
from cmis_crm.mktg_activity_contact ic where activity_id in (select activity_id from cmis_crm.mktg_activity_contact
where contact_id in (select contact_id from
cmis_crm_stage.mktg_contact_admin_move_crm) and dpz_code in (select distinct dpz_code from cmis_crm_stage.mktg_contact_admin_move_crm) )

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 9401 0.00 0.05 63 0 0 9401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9401 0.00 0.05 63 0 0 9401

Misses in library cache during parse: 0
Parsing user id: 125 (recursive depth: 2)
********************************************************************************

SELECT distinct
ic.contact_id
,cd.last_name
,cd.first_name
from
cmis_crm.mktg_activity_contact ic
,cmis_crm.mktg_contact_detail cd
where ic.contact_id = cd.contact_id
and ic.activity_id = :b2 --- changed mktg_activity_id to activity_id
and ic.cust_level_id = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 9401 0.00 0.43 0 0 0 0
Fetch 9401 0.00 18.99 719 98405 0 18987
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18802 0.00 19.42 719 98405 0 18987

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 125 (recursive depth: 2)
********************************************************************************

SELECT distinct
ie.person_gpn
,spd.last_name
,spd.first_name
from
cmis_crm.mktg_activity_contact ic
,cmis_crm.mktg_activity_employee ie
,cmis_crm.mktg_person spd
where ic.activity_id = ie.activity_id --- changed mktg_activity_id to activity_id for both tables
and ie.person_gpn = spd.person_gpn ---- changed sbc_person_id to person_gpn
and ic.activity_id = :b2 --- changed mktg_activity_id to activity_id
and ic.cust_level_id = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 9401 0.00 0.54 0 0 0 0
Fetch 9401 0.00 0.77 35 120968 0 9401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18802 0.00 1.31 35 120968 0 9401

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 125 (recursive depth: 2)
********************************************************************************

SELECT
distinct td.team_name
from cmis_crm.mktg_team_def td,
cmis_crm.mktg_person_team op,
cmis_crm.mktg_person spd
where td.team_id = op.team_id
and op.person_gpn = spd.person_gpn
and spd.person_gpn in ( select distinct ie.person_gpn -- changed to person_gpn
from cmis_crm.mktg_activity_contact ic
,cmis_crm.mktg_activity_employee ie
,cmis_crm.mktg_person spd
where ic.activity_id = ie.activity_id -- changed mktg_activity_id to activity_id for both tables
and ie.person_gpn = spd.person_gpn --- changed sbc_person_id to person_gpn
and ic.activity_id = :b2 --- changed mktg_activity_id to activity_id
and ic.cust_level_id = :b1 )

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 9401 0.00 0.88 0 0 0 0
Fetch 9401 0.00 1.27 6 321069 0 31587
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18802 0.00 2.15 6 321069 0 31587

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 125 (recursive depth: 2)



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 47006 0.00 307.00 70775 3777431 467039 9402
Fetch 37604 0.00 21.09 823 540442 0 69376
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 84610 0.00 328.10 71598 4317873 467039 78778

Misses in library cache during parse: 0
Misses in library cache during execute: 1

6 user SQL statements in session.
0 internal SQL statements in session.
6 SQL statements in session.
********************************************************************************
Trace file: getlog.htm
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
6 user SQL statements in trace file.
0 internal SQL statements in trace file.
6 SQL statements in trace file.
6 unique SQL statements in trace file.
94127 lines in trace file.




Tom Kyte
August 03, 2006 - 11:09 am UTC

another way to look at this is

Hey, wow, a single insert takes only 0.03 seconds! Pretty cool.

this is what happens when you do things slow by slow (row by row).

An individual operation - fast
Thousands of them - not so fast

the fix?

turn thousands of individual operations into a single sql statement if possible.

If not, definitely employ BULK PROCESSING.


this is a classic example of slow by slow processing - look to your algorithms, procedural code is wrong - set based code is good.
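A sketch of both fixes with hypothetical table names (the real statements are the ones shown above); it assumes target_table has the same column structure as source_table:

```sql
-- Best: one set-based statement replaces thousands of row-by-row inserts.
insert into target_table ( c1, c2 )
select s.c1, s.c2
  from source_table s
 where s.flag = 'Y';

-- If procedural logic is unavoidable, bulk process in batches instead:
declare
    cursor c is select * from source_table where flag = 'Y';
    type t_tab is table of c%rowtype;
    l_tab t_tab;
begin
    open c;
    loop
        fetch c bulk collect into l_tab limit 500;
        forall i in 1 .. l_tab.count
            insert into target_table values l_tab(i);
        exit when c%notfound;
    end loop;
    close c;
    commit;   -- once, at the end - not inside the loop
end;
/
```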

Above tkprof out

A reader, August 03, 2006 - 10:54 am UTC

Or is it due to the commit inside the loop ?

Tom Kyte
August 03, 2006 - 11:10 am UTC

ouch, not only slow by slow but likely a transactional nightmare!

no, the commit is not causing the insert to be slow because the insert is in fact not slow. You just do it thousands of times.

insert slow

A reader, August 03, 2006 - 3:12 pm UTC

But compared to the other selects inside the loop, it is in fact very slow - maybe an insert involves much more overhead than a select?

Tom Kyte
August 03, 2006 - 4:58 pm UTC

well, there is this thing about maintaining indexes, sure.

and something is wrong there - no cpu times reported at all

A reader, August 04, 2006 - 9:35 am UTC

True - I did not notice that the CPU time is 0.0 in all the stats, including the total.

How to trace the records which are valid but not taken into the output?

Gowtham Sen., January 23, 2007 - 12:03 pm UTC

Dear Tom,

I am very much impressed with your posts. They are very helpful.
Here I came to know about trace event 10046 and tkprof, but I am not familiar with tkprof.
I read the following document by Rittman:
http://www.rittman.net/work_stuff/tracing_owb_mappings_pt1.htm
I am using Oracle Warehouse Builder 10G R1.
But I feel, it may solve my problem. My problem scenario is as follows.

Here I would like to know how to trace the records which are valid as per business rules, but not counted in the output due to some functional errors, as follows.

For example a variable contains value Region = "R01".
So as per the rule, we need to retrieve the number 01.
I implemented it as to_number( substr( Region, 2 ) )

Unfortunately, in one record I got the field data as "RRR".
So as per the rule, if I apply that logic, it will return an error/warning.

So this record is not counted in the output.

Here I would like to capture these types of records in a table or a file while executing the package.


Is it possible using Oracle Warehouse Builder or Oracle?

Thank you,
Regards,

Gowtham Sen.

Query

ARU, March 05, 2007 - 4:55 pm UTC

Hi Tom,
I can trace a session using the following, as you suggested:
alter session set events '10046 trace name context forever, level 12';

Can I use the same with alter system - alter system set events '10046 trace name context forever, level 12'; - to trace at the system level?
When I try, no application SQL is found in the trace file.

What am I doing wrong? Is it that I should not be doing it system-wide at all? Please help.....

Regards,
Aru.

Tom Kyte
March 05, 2007 - 8:43 pm UTC

does it work for new sessions, after you set the event?

I would never ever recommend, suggest or even try tracing the entire system though, why would you do this?
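For completeness - rather than tracing the entire system, a single session can usually be targeted. A sketch (the sid/serial# values below are placeholders; dbms_monitor is available in 10g and later):

```sql
-- find the session of interest
select sid, serial#, username from v$session where username = 'SCOTT';

-- 10g and later: the supported API, with waits and binds
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => true, binds => true);

-- ... let the session do its work ...

exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
```

Before 10g, the traditional (unsupported) route was dbms_system.set_ev against the same sid/serial#.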

Row Source operation in TKPROF

wat, April 13, 2007 - 11:48 am UTC


SELECT *
FROM
FIELD_VERSION WHERE OBJECT_ID = :B1 AND DAYTIME = (SELECT MAX(DAYTIME) FROM
FIELD_VERSION WHERE OBJECT_ID = :B1 AND DAYTIME <= :B2 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 975 0.14 0.13 0 1950 0 0
Fetch 1950 0.09 0.11 1 2925 0 975
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2925 0.23 0.24 1 4875 0 975

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 584 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.02 0.02
--------------------------------------------------------------------------------
Why am I not seeing the row source operation ..?
I did :

tkprof trcfile.trc trcfile.txt

Am I doing anything wrong? Please advise.

Thanks




Tom Kyte
April 13, 2007 - 2:14 pm UTC

you did not close the cursor in the application - the stat records are not written to the trace file until the cursor is closed.


select *
from (
select *
from field_version
where object_id = :b1
and daytime <= :b2
order by object_id, daytime DESC
)
where rownum = 1;

may be a more efficient approach to that query (index on object_id, daytime DESC)
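The supporting index Tom mentions could look like this (a sketch; the index name is made up, table and column names are taken from the query above):

```sql
-- index to let the "order by object_id, daytime desc ... rownum = 1"
-- rewrite walk straight to the newest qualifying row
create index field_version_idx on field_version ( object_id, daytime desc );
```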

missing Row operations

Wat, April 16, 2007 - 9:28 pm UTC


TKPROF: Release 9.2.0.7.0 - Production on Mon Apr 16 15:24:24 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: oil1rbid_ora_8958_MYSESSION.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
SELECT --d.OBJECT_ID,
max(to_char(d.DAYTIME, 'YYYY-MM-DD"T"HH24:MI:SS')) AS DAYTIME,
sum(d.PREC_THEOR_COND_RATE), sum(d.PREC_THEOR_GAS_RATE),sum(d.PREC_THEOR_NET_OIL_RATE),
--sum(d.PREC_THEOR_COND_MASS),sum(d.PREC_THEOR_WATER_MASS),
sum(d.PREC_THEOR_WATER_RATE)--,
--sum(d.PREC_THEOR_GAS_MASS),sum(d.PREC_THEOR_NET_OIL_MASS),sum(d.PREC_THEOR_GASLIFT_RATE),
--sum(d.PREC_THEOR_DILUENT_RATE)
FROM eckernel_cpi.DV_PWEL_DAY_PREC_DATA d
WHERE (d.DAYTIME>='01-jan-1997'
AND d.DAYTIME<'01-jan-1999')
AND (d.OBJECT_ID IN (
'162AED127C2C4171E0440003BA9B3A5F',
'162AED1286414171E0440003BA9B3A5F',
.
.
.------almost 800 values---------
.
'162AED128A4D4171E0440003BA9B3A5F',
'20B3F2D6118133E0E0440003BA9B3A5F'))

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 38.07 56.20 1 2 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 110.28 194.92 8972 4233419 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 148.35 251.13 8973 4233421 0 2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 578

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 4 0.00 0.00
SQL*Net message from client 4 975.13 982.91
db file sequential read 2051 0.22 48.97
db file scattered read 1641 0.11 3.88
********************************************************************************

Tom - I cannot see the row source operations when I run this query... I enabled session-level trace at level 8
as I want to see the wait events, because this query takes almost 8 mins to fetch... The logic performed is simple, but the where clause has almost 800 values to compare using IN... Is there any better approach to tune, or to build an index in any specific way?

Regards

Tom Kyte
April 17, 2007 - 9:52 am UTC

did you

a) log into sqlplus
b) enable trace
c) run query
d) exit sqlplus
e) run tkprof?

800 items in an inlist, classically bad....
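One common alternative to a huge hard-coded inlist is to load the ids once and join. A sketch (the temporary table name is made up; the select is abbreviated for illustration):

```sql
-- load the ~800 ids into a global temporary table once per transaction
create global temporary table ids_gtt ( object_id varchar2(32) )
on commit delete rows;

insert into ids_gtt values ('162AED127C2C4171E0440003BA9B3A5F');
-- ... one (array) insert per id ...

-- then join / subquery instead of the 800-item IN list
select max(to_char(d.daytime, 'YYYY-MM-DD"T"HH24:MI:SS')) as daytime
  from eckernel_cpi.dv_pwel_day_prec_data d
 where d.daytime >= to_date('01-jan-1997','dd-mon-yyyy')
   and d.daytime <  to_date('01-jan-1999','dd-mon-yyyy')
   and d.object_id in (select object_id from ids_gtt);
```

This also keeps the statement text stable, so it parses once instead of once per distinct list.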

800 values in inlist

A reader, May 07, 2007 - 11:48 am UTC

try using the hint no_expand or

give

alter session set "_no_or_expand"=true at the session level

tkprof output using 10046 level 8

parag j patankar, May 25, 2007 - 2:53 am UTC

Hi Tom,

I have generated tkprof output using 10046 in 9.2 database.

Rows Row Source Operation
------- ---------------------------------------------------
16 TABLE ACCESS BY INDEX ROWID TEST
16 INDEX RANGE SCAN X01TEST (object id 109041)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
16 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'TEST'
16 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'X01TEST' (UNIQUE)

I want to know why it is not showing cr, r, w, time etc. For example:

Rows Row Source Operation
------- ---------------------------------------------------
16 TABLE ACCESS BY INDEX ROWID TEST (cr=2300 r=2209 w=0 time=900 us)
..
..


thanks & regards
pjp
Tom Kyte
May 26, 2007 - 11:54 am UTC

what is your session/system setting for statistics_level?

show parameter statistics_level
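Tom's check, and the session-level change it implies, would look like this (a sketch - as the follow-up below notes, on some 9.2 patch sets the row source statistics are not collected regardless of this setting):

```sql
-- see the current value
show parameter statistics_level

-- raise it for this session, then re-trace and re-run the statement
alter session set statistics_level = all;
alter session set events '10046 trace name context forever, level 12';
```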

Rowsource execution statistics

Jonathan Lewis, May 26, 2007 - 3:44 pm UTC

Parag,

This feature was enabled by default in 9.2.0.2 through to 9.2.0.5 when you switched on sql_trace (or event 10046); but the overhead can be enormous, so Oracle stopped enabling it in 9.2.0.6.

It's back in 10g (at least 10.2, I don't remember if it came back in 10.1), where the statistics are sampled (1 in 128 by default) to keep the overhead down.

There are some notes at: http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/




More Execution time taken than the shown time by dbms_xplan

Vk, July 12, 2007 - 12:03 pm UTC

Hi Tom,

Enclosed is the SQL QUERY along with the execution plan using dbms_xplan.display_cursor().

Tables hold :

TEMP : 2.5 million
Impression_data : 70 million
Ad_Match : 900
Click_data : 1000000
Invalid_clk_Report_data : 258000

The plan suggests that the execution of the query will be over in 4 minutes; however, the query doesn't finish even in 50 minutes.
What could be wrong? Where should I look?

I appreciate your answer and findings.

Select /*+ Parallel(Step2 16) */
Category, count(AdMatchId) cnt_impr_admatch, Sum(Ads_Ranked) Cnt_Ranks_AdMatch, count(Click_tag) cnt_clicks_admatch, sum(AdCost) cost_admatch
from
(
Select /*+ Parallel(X 16) Parallel(Y 16) Parallel(Z 16) Parallel(ZA 16) USE_HASH(x y) USE_HASH(z) USE_HASH(ZA) */
Category,
Y.ad_match_id AdMatchId,
Ads_Ranked ,
Click_tag,
Case When Z.click_tag is NOT NULL then Y.Ad_Cost Else 0 END AdCost
from
temp X,
impression_data Y ,
(
Select /*+ Parallel(a 16) */ * from click_data a Where Insert_date >= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') AND
NOT EXISTS
(Select /*+ Parallel(b 16) */ Null from Invalid_Click_report_data b Where Insert_date >= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') AND
a.click_tag = b.Click_tag)
) Z,
(Select /*+ Parallel(AdMatch 16) */ Ad_Match_Id, Match_Terms from Ad_Match AdMatch Where Match_terms like 'category_%'
AND UNSYNC_DATE <= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss')) ZA
Where
Invalid is NULL AND
Y.Insert_date >= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') AND Y.Insert_date < to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') + 1 + 1/24 AND
X.RTag = Y.Request_tag AND
Y.Impression_tag = Z.Impression_tag(+) AND
y.Ad_Match_id = ZA.Ad_Match_Id AND
X.Category = ZA.Match_Terms
) Step2
Group by Category
/
select * from table(dbms_xplan.display_cursor('529db1yazvtp9'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 529db1yazvtp9, child number 0
-------------------------------------
Select /*+ Parallel(Step2 16) */ Category, count(AdMatchId) cnt_impr_admatch, Sum(Ads_Ranked) Cnt_Ranks_AdMatch, count(Click_tag)
cnt_clicks_admatch, sum(AdCost) cost_admatch from ( Select /*+ Parallel(X 16) Parallel(Y 16) Parallel(Z 16) Parallel(ZA 16)
USE_HASH(x y) USE_HASH(z) USE_HASH(ZA) */ Category, Y.ad_match_id AdMatchId, Ads_Ranked , Click_tag, Case When Z.click_tag is NOT
NULL then Y.Ad_Cost Else 0 END AdCost from temp X, impression_data Y , ( Select /*+ Parallel(a 16) */ * from click_data a Where
Insert_date >= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') AND NOT EXISTS (Select /*+ Parallel(b 16) */
Null from Invalid_Click_report_data b Where Insert_date >= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss') AND
a.click_tag = b.Click_tag) ) Z, (Select /*+ Parallel(AdMatch 16) */ Ad_Match_Id, Match_Terms from Ad_Match AdMatch Where Match_terms
like 'category_%' AND UNSYNC_DATE <= to_date('04/07/2007 07:59:59','DD/MM/YYYY hh24:mi:ss')) ZA Where

Plan hash value: 3195496468

---------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT|
---------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 21571 (100)| | | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :Q1006 | 1 | 273 | 21571 (4)| 00:04:19 | | | Q1,06 | P->S |
| 3 | SORT GROUP BY | | 1 | 273 | 21571 (4)| 00:04:19 | | | Q1,06 | PCWP |
| 4 | PX RECEIVE | | 1 | 273 | 21571 (4)| 00:04:19 | | | Q1,06 | PCWP |
| 5 | PX SEND HASH | :Q1005 | 1 | 273 | 21571 (4)| 00:04:19 | | | Q1,05 | P->P |
| 6 | SORT GROUP BY | | 1 | 273 | 21571 (4)| 00:04:19 | | | Q1,05 | PCWP |
|* 7 | HASH JOIN OUTER | | 1 | 273 | 21570 (4)| 00:04:19 | | | Q1,05 | PCWP |
| 8 | PX RECEIVE | | 1 | 253 | 20860 (4)| 00:04:11 | | | Q1,05 | PCWP |
| 9 | PX SEND HASH | :Q1003 | 1 | 253 | 20860 (4)| 00:04:11 | | | Q1,03 | P->P |
|* 10 | HASH JOIN | | 1 | 253 | 20860 (4)| 00:04:11 | | | Q1,03 | PCWP |
| 11 | PX RECEIVE | | 1 | 199 | 1081 (2)| 00:00:13 | | | Q1,03 | PCWP |
| 12 | PX SEND BROADCAST | :Q1001 | 1 | 199 | 1081 (2)| 00:00:13 | | | Q1,01 | P->P |
|* 13 | HASH JOIN | | 1 | 199 | 1081 (2)| 00:00:13 | | | Q1,01 | PCWP |
| 14 | PX RECEIVE | | 1 | 53 | 0 (0)| | | | Q1,01 | PCWP |
| 15 | PX SEND BROADCAST | :Q1000 | 1 | 53 | 0 (0)| | | | Q1,00 | P->P |
| 16 | PX BLOCK ITERATOR | | 1 | 53 | 0 (0)| | | | Q1,00 | PCWC |
|* 17 | TABLE ACCESS FULL| TEMP | 1 | 53 | 0 (0)| | | | Q1,00 | PCWP |
| 18 | PX BLOCK ITERATOR | | 180K| 25M| 1080 (2)| 00:00:13 | | | Q1,01 | PCWC |
|* 19 | TABLE ACCESS FULL | AD_MATCH | 180K| 25M| 1080 (2)| 00:00:13 | | | Q1,01 | PCWP |
| 20 | PX BLOCK ITERATOR | | 65M| 3365M| 19713 (4)| 00:03:57 | 12 | 13 | Q1,03 | PCWC |
|* 21 | TABLE ACCESS FULL | IMPRESSION_DATA | 65M| 3365M| 19713 (4)| 00:03:57 | 12 | 13 | Q1,03 | PCWP |
| 22 | PX RECEIVE | | 460K| 9003K| 709 (1)| 00:00:09 | | | Q1,05 | PCWP |
| 23 | PX SEND HASH | :Q1004 | 460K| 9003K| 709 (1)| 00:00:09 | | | Q1,04 | P->P |
| 24 | VIEW | | 460K| 9003K| 709 (1)| 00:00:09 | | | Q1,04 | PCWP |
|* 25 | HASH JOIN RIGHT ANTI | | 460K| 29M| 709 (1)| 00:00:09 | | | Q1,04 | PCWP |
| 26 | PX RECEIVE | | 1 | 25 | 8 (13)| 00:00:01 | | | Q1,04 | PCWP |
| 27 | PX SEND BROADCAST | :Q1002 | 1 | 25 | 8 (13)| 00:00:01 | | | Q1,02 | P->P |
| 28 | PX BLOCK ITERATOR | | 1 | 25 | 8 (13)| 00:00:01 | 1 | 27 | Q1,02 | PCWC |
|* 29 | TABLE ACCESS FULL | INVALID_CLICK_REPORT_DATA | 1 | 25 | 8 (13)| 00:00:01 | 1 | 27 | Q1,02 | PCWP |
| 30 | PX BLOCK ITERATOR | | 460K| 18M| 700 (1)| 00:00:09 | 12 | 27 | Q1,04 | PCWC |
|* 31 | TABLE ACCESS FULL | CLICK_DATA | 460K| 18M| 700 (1)| 00:00:09 | 12 | 27 | Q1,04 | PCWP |
---------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

7 - access("Y"."IMPRESSION_TAG"="Z"."IMPRESSION_TAG")
10 - access("X"."RTAG"="Y"."REQUEST_TAG" AND "Y"."AD_MATCH_ID"="AD_MATCH_ID")
13 - access("X"."CATEGORY"="MATCH_TERMS")
17 - access(:Z>=:Z AND :Z<=:Z)
filter(("INVALID" IS NULL AND "X"."CATEGORY" LIKE 'category_%'))
19 - access(:Z>=:Z AND :Z<=:Z)
filter(("MATCH_TERMS" LIKE 'category_%' AND "UNSYNC_DATE"<=TO_DATE('2007-07-04 07:59:59', 'yyyy-mm-dd hh24:mi:ss')))
21 - access(:Z>=:Z AND :Z<=:Z)
filter(("Y"."INSERT_DATE">=TO_DATE('2007-07-04 07:59:59', 'yyyy-mm-dd hh24:mi:ss') AND "Y"."INSERT_DATE"<TO_DATE('2007-07-05
08:59:59', 'yyyy-mm-dd hh24:mi:ss')))
25 - access("A"."CLICK_TAG"="B"."CLICK_TAG")
29 - access(:Z>=:Z AND :Z<=:Z)
filter("INSERT_DATE">=TO_DATE('2007-07-04 07:59:59', 'yyyy-mm-dd hh24:mi:ss'))
31 - access(:Z>=:Z AND :Z<=:Z)
filter("INSERT_DATE">=TO_DATE('2007-07-04 07:59:59', 'yyyy-mm-dd hh24:mi:ss'))






Alexander, November 16, 2007 - 5:53 pm UTC

Tom, can you please show me what command I can run to trace errors for ORA-01483 and get the bind values?

alter system set events='1483 trace name errorstack level 3';

That works, but I see no bind information. Since this stuff isn't documented, I can't find all the different options. Thanks.
Tom Kyte
November 21, 2007 - 11:03 am UTC

it probably happens on the client - hence, it'll never even get to the server. The client is "smart enough" to know "that'll never fit, stop doing that"

:-)

Jay, November 21, 2007 - 9:40 am UTC


Happy thanksgiving to one and all :-)

Jay

:-)

Jay, November 21, 2007 - 9:43 am UTC


Tom,

Happy Thanksgiving to you and all the other users of this forum.

Thanks for all your help and invaluable advice :-)


Alexander, November 21, 2007 - 11:15 am UTC

I agree, but in some cases that worked. I contacted support for some direction; they said using level 4 should do it, but we are having problems reproducing.

Would you mind taking a blind stab as to what I can tell the developers to look for based on this:

They update a table containing 4 clob columns. Attempting to update all 4 causes the bind error; reducing the number to two fixes the issue.

They are not "streaming" the clob data in, just using SQL. As far as I know, Java has no bind variable size limitation the way PL/SQL does.

I know I'm asking you to guess, I'm fine with that. I'm on 10.2.0.2 plus DST patches, I'm wondering if I'm hitting bug 6085625, but it's pretty vague.
Tom Kyte
November 20, 2007 - 1:42 pm UTC

I'd tell the developers:

write me the smallest, tiniest, standalone piece of java code that is 100% complete, yet so concise and small - even Tom can understand it.

And then post it here. and we'll have a look at it.

I mean, this should be TEENY TINY


java has the same limits as ANYTHING does as far as bind lengths, you can only bind things of "so much size" - regardless of the language.

by the way...

Jay, November 21, 2007 - 11:31 am UTC

What is wrong with the dates? It's not November 21st yet!!!!
Tom Kyte
November 20, 2007 - 1:51 pm UTC

:)

we fixed that - the clock drifted....

the admin fixed the drift of the time....

but was really eager for the Thanksgiving holiday to start so......

accidentally set the clock ahead by a day, but with the right time...

Alexander, November 28, 2007 - 3:32 pm UTC

Tom,

In response to your reply two posts up, they are using Hibernate so I can't even see what they are doing.

I've been trying to work with support on this, their service has been atrocious. That brings me here, I'm hoping you can provide some feedback on a few things.

1). I used

alter system set events='1483 trace name errorstack level 4';

It produced trace files with the errors.

But running

alter system set events '10046 trace name context forever, level 12';

does not. Do you think it's just a coincidence?

2). The trace I do have shows "No bind buffers allocated" - what does that mean?

3). It also shows "sharing failure(s)=8004000", do you know what that means?

4.) The trace shows

Bind#10
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=01 csi=31 siz=0 off=128
kxsbbbfp=b5f46d50 bln=22 avl=02 flg=01
value=###
An invalid number has been seen.Memory contents are :
Dump of memory from 0xB5F46D50 to 0xB5F46D52
B5F46D50 0C3D0101 [..=.]

This is not a clob bind, but a varchar2(50) column (string variable).

5.) Upgrading the Oracle JDBC driver from 10.2.0.1 to 10.2.0.3 seems to have resolved the issue, but we don't understand the problem or why this may have fixed it.

Do any bugs pop into your mind throughout your travels?

I'll take whatever input you can give on these questions, even a total guess. This is holding up a production deployment. Thanks.
Tom Kyte
November 29, 2007 - 8:17 am UTC

you have a bug in some bit of code that you cannot even see, that Oracle has nothing to do with, that you are using - and support has been "atrocious", hmm, interesting. You'd think your developers would be able to concoct a test case - they do know how to code to the database, do they not? They do know how to instrument code? I would hope so. I don't know why you said what you said - I can only say that, hey, the bind is probably never making it to the server.


1) i already told you what I thought - that the bind is never making it.

2) what it says. like I said, this is likely never getting to the server. It is all on the client.

3) without any sort of context, no.

4) no, the oacdty = 02, 02 is a NUMBER, the program is binding a NUMBER.

5) no, other than something quirky is happening in this automagical hibernate layer that no one has any sort of insight into.




It seems you have your issue solved - 10.2.0.3 is the current production release. Not sure what you want??



Alexander, November 29, 2007 - 9:29 am UTC

Yes, they have been terrible. I opened my SC ten days ago. Since then all they have done is say "upload a 10046 trace file" and "please update the request" on the day before Thanksgiving.

I'm not asking you or them to magically fix my problem. I wanted some help, ideas, direction, anything. You both have tons of experience to draw on. Now I get the feeling I'm asking you both to cure cancer when I just wanted a few questions answered.

The developers are looking to me because they are getting an Oracle error message. It gets us nowhere to just point the finger at the client and say "fix your code". That doesn't do a lot of good for the DBA developer relationship.

So it's possible for the sql statement and some of the binds to make it to the db server and some not?

I say this because, if it is dying at the client, why am I seeing trace files when setting 1483 events?
Tom Kyte
November 29, 2007 - 6:44 pm UTC

Well, I already told you what needs to be done, and it is way back in the code - you already know the trace contains *nothing* of use to you.

The DBA is not responsible for fixing code, coders are. Really - honest.

You have your solution in hand though - I'm not sure where we are going here.

and if the developers are not able to do what developers have been doing for years (extract a small test case, whittle it down, track it down, isolate the problem - *debug a problem*) - well, sorry?

It is what I do - tens of times - every day - over and over. Distill the problem down to the very smallest bit you can.


Alexander, November 30, 2007 - 9:23 am UTC

Where I was going was to understand the problem - why this is happening, and why applying this patch appears to fix it.

In a perfect world I wouldn't want java programmers touching sql either.

I still don't understand why the trace file shows no bind variables because that would throw constraint errors trying to insert with no values.
Tom Kyte
November 30, 2007 - 1:47 pm UTC

not if the error is "this bind don't fly - the inputs are bad"

$ oerr ora 1483
01483, 00000, "invalid length for DATE or NUMBER bind variable"
// *Cause: A bind variable of type DATE or NUMBER is too long.
// *Action: Consult your manual for the maximum allowable length.


the data was *munged*, bad, invalid, not usable, garbage.

Alexander, December 04, 2007 - 12:01 pm UTC

Tom,

Support got back to me, they said I'm encountering bug 6085625, and I should apply the patch for that.

I'm wondering what you make of that because you seemed so sure it's on the client side. I agree with you, how can it be Oracle's problem if no values are being passed over.

They have the trace file information though, so I'm a little torn. Who should I listen to, you or them ;)

TKPROF output

A reader, December 11, 2007 - 1:51 am UTC

Tom,

First of all thanks for your help to Oracle community. Keep it up.

Following is the output taken from the tkprof report I generated when one of my clients was complaining that the database is running "slow"

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 32 0.34 0.22 40 705 0 0
Execute 38 0.02 0.02 0 0 0 0
Fetch 1051 0.69 1.09 104 1386 0 20359
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1121 1.06 1.34 144 2091 0 20359

Misses in library cache during parse: 11
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message from client 1149 120.84 722.16
SQL*Net message to client 1149 0.00 0.00
db file sequential read 9 0.02 0.06
SQL*Net more data to client 2 0.00 0.00
db file scattered read 13 0.03 0.27


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 118 0.11 0.10 0 0 0 0
Execute 271 0.21 0.22 0 0 0 0
Fetch 605 0.13 0.47 58 1065 0 891
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 994 0.46 0.80 58 1065 0 891

Misses in library cache during parse: 40

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message from client 2 0.00 0.00
SQL*Net message to client 2 0.00 0.00
db file sequential read 58 0.02 0.37

35 user SQL statements in session.
117 internal SQL statements in session.
152 SQL statements in session.
14 statements EXPLAINed in this session.
********************************************************************************

From above, can I conclude that the query/queries ran in just 2 seconds (non-recursive elapsed + recursive elapsed), and the Oracle database waited for input from the client for 722.16 seconds? So actually the bottleneck is the network, not the database.

Once again thanks for your help.

Cheers,

Tom Kyte
December 11, 2007 - 7:52 am UTC

bottleneck is not the network either - SQL*Net message from client is the time we sat there waiting for the client to ask us to do something.

meaning, client was off busy doing whatever - not asking database for stuff.

database in this case - when asked to do something - responded very fast (about 1 second in database).

The application is likely the culprit here - the CLIENT was being slow here, it wasn't asking us to do anything.

TKPROF output

A reader, December 11, 2007 - 8:12 am UTC

Tom,

Thanks for the clarification.

Cheers,

No Plan

Matt, February 06, 2008 - 6:17 am UTC

Interesting - above you mention ensuring the sql*plus connection is closed (hence the cursor) before running tkprof.

Is there any reason why calling a pl/sql package from the sql*plus session would not cause the row source operations to be included in the tracefile?

Script running the package

alter session set tracefile_identifier='10046_1'; 
alter session set timed_statistics=true; 
alter session set statistics_level=all; 
alter session set max_dump_file_size=unlimited; 
alter session set events '10046 trace name context forever, level 12'; 

exec test_pkg.test_proc; 
alter session set events '10046 trace name context off'; 
exit; 



Actual package (simplified)

CREATE OR REPLACE PACKAGE BODY test_pkg AS 

   PROCEDURE test_proc AS 
   lDummy   VARCHAR2(30); 
   BEGIN 
   
      SELECT dummy 
      INTO lDummy 
      FROM dual; 
      
  
   END test_proc; 
END test_pkg; 
/ 


TKPROF output shows no row source operations (I checked the original trace file - no STAT rows in there either).

SELECT DUMMY 
FROM 
 DUAL
call     count       cpu    elapsed       disk      query    current        rows 
------- ------  -------- ---------- ---------- ---------- ----------  ---------- 
Parse        1      0.00       0.18          0          0          0           0 
Execute      1      0.00       0.00          0          0          0           0 
Fetch        1      0.00       0.01          2          3          0           1 
------- ------  -------- ---------- ---------- ---------- ----------  ---------- 
total        3      0.00       0.20          2          3          0           1 
Misses in library cache during parse: 1 
Optimizer mode: ALL_ROWS 
Parsing user id: 100     (recursive depth: 1) 
Elapsed times include waiting on following events: 
  Event waited on                             Times   Max. Wait  Total Waited 
  ----------------------------------------   Waited  ----------  ------------ 
  SQL*Net message to client                       1        0.00          0.00 


Any ideas? There is a Metalink note, but that just covers closing cursors. We're running 10gR2 (10.2.0.3)

Tom Kyte
February 06, 2008 - 7:53 am UTC

sure, plsql is extremely efficient in how it does sql.

when you say "close this cursor" - it says "yeah, right, whatever - I'm ignoring you"

because you'll probably execute it again.

best to close the session to ensure the cursors are closed - BEFORE ending tracing.

a) turn on trace
b) run stuff in sqlplus
c) exit

trace file will have row source information for everything after a)
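Putting steps a) through c) together as a sqlplus script (the tracefile identifier is made up; the package name is taken from the example above):

```sql
-- a) turn on trace
alter session set tracefile_identifier = 'rowsource_test';
alter session set events '10046 trace name context forever, level 12';

-- b) run stuff
exec test_pkg.test_proc

-- c) exit: closing the session closes the cached cursors, which is
--    what causes the STAT (row source) lines to be written
exit
```

Then run tkprof against the resulting trace file.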

Hmm

Matt, February 06, 2008 - 8:50 am UTC

By exiting before the stop trace I see the row source operation.

How about in an Oracle Ebusiness Suite environment where the PL/SQL is called from a concurrent manager process. I switch 10046 tracing on in the package and once its ended I'd like to run the trace and see the row source operations.

Is it a timing issue, will the cursor eventually close? Or is there another way to force the cursor?
Tom Kyte
February 06, 2008 - 9:36 am UTC

the plsql cursor cache is controlled (in 9.2.0.5 and up) by session_cached_cursors - or 100 if session_cached_cursors is not set.

you'd have to run a stored procedure that has N "dummy" cursors in it.


connect /

create or replace procedure do_something
as
begin
    for x in (select * from all_users)
    loop
        null;
    end loop;
end;
/
create or replace procedure empty_cursor_cache
as
begin
for x in (select * from dual d1 ) loop null; end loop;
for x in (select * from dual d2 ) loop null; end loop;
...
for x in (select * from dual d149 ) loop null; end loop;
for x in (select * from dual d150 ) loop null; end loop;
end;
/

alter session set sql_trace=true;
exec do_something
REM tkprof here would have the select * from all_users, but no plan
exec empty_cursor_cache
REM tkprof here would have the select * from all_users, and the plan



note: this all changes in 11g, the row source will just be there the first time..

Answering own question

Matt, February 06, 2008 - 8:56 am UTC

Actually I think I've answered my own question - our bespoke tracing routine tries to cleanly stop trace before exiting the concurrent request. If I remove this then it seems to work.

Thanks.
Tom Kyte
February 06, 2008 - 9:36 am UTC

that too - if you disable trace before the cursor is closed - it'll not emit the stat records...

Comparing AWR results to TKPROF

A reader, February 29, 2008 - 6:21 am UTC

Hi Tom,
I asked this question in the OTN forum here:
http://forums.oracle.com/forums/thread.jspa?messageID=2371283
but didn't get an answer that can explain the difference
between the TKPROF output and AWR output.

I ran the statement from sqlplus and after that I generated an ADDM report (and also an AWR report).
As you can see below, TKPROF shows that the elapsed time was 50.76 seconds,
while ADDM shows:
"was executed 1 times and had an average elapsed time of 751 seconds."

ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = '10046';
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET events '10046 trace name context forever, level 12';

SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID

ALTER SESSION SET EVENTS '10046 trace name context off';
EXIT

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.05 0 0 0 0
Execute 1 0.02 1.96 24 32 0 0
Fetch 46 0.19 48.74 6 18 0 661
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 48 0.26 50.76 30 50 0 661

Rows Row Source Operation
------- ---------------------------------------------------
661 PX COORDINATOR (cr=50 pr=30 pw=0 time=50699289 us)
0 PX SEND QC (ORDER) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND RANGE :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
3366 INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=6 pw=0 time=47132 us)(object id 1547887)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
3366 INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=0 pw=0 time=20340 us)(object id 1547887)
0 PX BLOCK ITERATOR PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL CM_CUST_DIM_INST_PROD PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)

ADDM Results:
RECOMMENDATION 1: SQL Tuning, 56% benefit (615 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"6wd7sw8adqaxv".
RELEVANT OBJECT: SQL statement with SQL_ID 6wd7sw8adqaxv and
PLAN_HASH 2594021963
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID,
CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID,
CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID,
CM_CUST_DIM_INST_PROD.PRODUCT_DESCR,
CM_CUST_DIM_INST_PROD.PRODUCT_GROUP,
CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR,
CM_CUST_DIM_INST_PROD.PROD_CATEGORY,
CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR,
CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE,
CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR,
CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM,
CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT,
CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT,
CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD,
CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS,
CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR,
CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id =
cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd =
cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id =
CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
RATIONALE: SQL statement with SQL_ID "6wd7sw8adqaxv" was executed 1
times and had an average elapsed time of 751 seconds.
RATIONALE: At least one execution of the statement ran in parallel.

Can you please explain the difference in the results?

tkprof output for a sql with parallel hint

Reene, March 24, 2008 - 4:14 am UTC

Hi Tom

For the SQL below, I always see 0 rows in the row source operation of the tkprof report.

select /*+ parallel(ool,4) ordered use_hash(ooh ool) */ ooh.order_number,count(line_id)
from ont.oe_order_headers_all ooh,
ont.oe_system_parameters_all osp,
ont.oe_order_lines_all ool,
( select source_line_id from wsh.wsh_delivery_Details
where (released_status > 'D' or released_status < 'C' ) ) wdd
where ooh.org_id = osp.org_id
and ooh.order_type_id = osp.attribute1 and osp.attribute1 is not null
and ooh.order_source_id = 1203
and ooh.header_id = ool.header_id
and ooh.org_id = ool.org_id
and ool.line_id = wdd.source_line_id (+)
and ool.shippable_flag = 'Y'
group by ooh.order_number

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.02 0 3 0 0
Fetch 1858 192.62 2119.74 746459 8992092 0 185630
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1860 192.63 2119.78 746459 8992095 0 185630

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 65

Rows Row Source Operation
------- ---------------------------------------------------
185630 SORT GROUP BY (cr=8992092 r=746459 w=0 time=739488230 us)
2852746 NESTED LOOPS OUTER (cr=8992092 r=746459 w=0 time=2111210851 us)
0 HASH JOIN
0 HASH JOIN
237137 TABLE ACCESS FULL OE_ORDER_HEADERS_ALL (cr=567139 r=565826 w=0 time=123073723 us)
50 TABLE ACCESS FULL OE_SYSTEM_PARAMETERS_ALL (cr=3 r=1 w=0 time=10382 us)
0 TABLE ACCESS FULL OE_ORDER_LINES_ALL
58602 VIEW PUSHED PREDICATE (cr=8424950 r=180632 w=0 time=858909493 us)
58602 TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_DETAILS (cr=8424950 r=180632 w=0 time=857652179 us)
2693429 INDEX RANGE SCAN WSH_DELIVERY_DETAILS_N3 (cr=5713844 r=7255 w=0 time=71318230 us)(object id 688572)

Why does it show 0 for the full table access of oe_order_lines_all? That is not true.

If we use parallel, is there a different way to know the number of rows?

Thanks

predicate information in trace file

reader, December 30, 2008 - 11:55 am UTC

Dear Tom,
good day to you. I am a little curious to know if there's any way to have the predicate information in the trace file generated by event 10046 (or any other event). Also, starting with 9i, can the order of conditions/predicates in the where clause change the way the optimizer decides on the plan for query execution?

Thanks for your help on this.

Regards,
your fan.
Tom Kyte
January 05, 2009 - 9:51 am UTC

the trace file will contain

a) the query
b) the binds used by the query (dbms_monitor, use the binds=>TRUE)

...Also
starting 9i can order of conditions/predicates in where clause change the way
optimizer decides on the plan for query execution.
....

in general, the answer is "no", the CBO is free to rearrange the predicate as it sees fit.
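As a concrete sketch of getting the binds into the trace — this assumes 10g or later, where dbms_monitor replaces the older alter session / dbms_system approaches:

```sql
-- enable 10046-style tracing for the current session,
-- including wait events and bind values (10g+ dbms_monitor)
exec dbms_monitor.session_trace_enable( waits => true, binds => true );

-- ... run the query of interest ...

exec dbms_monitor.session_trace_disable;
```

The BINDS records then appear in the raw trace file; tkprof summarizes the waits, but the bind values are still read from the raw file.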


thanks for your inputs on the query.

reader, January 05, 2009 - 11:24 am UTC

Dear Tom,
good day to you, and thank you for taking the time to answer my query. I have one more question regarding predicate information: can we safely assume that the predicate information (filter and access) shown by explain plan (dbms_xplan.display) will be followed by the optimizer when the query is actually executed?

Thanks in advance for your help on this.

Regards,
your fan.
Tom Kyte
January 05, 2009 - 11:43 am UTC


can we safely assume that the
predicate information (filter and access) shown by explain plan
(dbms_xplan.display) will be followed by the optimizer when the query is
actually executed.


No, explain plan 'lies' sometimes. Well, not really lies - but explain plan does not

a) bind peek
b) understand datatypes of binds

and explain plan

c) does a hard parse every time, using the current sessions environment.


http://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html
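In 10g and later you can side-step those explain plan pitfalls by pulling the plan that was actually used out of the cursor cache — a sketch, assuming the statement was just run in the same session:

```sql
-- run the statement first, then ask for the last executed plan,
-- including the access/filter predicate section
select *
  from table( dbms_xplan.display_cursor( null, null, 'TYPICAL' ) );
```

Because this reads the plan from v$sql_plan, it reflects bind peeking and the real bind datatypes — exactly the things explain plan gets wrong.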

individual waits for index/table

Aj, January 24, 2009 - 10:42 am UTC

Tom,
Thanks for all your help. The tkprof example above for 10046 gives the overall totals:

Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 126 0.02 0.77
db file sequential read 217 0.01 0.79
SQL*Net message from client 2 0.00 0.00
********************************************************************************

My question: is there a way to get the time spent on each table/index within the query? That way an index rebuild, or a table export/import, could be considered. Maybe it can be done by grepping and counting in the original trace file, but I am looking for a better way. I am assuming waits here mean the total time spent doing the activity. Thanks
Tom Kyte
January 24, 2009 - 1:32 pm UTC

why/how would IO waits against an index tell you whether an index should be rebuilt? Please explain - use math.

Look at the row source operation in the tkprof report; it clearly shows IOs - logical and physical - against the segments accessed during the processing of a query.

But explain how you would use IO waits to figure out what you say you would figure out???

How to find out if session is tracing in 9i and 11g

A reader, March 12, 2010 - 11:03 am UTC

1) I have enabled event 10046 for specific user sessions
using dbms_system.set_ev(sid,serial#,10046,12,'');

Is there any database view that would tell me what sessions are being traced?

What are the different ways to know this info?

In 9i and also 11g.

2) What is the blank value '' for the 5th parameter for?
dbms_system.set_ev(sid,serial#,10046,12,'');



Tom Kyte
March 12, 2010 - 3:55 pm UTC

in 10g and above, there is a column SQL_TRACE in v$session you can peek at.

In 9i, you'll want to look in the trace directory on the OS and see if a trace file exists for that session.

  1  select c.value || '/' || d.instance_name || '_ora_' ||
  2         a.spid || '.trc' ||
  3        case when e.value is not null then '_'||e.value end trace
  4    from v$process a, v$session b, v$parameter c, v$instance d, v$parameter e
  5   where a.addr = b.paddr
  6     and b.audsid = userenv('sessionid')
  7     and c.name = 'user_dump_dest'
  8*    and e.name = 'tracefile_identifier'


generates the default tracefile name - just modify line 6
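For 10g and above, a sketch of peeking at the SQL_TRACE column mentioned above (v$session also carries SQL_TRACE_WAITS and SQL_TRACE_BINDS there):

```sql
-- sessions whose SQL trace status is not the default
select sid, serial#, username, sql_trace, sql_trace_waits, sql_trace_binds
  from v$session
 where sql_trace != 'DISABLED';
```

Note that tracing enabled through back doors (oradebug, setting the raw 10046 event directly) may not be reflected in this column - checking the trace directory remains the fallback.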

No trace ??? 9.2.0.8

A reader, March 24, 2010 - 3:50 pm UTC

Hi Tom,
I have enabled trace event 10046 level 12, via an on-logon trigger, for a particular user that the 3-tier web application uses to log on to the database.

I see the event is enabled for the sessions and it is writing to the trace files for multiple sessions (a connection pool is used) for that user.

But there is one particular error coming from the application and seen in the webserver logs :
java.sql.SQLException: ORA-01858: a non-numeric character was found where a numeric was expected

but no equivalent trace info is generated in the database.

Can you explain why this would happen.
Is it possible that Oracle errors generated on the client side don't show up in the database trace even with event 10046 level 12 ?

How to get the DB trace to work ?
Tom Kyte
March 26, 2010 - 11:11 am UTC

... Is it possible that Oracle errors generated on the client side don't show up in
the database trace even with event 10046 level 12 ?
...

sure, not all things occur on the server side.


... How to get the DB trace to work ? ...

it is.


=====================
PARSING IN CURSOR #4 len=41 dep=0 uid=597 oct=3 lid=597 tim=1239863110199977 hv=64101263 ad='3acc4278'
select to_date( 'xx-mar-2010' ) from dual
END OF STMT
PARSE #4:c=3000,e=21005,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1239863110199962
BINDS #4:
EXEC #4:c=0,e=84,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1239863110200167
WAIT #4: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=-40016381 tim=1239863110200225
FETCH #4:c=0,e=34,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1239863110200310
WAIT #4: nam='SQL*Net break/reset to client' ela= 3 driver id=1650815232 break?=1 p3=0 obj#=-40016381 tim=1239863110200421
WAIT #4: nam='SQL*Net break/reset to client' ela= 79 driver id=1650815232 break?=0 p3=0 obj#=-40016381 tim=1239863110200526
=====================



it would be showing up like that though - see the break/reset

Why no Error in Trace : 9.2.0.8

A reader, March 25, 2010 - 1:33 pm UTC

Hi Tom,
I enable event 10046 from sql plus.
I run a pl/sql anonymous block which give ora-01858 ORA-01858: a non-numeric character was found where a numeric was expected.

I see the error in trace file.

Next I just run a standalone select statement which gives me ora-01858.

But the trace file has no trace of ora-01858 error.

Questions:
1) Why is this happening? Why is the anonymous block error seen, but not the select statement's?

2) How do I get the error for the select statement into the trace file?


Reproducible test case:

SQL> alter session set events '10046 trace name context forever,level 12';

Session altered.

SQL> declare
2 v varchar2(20);
3 begin
4
5 select to_date('1/january/2010', 'd-mm-yy') into v from dual ;
6 end ;
7 /
declare
*
ERROR at line 1:
ORA-01858: a non-numeric character was found where a numeric was expected
ORA-06512: at line 5


SQL> select to_date('1/january/2010', 'd-mm-yy') test2 from dual ;
select to_date('1/january/2010', 'd-mm-yy') test2 from dual
*
ERROR at line 1:
ORA-01858: a non-numeric character was found where a numeric was expected


SQL> disconn



APPNAME mod='SQL*Plus' mh=3669949024 act='' ah=4029777240
=====================
PARSING IN CURSOR #3 len=68 dep=0 uid=527 oct=42 lid=527 tim=5209571535017 hv=1346161232 ad='db3d28e0'
alter session set events '10046 trace name context forever,level 12'
END OF STMT
EXEC #3:c=0,e=65,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=5209571534469
WAIT #3: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
*** 2010-03-25 14:26:59.656
WAIT #3: nam='SQL*Net message from client' ela= 61342080 p1=1413697536 p2=1 p3=0
=====================
PARSING IN CURSOR #3 len=99 dep=0 uid=527 oct=47 lid=527 tim=5209632877503 hv=1781802427 ad='da38e690'
declare
v varchar2(20);
begin
select to_date('1/january/2010', 'd-mm-yy') into v from dual ;
end ;

END OF STMT
PARSE #3:c=0,e=69,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=5209632877498
BINDS #3:
=====================
PARSING IN CURSOR #5 len=54 dep=1 uid=527 oct=3 lid=527 tim=5209632877706 hv=3263010293 ad='cb5418e0'
SELECT TO_DATE('1/january/2010', 'd-mm-yy') FROM DUAL
END OF STMT
PARSE #5:c=0,e=39,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=5209632877703
BINDS #5:
EXEC #5:c=0,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=5209632877804
FETCH #5:c=0,e=72,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=4,tim=5209632877892
EXEC #3:c=0,e=400,p=0,cr=3,cu=0,mis=0,r=0,dep=0,og=4,tim=5209632877982
ERROR #3:err=1858 tim=533466406
WAIT #3: nam='SQL*Net break/reset to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #3: nam='SQL*Net break/reset to client' ela= 174 p1=1413697536 p2=0 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
*** 2010-03-25 14:27:33.084
WAIT #3: nam='SQL*Net message from client' ela= 32643027 p1=1413697536 p2=1 p3=0
STAT #5 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=37 us)'
=====================
PARSING IN CURSOR #3 len=56 dep=0 uid=527 oct=3 lid=527 tim=5209665537343 hv=1986370508 ad='88b997a8'
select to_date(:"SYS_B_0", :"SYS_B_1") test2 from dual
END OF STMT
PARSE #3:c=20000,e=15781,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=0,tim=5209665537338
BINDS #3:
bind 0: dty=1 mxl=32(14) mal=00 scl=00 pre=00 oacflg=10 oacfl2=100 size=32 offset=0
bfp=9fffffffdfef2320 bln=32 avl=14 flg=09
value="1/january/2010"
bind 1: dty=1 mxl=32(07) mal=00 scl=00 pre=00 oacflg=10 oacfl2=100 size=32 offset=0
bfp=9fffffffdfef2298 bln=32 avl=07 flg=09
value="d-mm-yy"
EXEC #3:c=0,e=127,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=5209665537697
WAIT #3: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
FETCH #3:c=0,e=102,p=0,cr=3,cu=0,mis=0,r=0,dep=0,og=4,tim=5209665537853
WAIT #3: nam='SQL*Net break/reset to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #3: nam='SQL*Net break/reset to client' ela= 209 p1=1413697536 p2=0 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #3: nam='SQL*Net message from client' ela= 6871294 p1=1413697536 p2=1 p3=0
STAT #3 id=1 cnt=1 pid=0 pos=1 obj=222 op='TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=56 us)'
XCTEND rlbk=0, rd_only=1


Tom Kyte
March 26, 2010 - 2:49 pm UTC

see above, look for the break/reset.

But Why

A reader, March 26, 2010 - 8:01 pm UTC

Thanks Tom, but why does the select error not show up while the anonymous block error shows up in the trace file?

Is this a bug, or expected behaviour?

The break/reset won't tell me that it was caused by ora-01858?

So there is no way to know this?
Is it possible to do sql tracing for a client like the sqlplus tool, or even a web application server?
Tom Kyte
March 26, 2010 - 8:19 pm UTC

the break/reset is what you'll see for that particular issue.

you have the break/reset - and the statement it was on, for whatever reason, it is appearing that way in the trace file.

It is the expected behavior, it has always been this way. Lots of "runtime" errors are done in this fashion - things that do not cause an error "right away"


select 1/0 from dual;
break/reset.


select to_number('x') from dual;
break/reset


break/reset

A reader, March 26, 2010 - 8:13 pm UTC

Does Break/reset always indicate that there is some error ?
Tom Kyte
March 26, 2010 - 8:21 pm UTC

not 100%, it can be used for other things not just error conditions.

Break/reset for what other things

A reader, March 28, 2010 - 11:46 am UTC

Can you please elaborate on what other things break/reset is used for?

Please advice on appropriate use of 10046

Martin Vajsar, July 29, 2010 - 6:11 am UTC

Dear Tom,

a problem has recently appeared in our application, where some invocations of a database process take much longer than usual. We were unable to find any obvious reason, but using our system's log files we identified a single SQL command responsible for the delays. This command normally takes around 10 seconds, but sometimes, all of a sudden, takes around 9-10 minutes to run. Usually several runs of the command are delayed before everything returns back to normal.

The problem demonstrates only in our client's production database and therefore I cannot post a test case, so I don't send you the command either. We have no access to the production database and the assistance from the client DBAs is very limited. We're therefore trying to devise a way to analyze the problem with as little assistance from the DBAs as possible.

Given that the problematic SQL is undoubtedly identified, would you consider appropriate turning on 10046, level 12 tracing for the session that executes the command (just when the problem appears again)?

And from the DBA point of view, would you consider granting ALTER SESSION to the account used by third-party software in a production database a security risk? (I suppose we could move the ALTER SESSION command to a package managed by customer's DBAs to do without the grant, but that means more work for both us and them.)

Thank you for your insights.
Tom Kyte
July 29, 2010 - 12:02 pm UTC

sounds like bind peeking to me - classic symptom "it runs fast then slow then fast".

http://asktom.oracle.com/Misc/tuning-with-sqltracetrue.html
http://asktom.oracle.com/Misc/sqltracetrue-part-two.html

using sql trace might accidentally "fix it" (meaning, it will run slow without trace, fast with - but trace didn't really fix it)

In any case, if you do go the trace route, just get execute on dbms_monitor - no need to get alter session. dbms_monitor.session_trace_enable( waits=>true ) would do it.

If you have access to ASH data - that would be almost as good as a tkprof.
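A sketch of that dbms_monitor approach, targeting the problem session from a separate (DBA) session — :sid and :serial are placeholders for values you would look up in v$session:

```sql
-- turn tracing (with wait events) on in the suspect session...
exec dbms_monitor.session_trace_enable( session_id => :sid, serial_num => :serial, waits => true );

-- ... let a slow execution of the command happen, then turn it off
exec dbms_monitor.session_trace_disable( session_id => :sid, serial_num => :serial );
```

This way the third-party account needs no ALTER SESSION grant at all; only the session doing the enabling needs execute on dbms_monitor.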

Alex, August 02, 2010 - 11:50 am UTC

Tom,

From reading the "what the heck do we do about this situation", I don't understand why histograms won't address this problem. It is, after all, a skewed data problem, isn't it? Maybe I'm misunderstanding your write-up; you do talk about histograms, but it doesn't sound like they're a possible solution.
Tom Kyte
August 02, 2010 - 12:02 pm UTC

the histograms cause bind peeking to be able to select one of two plans - depending on who parsed it first. If you do not have histograms - the probability you have just one plan always is high.

so, one of the 'solutions' you can use to remove the plan flip flopping that happens would be to not have statistics that cause the plan to flip flop

Alex, August 02, 2010 - 3:48 pm UTC

I guess I don't understand why there isn't sufficient information for the optimizer to pick the correct plan with histograms and bind peeking regardless of how it was parsed.

Maybe I have too simplistic a view of the process, but I would think histograms cover the skew, and peeking allows it to see the values; together, I would think it could figure it out.
Tom Kyte
August 02, 2010 - 4:27 pm UTC

here is the problem in a nutshell in 9i-10g


a) when a query is hard parsed, we peek at the binds and pretend they are not binds but literals, and optimize that query. So, say you have the query: 'select * from big_Table where indexed_column = ?'. Suppose further that indexed_column is heavily skewed and has histograms. The value 1 is very sparse in that table (one row has it). The value 99 is very dense in that table (most rows have 99)


b) on monday, the query is not in the shared pool. You run the application and run that query. You supply a bind value of 1 via the user interface. When the query hard parses, the optimizer optimizes "where indexed_column = 1" via bind peeking. When it sees indexed_column = 1, with the histograms, it knows "one row" and chooses the index.

c) on monday, everyone uses your index range scan plan - EVERYONE (shared sql). If I run it with 99 - I use an index. If you run it with 50 - you use an index. And so on.

d) overnight - jobs run, statistics are gathered, your plan is flushed from the shared pool

e) tuesday - I get in before you and run the application. I use the number 99 instead of 1. The optimizer optimizes "where indexed_column = 99" and says "full scan OF COURSE - anything else would be bad" due to the histograms that told it the table is full of 99's.

f) on tuesday EVERYONE uses my full scan - even you - to find your one row out of millions. You are not pleased.



With bind peeking in 9i-10g, as outlined and demonstrated in those links - the plan flip flops. One day it is good, next day bad - as if by magic.


In 11g we have adaptive cursor sharing (also talked about in those articles above)


make sense now?
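The monday/tuesday story above can be sketched in SQL — the table, column, and data here are hypothetical, chosen only to create the skew described:

```sql
-- hypothetical skewed table: most rows have 99, exactly one row has 1
create table big_table ( indexed_column number, data varchar2(100) );
insert into big_table select 99, rpad('x',100) from all_objects;
insert into big_table values ( 1, rpad('x',100) );
create index big_table_idx on big_table(indexed_column);

-- gather stats with a histogram on the skewed column
exec dbms_stats.gather_table_stats( user, 'BIG_TABLE', -
       method_opt => 'for all indexed columns size 254' );

variable n number
exec :n := 1
select * from big_table where indexed_column = :n;   -- hard parse peeks "1": index range scan
exec :n := 99
select * from big_table where indexed_column = :n;   -- shared sql: same index plan, now a bad fit
```

Flush the shared pool and run the two executions in the opposite order, and both would get the full scan instead — that is the flip flop.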

Alex, August 03, 2010 - 8:29 am UTC

Yes, I made the mistake of thinking adaptive cursor sharing was always the behavior, not realizing that versions before 11g only looked at binds when the statement was parsed.

Understanding Columns

Taral, October 11, 2010 - 11:20 am UTC

Hello Sir,

Using 11.2.0.2

What is Rows (1st) Rows (avg) Rows (max) ?

alter index t_idx rebuild partition T_1 parallel 4 online
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.01          1          1          0           0
Execute      9     16.26      58.10      33065      34678       5648           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       11     16.26      58.12      33066      34679       5648           0
 
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 9
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          4  PX COORDINATOR  (cr=1 pr=0 pw=0 time=795005 us)
         0          0          0   PX SEND QC (ORDER) :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
         1          0          1    INDEX BUILD NON UNIQUE T_IDX (cr=73 pr=0 pw=456 time=3004795 us)(object id 0)
    838180     348550     990260     SORT CREATE INDEX (cr=0 pr=0 pw=0 time=2757375 us)
    838180     348550     990260      PX RECEIVE  (cr=0 pr=0 pw=0 time=2274457 us cost=3040 size=25095576 card=3136947)
         0          0          0       PX SEND RANGE :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=3040 size=25095576 card=3136947)
         0     349152     796999        PX BLOCK ITERATOR PARTITION: 1 1 (cr=3748 pr=3660 pw=0 time=2059830 us cost=3040 size=25095576 card=3136947)
         0     349152     796999         TABLE ACCESS FULL T PARTITION: 1 1 (cr=3748 pr=3660 pw=0 time=1932330 us cost=3040 size=25095576 card=3136947)

Tom Kyte
October 11, 2010 - 1:10 pm UTC

I asked Graham Wood - an AWR/ASH expert in Oracle - and he said:

... The data is from multiple executions of the SQL statement. We always capture the row source stats from the first execution, and depending on the sql trace set up we may capture data for all executions. We display data from the first along with average and max values for each row source. It is trying to help identify skewed data distributions issues. ...

Thanks Sir

Taral, October 11, 2010 - 1:58 pm UTC

Thank you sir for this information. I had a feeling it was something like that, but this is good information.

I wrote about this today on my blog:

https://desaitaral.wordpress.com/2010/10/11/rowsource-for-each-execution/



Tkprof producing unusual results?

Ravi, March 08, 2011 - 12:45 pm UTC

Hello Tom,

One of our PL/SQL batch programs hits an ORA-1427 error, so we requested the trace file, and I've copied and pasted a section of it below.

Now, what you see after the first few lines is the program inserting into our error logging table; further on, the error logging program issues an autonomous transaction commit.

The call to the insert into the error logging table is housed INSIDE an EXCEPTION WHEN OTHERS handler; I've pasted that code down below.

After the Insert command, the program appears to be generating statistics.

Now, I hope this info is enough for you to answer two questions I have


1) I'd thought an ORA-1427 would always be caused by an Oracle SQL statement, and that it should have appeared between the Fetch and the Insert into the error log below. But there was NO SIGN of any Oracle SQL statement with the text "ERR 1427" in the whole raw trace file. Could this be a PL/SQL error by any chance?

2) Why does the program generate stats with a delay? After the Insert, the stats look to be for a command that happened much earlier - is that normal behaviour?

Thanks

Ravi




EXEC #17:c=0,e=333,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=159353020623
FETCH #17:c=0,e=134,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=4,tim=159353020827
BINDS #74:
EXEC #74:c=0,e=88,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=159353021417
FETCH #74:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=159353021483
BINDS #74:
EXEC #74:c=0,e=82,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=159353021693
FETCH #74:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=159353021762
=====================
PARSING IN CURSOR #14 len=761 dep=1 uid=409 oct=2 lid=409 tim=159353023377 hv=291898906 ad='8e6cb088'
INSERT INTO STD_ERR_LOG ( CREATED_DATE, CREATED_BY, SEL_ERR_ID, SEL_ERR_USERNAME, SEL_ERR_PROC_DATE, SEL_ERR_SYSTEM, SEL_ERR_MODULE, SEL_ERR_TYPE, SEL_ERR_NO, SEL_ERR_DATE, SEL_ERR_STEP_NO, SEL_ERR_MSG, SEL_ERR_DATA_VALUES, SEL_ERR_NAME_1, SEL_ERR_VALUE_1, SEL_ERR_NAME_2, SEL_ERR_VALUE_2, SEL_ERR_NAME_3, SEL_ERR_VALUE_3, SEL_ERR_NAME_4, SEL_ERR_VALUE_4 ) VALUES ( SYSDATE, SUBSTR(:B18 , 1, 30), STD_ERR_ID_SEQ.NEXTVAL, SUBSTR(:B18 , 1, 30), NVL(:B17 , SYSDATE), SUBSTR(NVL(:B16 , :B15 ), 1, 10), SUBSTR(:B14 , 1, 60), SUBSTR(:B13 , 1, 1), :B12 , SYSDATE, :B11 , :B10 , SUBSTR(:B9 , 1, 500), SUBSTR(:B8 , 1, 30), SUBSTR(:B7 , 1, 50), SUBSTR(:B6 , 1, 30), SUBSTR(:B5 , 1, 50), SUBSTR(:B4 , 1, 30), SUBSTR(:B3 , 1, 50), SUBSTR(:B2 , 1, 30), SUBSTR(:B1 , 1, 50) )
END OF STMT
PARSE #14:c=0,e=1401,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=159353023367
BINDS #14:
kkscoacd
Bind#0
oacdty=01 mxl=32(30) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7cf6f478 bln=32 avl=07 flg=09
value="Z601741"
Bind#1
oacdty=01 mxl=32(30) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7cf6f478 bln=32 avl=07 flg=09
value="Z601741"
Bind#2
oacdty=12 mxl=07(07) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=00 csi=00 siz=8 off=0
kxsbbbfp=ffffffff7cf6fce8 bln=07 avl=07 flg=09
value="3/8/2011 10:16:11"
Bind#3
oacdty=01 mxl=32(10) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7b1b0d98 bln=32 avl=04 flg=09
value="SRDP"
Bind#4
oacdty=01 mxl=32(10) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7d7daf88 bln=32 avl=04 flg=05
value="SRDP"
Bind#5
oacdty=01 mxl=128(60) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=128 off=0
kxsbbbfp=ffffffff7cf6f4c0 bln=128 avl=18 flg=09
value="SRDP_AR_VALIDATION"
Bind#6
oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=395d1a2c8 bln=32 avl=01 flg=09
value="F"
Bind#7
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=ffffffff7cf6f438 bln=22 avl=04 flg=09
value=-1427
Bind#8
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=395d19ed0 bln=22 avl=01 flg=09
value=0
Bind#9
oacdty=01 mxl=2000(500) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=2000 off=0
kxsbbbfp=ffffffff7cf6f528 bln=2000 avl=56 flg=09
value="ORA-01427: single-row subquery returns more than one row"
Bind#10
oacdty=01 mxl=32(00) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=00000000 bln=32 avl=00 flg=09
Bind#11
oacdty=01 mxl=32(08) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=395d1a368 bln=32 avl=08 flg=09
value="CLAIM ID"
Bind#12
oacdty=01 mxl=32(07) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7b5f26e8 bln=32 avl=07 flg=09
value="4340687"
Bind#13
oacdty=01 mxl=32(16) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=395d1a378 bln=32 avl=16 flg=09
value="ERROR SUB MODULE"
Bind#14
oacdty=01 mxl=32(30) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=ffffffff7cf6e6b8 bln=32 avl=08 flg=09
value="RPARP004"
Bind#15
oacdty=01 mxl=32(10) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=395d1a390 bln=32 avl=10 flg=09
value="ERROR TEXT"
Bind#16
oacdty=01 mxl=2000(255) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=2000 off=0
kxsbbbfp=ffffffff7b5f8038 bln=2000 avl=00 flg=09
Bind#17
oacdty=01 mxl=32(00) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=00000000 bln=32 avl=00 flg=09
Bind#18
oacdty=01 mxl=32(00) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=01 csi=31 siz=32 off=0
kxsbbbfp=00000000 bln=32 avl=00 flg=09
WAIT #14: nam='db file sequential read' ela= 63 file#=56 block#=62988 blocks=1 obj#=29695 tim=159353030711
WAIT #14: nam='db file sequential read' ela= 44 file#=56 block#=62987 blocks=1 obj#=29695 tim=159353030894
WAIT #14: nam='db file sequential read' ela= 46 file#=56 block#=166538 blocks=1 obj#=29695 tim=159353031061
WAIT #14: nam='db file sequential read' ela= 42 file#=56 block#=166898 blocks=1 obj#=29695 tim=159353031214
WAIT #14: nam='db file sequential read' ela= 34 file#=193 block#=114708 blocks=1 obj#=0 tim=159353031588
WAIT #14: nam='db file sequential read' ela= 41 file#=128 block#=126605 blocks=1 obj#=29696 tim=159353031847
WAIT #14: nam='db file sequential read' ela= 34 file#=128 block#=226490 blocks=1 obj#=29696 tim=159353032001
EXEC #14:c=10000,e=8472,p=7,cr=1,cu=7,mis=1,r=1,dep=1,og=4,tim=159353032142
STAT #27 id=1 cnt=0 pid=0 pos=1 obj=0 op='FILTER (cr=63 pr=0 pw=0 time=1475 us)'
STAT #27 id=2 cnt=0 pid=1 pos=1 obj=0 op='SORT GROUP BY (cr=63 pr=0 pw=0 time=1471 us)'
STAT #27 id=3 cnt=0 pid=2 pos=1 obj=27207 op='TABLE ACCESS BY INDEX ROWID FIELD_DATA_LINES (cr=63 pr=0 pw=0 time=1419 us)'
STAT #27 id=4 cnt=69 pid=3 pos=1 obj=0 op='NESTED LOOPS (cr=59 pr=0 pw=0 time=62988 us)'
STAT #27 id=5 cnt=1 pid=4 pos=1 obj=26878 op='TABLE ACCESS BY INDEX ROWID ACT_SCH_APPS_ALL (cr=56 pr=0 pw=0 time=978 us)'
STAT #27 id=6 cnt=75 pid=5 pos=1 obj=26883 op='INDEX RANGE SCAN ASA_FK03 (cr=3 pr=0 pw=0 time=64 us)'
STAT #27 id=7 cnt=67 pid=4 pos=2 obj=27209 op='INDEX RANGE SCAN FDL_ASA (cr=3 pr=0 pw=0 time=92 us)'
=====================


----

WHEN OTHERS
THEN
   IF gv_fatal_error = FALSE
   THEN
      -- write to business steps table: validation complete, with errors
      lv_std_info_rec.completion_bes :=
         srdp_rp_constants_pkg.c_s_id_claim_validated_errs;
      lv_bes_insert :=
         fnc_insert_to_bes (lv_error_rec,
                            lv_std_info_rec,
                            lv_std_info_rec.completion_bes);
   END IF;

   lv_rtc :=
      std_err_log_pkg.write_err_log (p_system      => gv_err_system,
                                     p_err_proc_dt => SYSDATE,
                                     p_module      => gv_err_module,
                                     p_type        => 'F',
                                     p_err_no      => NVL (SQLCODE, 0),
                                     p_err_msg     => SUBSTR (SQLERRM, 1, 500),
                                     p_step_no     => 0,
                                     p_data_values => NULL,
                                     p_name_1      => 'CLAIM ID',
                                     p_value_1     => lv_std_info_rec.claim_id,
                                     p_name_2      => 'ERROR SUB MODULE',
                                     p_value_2     => lv_error_rec.check_id,
                                     p_name_3      => 'ERROR TEXT',
                                     p_value_3     => lv_error_rec.se_err_text,
                                     p_name_4      => NULL,
                                     p_value_4     => NULL);
Tom Kyte
March 08, 2011 - 2:28 pm UTC

1) Not all errors are recorded in the trace files - most are, some are not. You did not let us see the text of the offending query. I'll assume it was something like

select * from t where x = (select user_id from all_users);

where t might be:

create table t ( x int );
insert into t values (1);
commit;

that'll be trapped in the client - won't be in the trace.


2) STAT records are simply written when a cursor is closed; something closed a cursor after your error logging completed.

Follow up

A reader, March 08, 2011 - 3:53 pm UTC

Tom,

This is Ravi who supplied the last post in this thread that you answered.

I couldn't locate the 1427 error in the tkprof output, which is 42MB. Is there a place on this forum where I can upload it for you to have a look?

The other option is I can get them to run the trace just for the erroneous bit and upload it.

Thanks

Ravi
Tom Kyte
March 08, 2011 - 4:48 pm UTC

But I told you:

not all errors are recorded in the trace files, most are, some are not.

that one for a select statement is not. However, you have the trace, so you know that it must have been:

BINDS #74:
EXEC #74:c=0,e=82,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=159353021693
FETCH #74:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=159353021762
=====================
PARSING IN CURSOR #14 len=761 dep=1 uid=409 oct=2 lid=409 tim=159353023377 hv=291898906 
ad='8e6cb088'
INSERT INTO STD_ERR_LOG ( CREATED_DATE, CREATED_BY, SEL_ERR_ID, SEL_ERR_USERNAME, 


Cursor #74: look UP in the trace for what cursor #74 is and you'll have your culprit. You know it must be cursor #74 because that is what was being executed right before your error logic was tripped.
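That "look up in the trace" step is mechanical enough to script. A minimal sketch, assuming the raw trace is available as a list of lines (the synthetic trace lines and statement text below are placeholders, not from the actual 42MB file):

```python
import re

def find_parse_line(trace_lines, cursor_no):
    # Cursor numbers are reused within a trace file, so the statement
    # belonging to a given EXEC/FETCH is the "PARSING IN CURSOR #n"
    # entry that most recently precedes it: scan from the top and
    # keep the last matching entry seen.
    pat = re.compile(r'PARSING IN CURSOR #%d\b' % cursor_no)
    last = None
    for line in trace_lines:
        if pat.search(line):
            last = line
    return last

# a tiny synthetic trace excerpt for illustration
trace = [
    "PARSING IN CURSOR #74 len=30 dep=1 uid=409 oct=3 tim=159353021600",
    "SELECT COUNT(*) FROM SOME_TABLE",
    "EXEC #74:c=0,e=82,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=159353021693",
    "FETCH #74:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=159353021762",
]
print(find_parse_line(trace, 74))
```

In a real trace you would read the file line by line and also grab the SQL text on the lines following the PARSING IN CURSOR entry.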



Yury, April 27, 2011 - 9:19 am UTC

Hi Tom,
Is there any way to find the actual bind variable value in an invalid SQL query executed from a third-party application?
In the trace I see the query but not the bind values. The situation looks like the following:
SQL> variable x number;
SQL> exec :x := 10;
SQL> select :x from dual union all select sysdate from dual;
ORA-01790: expression must have same datatype as corresponding expression

Sometimes this query is valid, sometimes invalid (depending on :x). Indeed I don't know the type of :x, but I want to know what value was passed.

Thanks in advance.

Tom Kyte
April 27, 2011 - 1:26 pm UTC

The parse failed - the binding hadn't happened yet. Binding happens after the parse, before the execute.

The binds won't be available.

Missing STAT in trace

A reader, June 07, 2011 - 2:26 pm UTC

Hi Tom,
I'm trying to trace the execution of pl/sql procedure. I see no STAT lines in the trace file for the sql in this proc, but I see them for recursive sql. Because of this I don't see the actual plan and the stats for each operation in the plan. What could prevent the writing of STAT lines in the trace?
Tom Kyte
June 07, 2011 - 2:52 pm UTC

in older releases - the stat records were only written to the trace file when the cursor was closed.

plsql - being simply the most efficient language for data processing - doesn't close cursors until it HAS TO. Even when you say "close cursor" in plsql, plsql doesn't close it - it caches them open (in order to seriously decrease the amount of parsing performed). It will only close them when:

a) pressed for cursor space - the plsql cursor cache will NOT cause you to hit max open cursors, if we start to run out of cursors - it will close them transparently

b) you have more than session_cached_cursors cursors cached already - plsql uses that parameter to decide how many cursors to cache open

c) you end your session.


(c) is probably the most appropriate for you - all you need to do is end the session - then the stat records will be written out.

A reader, June 08, 2011 - 8:44 am UTC

Hi Tom,
Thanks for the info; however, I'm looking at the trace file long after the session closed and I still see no STAT records for the application SQL, while I see them for recursive SQL.
Tom Kyte
June 08, 2011 - 10:56 am UTC

Then probably - you started tracing AFTER the cursors were open.

Or - you disabled tracing before you ended the session.


In 10g and before - either of those would cause this to happen. In 11g - the stat records are written out after each statement completes.

A reader, June 09, 2011 - 3:22 pm UTC

Yes, the second one was my case: I called dbms_monitor at the beginning and at the end of the session. Once I removed the second call at the end, the STAT records showed up.
Thanks.

INSERTs in 10046 trace

Eduardo Claro, July 19, 2011 - 10:06 am UTC

Tom, we have a huge process (Oracle EBS) doing lots of INSERTs and other statements, and I have a question about the following excerpt from the trace generated:


INSERT INTO CAIBR_EXPORT_DATA(TRANSACTION_ID, EXPORT_SYSTEM_ID, EVENT_TYPE_ID,
TRANSACTION_TYPE, EXPORT_DATE, TERRITORY_CODE, EXPORT_STATUS, SYSTEM_ID1,
SYSTEM_ID2, SYSTEM_ID3, SYSTEM_CODE1, SYSTEM_CODE2, USER_ID, SYSTEM_ID4,
SYSTEM_ID5, SYSTEM_ID6, SYSTEM_ID7, SYSTEM_ID8, SYSTEM_ID9, SYSTEM_ID10,
SYSTEM_CODE3, SYSTEM_CODE4, SYSTEM_CODE5, SYSTEM_CODE6, SYSTEM_CODE7,
SYSTEM_CODE8, SYSTEM_CODE9, SYSTEM_CODE10, SYSTEM_DATE1, SYSTEM_DATE2,
SYSTEM_DATE3, SYSTEM_DATE4, SYSTEM_DATE5, SYSTEM_DATE6, SYSTEM_DATE7,
SYSTEM_DATE8, SYSTEM_DATE9, SYSTEM_DATE10)
VALUES
(:B37 , :B36 , :B35 , :B34 , :B33 , :B32 , 1, :B31 , :B30 , :B29 , :B28 ,
:B27 , :B26 , :B25 , :B24 , :B23 , :B22 , :B21 , :B20 , :B19 , :B18 , :B17 ,
:B16 , :B15 , :B14 , :B13 , :B12 , :B11 , :B10 , :B9 , :B8 , :B7 , :B6 ,
:B5 , :B4 , :B3 , :B2 , :B1 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 3386 117.95 627.75 114 1039 25425 3386
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3386 117.95 627.75 114 1039 25425 3386

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 55 (recursive depth: 2)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 114 0.01 0.99
log file sync 5 0.01 0.03
enqueue 2 0.00 0.00
latch free 3 0.00 0.00
buffer busy waits 2 0.00 0.00

The 3386 executions of the INSERT statement above spent 627.75 seconds in total. 117.95 seconds of the total were spent using CPU, and about 1 second in waits reported.

So, my question is: where were the other more than 500 seconds spent?

Initially, I suspected dynamic space allocation, but the table has only 5 extents. It has 2 indexes, also with few extents. The table and the indexes are about 50-60 MB each.

Could you shed some light on this?
Tom Kyte
July 19, 2011 - 10:30 am UTC

Could it have been waiting for the CPU? Was the machine pretty much running at near 100% utilization when this took place? That would be my first and foremost guess.

Waiting for CPU.
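The gap itself falls straight out of the tkprof report above; elapsed time that is neither CPU nor a timed wait event is typically time spent runnable but waiting for a CPU:

```python
# figures copied from the tkprof report above
elapsed = 627.75                              # total elapsed seconds, 3386 executions
cpu = 117.95                                  # CPU seconds reported
waits = 0.99 + 0.03 + 0.00 + 0.00 + 0.00      # sum of the reported wait times

# time not explained by CPU work or by any instrumented wait event
unaccounted = elapsed - cpu - waits
print(round(unaccounted, 2))                  # roughly 508.78 seconds
```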

INSERTs in 10046 trace

Eduardo Claro, July 19, 2011 - 11:39 am UTC

Thanks for the reply, but it does not seem to be the case. When I looked at the CPU, it was showing about 50% idle. Any other tip?
Tom Kyte
July 19, 2011 - 1:18 pm UTC

Are you using hyperthreading chips? If so, 50% is the new 100%.

INSERTs in 10046 trace

Eduardo Claro, July 19, 2011 - 1:46 pm UTC

No, it is a Sun Solaris with 64 CPUs:


load averages: 24.71, 24.21, 24.36 15:42:44
4142 processes:4131 sleeping, 11 on cpu
CPU states: 61.5% idle, 27.7% user, 10.8% kernel, 0.0% iowait, 0.0% swap
Memory: 192G real, 23G free, 112G swap in use, 288G swap free

When I look at the specific process using "top" and "ps", it shows a consumption of 0.41% of total CPU, which works out to about a quarter of one CPU (each CPU is 100%/64 ≈ 1.56% of the total).

But, on the other hand, the load average seems to be high (around 25).

Tom Kyte
July 19, 2011 - 4:01 pm UTC

you didn't answer me. ask your sysadm.

if you are using hyperthreading, some tools - like 'top' - are not as useful.

If you have one core, but two threads can use it, when you are 50% utilized (meaning one of the two threads was always running) - you are 100% utilized.

tkprof..

Manoj Kaparwan, August 16, 2011 - 2:21 am UTC

Hello Tom,
Thanks for your time .

I have the below snip from TKPROF.

SELECT * FROM ( SELECT T1.objid FROM v T1
WHERE ( T1.S_task_title LIKE 'SA-MAN-08%' ) AND ( T1.S_task_status LIKE 'C%' )
AND ( T1.S_cond_title LIKE 'Op%' ) AND ( T1.S_x_test_fault LIKE 'N' ) ) WHERE ROWNUM <= 200

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.57 0.63 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 40.78 316.14 173076 2982677 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 41.35 316.77 173076 2982677 0 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44

Rows Row Source Operation
------- ---------------------------------------------------
0 COUNT STOPKEY
0 FILTER
0 NESTED LOOPS
169626 NESTED LOOPS
169626 NESTED LOOPS
169626 NESTED LOOPS OUTER
169626 HASH JOIN
169690 NESTED LOOPS OUTER
169690 NESTED LOOPS
169690 NESTED LOOPS OUTER
169690 NESTED LOOPS OUTER
169690 TABLE ACCESS BY INDEX ROWID TABLE_TASK
169690 INDEX RANGE SCAN IND_S_TASK_TITLE (object id 407714)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)
68 INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)
169690 INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)
73 INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)
172 TABLE ACCESS FULL TABLE_GBST_ELM
0 VIEW PUSHED PREDICATE
0 NESTED LOOPS
169626 INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)
0 INLIST ITERATOR
0 INDEX RANGE SCAN FA_TASK_NAME_UNIQUE (object id 407740)
169626 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
169626 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
0 TABLE ACCESS BY INDEX ROWID TABLE_CONDITION
169626 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)
0 TABLE ACCESS BY INDEX ROWID TABLE_FA_TASK
0 INDEX UNIQUE SCAN FA_TASK_NAME_UNIQUE (object id 407740)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 169500 0.31 291.54
direct path write 511 0.00 0.00
db file scattered read 2 0.02 0.05
direct path read 509 0.00 0.00
SQL*Net message from client 1 0.19 0.19


I have HINT FIRST_ROWS in the view v.

I was expecting the first 200 rows to flow immediately.
Is the driving OUTER join below stopping that?
I mean, will the first row flow out only when the entire join has been calculated?


169690 NESTED LOOPS OUTER
169690 TABLE ACCESS BY INDEX ROWID TABLE_TASK
169690 INDEX RANGE SCAN IND_S_TASK_TITLE (object id 407714)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)




Tom Kyte
August 16, 2011 - 4:54 pm UTC

No rows flowed out of this, it found almost 170,000 *candidate* rows (rows that matched part of the predicate), but after investigating further - we found the other part of the predicate wasn't satisfied for any of the 170,000 rows.

So, you waited a long time to find out "no data found" - because there were 170,000 rows in one of the tables (you were querying a view apparently) and when we joined to the other table - we discovered NONE of them actually matched the entire predicate.

tkprof..

Manoj Kaparwan, August 16, 2011 - 8:23 am UTC

Tom
It is the 'COUNT STOPKEY' operation that limits the rows.

So

169690 NESTED LOOPS OUTER
169690 TABLE ACCESS BY INDEX ROWID TABLE_TASK
169690 INDEX RANGE SCAN IND_S_TASK_TITLE (object id 407714)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)


If, in another plan, the "INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)" row source operation were an INDEX UNIQUE SCAN instead, would we get rows flowing out immediately, as soon as a match is found?


Could you please help me understand the row source operations in the plan below, when we limit the number of rows (rownum <= 200)?
I have the tkprof below for the original query without the filter.




tkprof:


SELECT * FROM ( SELECT objid FROM v ) WHERE
ROWNUM <= 200

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.86 0.93 0 3 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 15 0.04 0.12 83 3538 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 0.90 1.06 83 3541 0 200

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 44

Rows Row Source Operation
------- ---------------------------------------------------
200 COUNT STOPKEY
200 NESTED LOOPS OUTER
200 NESTED LOOPS OUTER
200 NESTED LOOPS
200 NESTED LOOPS OUTER
200 NESTED LOOPS OUTER
200 NESTED LOOPS
200 NESTED LOOPS
200 NESTED LOOPS
200 NESTED LOOPS
200 TABLE ACCESS FULL TABLE_TASK
200 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)
200 INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
0 INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)
0 INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
14 VIEW PUSHED PREDICATE
14 NESTED LOOPS
200 INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)
14 INLIST ITERATOR
14 INDEX RANGE SCAN FA_TASK_NAME_UNIQUE (object id 407740)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)




How is the row limit applied to the first row source operation when the 'COUNT STOPKEY' operation sits at the top of the plan?


200 NESTED LOOPS
200 TABLE ACCESS FULL TABLE_TASK
200 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)



Tom Kyte
August 16, 2011 - 5:23 pm UTC

I don't know what you mean here - you cannot have it applied at the first row source operation; that would give you what is known as "the wrong answer".

You have a complex view (joins and all)

You have asked for the first 200 rows from the view.


This is entirely different from "get the first 200 rows from this table, then join to these other tables and return the result".
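The stopkey is still applied as rows flow out of the whole plan, though: with a nested-loops plan each joined row can be produced and counted immediately, so the row sources underneath stop early. A rough sketch of that pipelined behavior using Python generators in place of row sources (purely illustrative, not Oracle's implementation; the table and index here are made up):

```python
import itertools

rows_scanned = 0

def full_scan(table):
    # row source: yields rows one at a time, like TABLE ACCESS FULL
    global rows_scanned
    for row in table:
        rows_scanned += 1
        yield row

def nested_loops(outer, index):
    # row source: probes a unique "index" (a dict) for each outer row
    for row in outer:
        match = index.get(row)
        if match is not None:
            yield (row, match)

table = range(100_000)                    # a large "table"
index = {i: i for i in range(100_000)}    # a unique "index" on it

# COUNT STOPKEY: the consumer pulls only 200 rows through the pipeline,
# so the full scan underneath produces about 200 rows and then stops
first_200 = list(itertools.islice(nested_loops(full_scan(table), index), 200))

print(len(first_200), rows_scanned)
```

With a sort-merge or hash join in the same position, the sort or hash step must consume its entire input before the first row can flow out, which is why the CHOOSE plan below took minutes instead of seconds.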

tkprof..

Manoj Kaparwan, August 17, 2011 - 1:39 am UTC

Tom
Thanks.

Further, re-running a test case to clarify my question.





A) If we choose FIRST_ROWS, the optimizer chooses a plan best suited to fetching the first rows as soon as possible.
We got the 200 rows in just Elapsed: 00:00:05.89.
The first join chosen is below (favouring the first rows coming out quickly):

200 NESTED LOOPS
200 TABLE ACCESS FULL TABLE_TASK
200 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)




B) If we choose CHOOSE, the optimizer did not favour fetching the first rows quickly, hence we waited longer.
We got the 200 rows in Elapsed: 00:04:52.80.

The first join chosen is a sort-merge (so that is where we waited most):
1630277 MERGE JOIN
3054314 INDEX FULL SCAN SYS_C0093236 (object id 405979)
1630277 SORT JOIN
1664063 TABLE ACCESS FULL TABLE_TASK




questions.

q1) Is it fair to say that if the optimizer knows that fetching the first rows is the goal, it will choose a plan which favours fetching the first rows quickly from a complex view?

q2) How was the processing of rows beyond the first 200 stopped in case (A) above, starting from the first join (when COUNT STOPKEY is the topmost row source operation)? In other words, does the optimizer know that the index unique scan (SYS_C0093236) and the table full scan need to stop once we have 200 rows out?

200 NESTED LOOPS
200 TABLE ACCESS FULL TABLE_TASK
200 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)





Details below for both the above cases (A) and (B)




TKPROF - for (A):

SELECT *
FROM
( SELECT /*+FIRST_ROWS*/ objid FROM v ) WHERE ROWNUM
<= 200


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 1.02 1.11 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 15 0.05 0.36 86 3538 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17 1.07 1.47 86 3538 0 200

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 44

Rows Row Source Operation
------- ---------------------------------------------------
200 COUNT STOPKEY
200 NESTED LOOPS OUTER
200 NESTED LOOPS OUTER
200 NESTED LOOPS
200 NESTED LOOPS OUTER
200 NESTED LOOPS OUTER
200 NESTED LOOPS
200 NESTED LOOPS
200 NESTED LOOPS
200 NESTED LOOPS
200 TABLE ACCESS FULL TABLE_TASK
200 INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)
200 INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
0 INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)
0 INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)
200 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
14 VIEW PUSHED PREDICATE
14 NESTED LOOPS
200 INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)
14 INLIST ITERATOR
14 INDEX RANGE SCAN FA_TASK_NAME_UNIQUE (object id 407740)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 15 0.00 0.00
db file scattered read 3 0.00 0.00
db file sequential read 66 0.01 0.30
SQL*Net message from client 15 0.19 2.89




TKPROF - for (B):

SELECT *
FROM
( SELECT /*+choose*/ objid FROM v ) WHERE ROWNUM <= 200


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 1.33 1.35 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 15 63.29 280.32 205625 6880436 175 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18 64.62 281.67 205625 6880436 175 200

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 44

Rows Row Source Operation
------- ---------------------------------------------------
200 COUNT STOPKEY
200 NESTED LOOPS OUTER
200 HASH JOIN OUTER
1630277 NESTED LOOPS
1630277 NESTED LOOPS OUTER
1630277 NESTED LOOPS OUTER
1630277 NESTED LOOPS
1630277 NESTED LOOPS
1630277 NESTED LOOPS
1630277 MERGE JOIN
3054314 INDEX FULL SCAN SYS_C0093236 (object id 405979)
1630277 SORT JOIN
1664063 TABLE ACCESS FULL TABLE_TASK
1630277 INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)
1630277 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
1630277 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
2226 INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)
3312 INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)
1630277 INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)
89205 VIEW
89205 NESTED LOOPS
89205 INDEX FAST FULL SCAN FA_TASK_NAME_UNIQUE (object id 407740)
89205 INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)
0 INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 15 0.00 0.00
db file sequential read 10910 0.05 15.99
db file scattered read 11015 0.38 100.27
direct path write 27753 0.00 0.05
direct path read 4349 0.00 0.01
latch free 4 0.02 0.08
SQL*Net message from client 15 0.19 2.89



Tom Kyte
August 17, 2011 - 4:10 am UTC

use gather_plan_statistics and dbms_xplan.

we need to see estimated row counts vs actuals - seeing just the actuals doesn't give us the full picture.

http://jonathanlewis.wordpress.com/2006/11/09/dbms_xplan-in-10g/

tkprof...

Manoj Kaparwan, August 17, 2011 - 7:46 am UTC

Tom

we are at 9.2.0.8

so we could not use dbms_xplan.display_cursor to display estimated and actual rows together.

Here is the plan for case (A), using FIRST_ROWS:

SQL> explain plan for
  2  SELECT *
FROM
 ( SELECT /*+gather_plan_statistics FIRST_ROWS*/  objid FROM v  ) WHERE  ROWNUM <= 200  3    4
  5  ;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------
| Id  | Operation                   |  Name                         | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                               |   200 | 16600 |  3341K|
|*  1 |  COUNT STOPKEY              |                               |       |       |       |
|   2 |   NESTED LOOPS OUTER        |                               |  1664K|   131M|  3341K|
|   3 |    NESTED LOOPS OUTER       |                               |  1664K|   111M|  3341K|
|   4 |     NESTED LOOPS            |                               |  1664K|   101M|  1677K|
|   5 |      NESTED LOOPS OUTER     |                               |  1664K|    95M|  1677K|
|   6 |       NESTED LOOPS OUTER    |                               |  1664K|    88M|  1677K|
|   7 |        NESTED LOOPS         |                               |  1664K|    82M|  1677K|
|   8 |         NESTED LOOPS        |                               |  1664K|    76M|  1677K|
|   9 |          NESTED LOOPS       |                               |  1664K|    69M|  1677K|
|  10 |           NESTED LOOPS      |                               |  1664K|    63M|  1677K|
|  11 |            TABLE ACCESS FULL| TABLE_TASK                    |  1664K|    52M| 13433 |
|* 12 |            INDEX UNIQUE SCAN| SYS_C0093236                  |     1 |     7 |     1 |
|* 13 |           INDEX UNIQUE SCAN | SYS_C0092995                  |     1 |     4 |       |
|* 14 |          INDEX UNIQUE SCAN  | SYS_C0092596                  |     1 |     4 |       |
|* 15 |         INDEX UNIQUE SCAN   | SYS_C0092596                  |     1 |     4 |       |
|* 16 |        INDEX UNIQUE SCAN    | SYS_C0092853                  |     1 |     4 |       |
|* 17 |       INDEX UNIQUE SCAN     | SYS_C0093023                  |     1 |     4 |       |
|* 18 |      INDEX UNIQUE SCAN      | SYS_C0092596                  |     1 |     4 |       |
|  19 |     VIEW PUSHED PREDICATE   | TABLE_X_TASK_EQUIP_TYPE_VIEW  |     1 |     6 |     1 |
|  20 |      NESTED LOOPS           |                               |     1 |    27 |     4 |
|* 21 |       INDEX UNIQUE SCAN     | SYS_C0093238                  |     1 |     6 |     2 |
|  22 |       INLIST ITERATOR       |                               |       |       |       |
|* 23 |        INDEX RANGE SCAN     | FA_TASK_NAME_UNIQUE           |     1 |    21 |     2 |
|* 24 |    INDEX RANGE SCAN         | X_ENG_TASK_INST2TASK_IDX      |     1 |    13 |       |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=200)
  12 - access("TABLE_CONDITION"."OBJID"="TABLE_TASK"."TASK_STATE2CONDITION")
  13 - access("TABLE_USER"."OBJID"="TABLE_TASK"."TASK_OWNER2USER")
  14 - access("TABLE_GBST_TYPE"."OBJID"="TABLE_TASK"."TYPE_TASK2GBST_ELM")
  15 - access("TABLE_GBST_PRI"."OBJID"="TABLE_TASK"."TASK_PRIORITY2GBST_ELM")
  16 - access("TABLE_QUEUE"."OBJID"(+)="TABLE_TASK"."TASK_CURRQ2QUEUE")
  17 - access("TABLE_WIPBIN"."OBJID"(+)="TABLE_TASK"."TASK_WIP2WIPBIN")
  18 - access("TABLE_GBST_STAT"."OBJID"="TABLE_TASK"."TASK_STS2GBST_ELM")
  21 - access("TABLE_TASK"."OBJID"="TABLE_TASK"."OBJID")
  23 - access("TABLE_TASK"."OBJID"="TABLE_FA_TASK"."FA_TASK2TASK" AND
              ("TABLE_FA_TASK"."ATTRIBUTE_NAME"='neosa_equip_or_svc_type' OR
              "TABLE_FA_TASK"."ATTRIBUTE_NAME"='neosa_equip_type' OR
              "TABLE_FA_TASK"."ATTRIBUTE_NAME"='x_assoc_equipment_type'))
  24 - access("TABLE_TASK"."OBJID"="TABLE_X_ENG_TASK_INST"."X_ENG_TASK_INST2TASK"(+))

Note: cpu costing is off

50 rows selected.







for case (B) - using CHOOSE:

SQL> explain plan for
  2  SELECT *
FROM
 ( SELECT /*+gather_plan_statistics CHOOSE*/  objid FROM v  ) WHERE  ROWNUM <= 200  3    4
  5  ;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------
| Id  | Operation                    |  Name                         | Rows  | Bytes |TempSpc| Cost  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                               |   200 | 16600 |       | 70602 |
|*  1 |  COUNT STOPKEY               |                               |       |       |       |       |
|   2 |   NESTED LOOPS OUTER         |                               |  1664K|   131M|       | 70602 |
|*  3 |    HASH JOIN OUTER           |                               |  1664K|   111M|   122M| 70602 |
|   4 |     NESTED LOOPS             |                               |  1664K|   101M|       | 41894 |
|   5 |      NESTED LOOPS OUTER      |                               |  1664K|    95M|       | 41894 |
|   6 |       NESTED LOOPS OUTER     |                               |  1664K|    88M|       | 41894 |
|   7 |        NESTED LOOPS          |                               |  1664K|    82M|       | 41894 |
|   8 |         NESTED LOOPS         |                               |  1664K|    76M|       | 41894 |
|   9 |          NESTED LOOPS        |                               |  1664K|    69M|       | 41894 |
|  10 |           MERGE JOIN         |                               |  1664K|    63M|       | 41894 |
|  11 |            INDEX FULL SCAN   | SYS_C0093236                  |  3054K|    20M|       |  7257 |
|* 12 |            SORT JOIN         |                               |  1664K|    52M|   179M| 34637 |
|  13 |             TABLE ACCESS FULL| TABLE_TASK                    |  1664K|    52M|       | 13433 |
|* 14 |           INDEX UNIQUE SCAN  | SYS_C0092995                  |     1 |     4 |       |       |
|* 15 |          INDEX UNIQUE SCAN   | SYS_C0092596                  |     1 |     4 |       |       |
|* 16 |         INDEX UNIQUE SCAN    | SYS_C0092596                  |     1 |     4 |       |       |
|* 17 |        INDEX UNIQUE SCAN     | SYS_C0092853                  |     1 |     4 |       |       |
|* 18 |       INDEX UNIQUE SCAN      | SYS_C0093023                  |     1 |     4 |       |       |
|* 19 |      INDEX UNIQUE SCAN       | SYS_C0092596                  |     1 |     4 |       |       |
|  20 |     VIEW                     | TABLE_X_TASK_EQUIP_TYPE_VIEW  |   101K|   596K|       |  3197 |
|  21 |      NESTED LOOPS            |                               |   101K|  2682K|       |  3197 |
|* 22 |       INDEX FAST FULL SCAN   | FA_TASK_NAME_UNIQUE           |   101K|  2086K|       |  2948 |
|* 23 |       INDEX UNIQUE SCAN      | SYS_C0093238                  |     1 |     6 |       |     1 |
|* 24 |    INDEX RANGE SCAN          | X_ENG_TASK_INST2TASK_IDX      |     1 |    13 |       |       |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=200)
   3 - access("TABLE_TASK"."OBJID"="TABLE_X_TASK_EQUIP_TYPE_VIEW"."TASK_OBJID"(+))
  12 - access("TABLE_CONDITION"."OBJID"="TABLE_TASK"."TASK_STATE2CONDITION")
       filter("TABLE_CONDITION"."OBJID"="TABLE_TASK"."TASK_STATE2CONDITION")
  14 - access("TABLE_USER"."OBJID"="TABLE_TASK"."TASK_OWNER2USER")
  15 - access("TABLE_GBST_TYPE"."OBJID"="TABLE_TASK"."TYPE_TASK2GBST_ELM")
  16 - access("TABLE_GBST_PRI"."OBJID"="TABLE_TASK"."TASK_PRIORITY2GBST_ELM")
  17 - access("TABLE_QUEUE"."OBJID"(+)="TABLE_TASK"."TASK_CURRQ2QUEUE")
  18 - access("TABLE_WIPBIN"."OBJID"(+)="TABLE_TASK"."TASK_WIP2WIPBIN")
  19 - access("TABLE_GBST_STAT"."OBJID"="TABLE_TASK"."TASK_STS2GBST_ELM")
  22 - filter("TABLE_FA_TASK"."ATTRIBUTE_NAME"='neosa_equip_or_svc_type' OR
              "TABLE_FA_TASK"."ATTRIBUTE_NAME"='neosa_equip_type' OR
              "TABLE_FA_TASK"."ATTRIBUTE_NAME"='x_assoc_equipment_type')
  23 - access("TABLE_TASK"."OBJID"="TABLE_FA_TASK"."FA_TASK2TASK")
  24 - access("TABLE_TASK"."OBJID"="TABLE_X_ENG_TASK_INST"."X_ENG_TASK_INST2TASK"(+))

Note: cpu costing is off

51 rows selected.

Tom Kyte
August 17, 2011 - 4:27 pm UTC

do the work for me here - merge the actuals with the estimated to simulate the dbms_xplan display format and format it nicely in a report so we can look at it and actually read it.

and bear in mind that I'm about to start a 9 hour seminar in sydney and then will immediately be getting on a plane to Scotland - I won't be online for a few days ;)

when you get the report - you'll be looking for large (orders of magnitude) differences between estimated and actual row counts - once we find those - we ask the question "why" and then "what can we do about it" - such as

o getting better statistics
o getting more statistics (not available for you on really old software)
o using dynamic sampling to get more statistics
o using a sql profile (again, not available for you on really really old software)

and so on
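That estimated-vs-actual comparison can be mechanized. A small sketch, assuming each plan step is an (operation, estimated, actual) tuple (the function name is made up; the sample rows are taken from the FIRST_ROWS plan in this thread):

```python
def misestimates(steps, factor=100):
    # Flag plan steps where estimated and actual row counts differ by
    # `factor` (two orders of magnitude by default) in either direction.
    # Zero counts are treated as 1 to avoid division by zero.
    # Caveat: rows cut off early by COUNT STOPKEY will also show up here,
    # so flagged steps still need a human look.
    flagged = []
    for op, est, act in steps:
        ratio = max(est, 1) / max(act, 1)
        if ratio >= factor or ratio <= 1.0 / factor:
            flagged.append((op, est, act))
    return flagged

# a few rows from the FIRST_ROWS report in this thread
steps = [
    ("TABLE ACCESS FULL TABLE_TASK",  1_664_000, 200),
    ("INDEX UNIQUE SCAN SYS_C0093236",        1, 200),
    ("VIEW PUSHED PREDICATE",                 1,  14),
]
for op, est, act in misestimates(steps):
    print(op, est, act)
```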

tkprof...

Manoj Kaparwan, August 17, 2011 - 6:37 pm UTC

Thank you, Tom, for your time.

Yes, I will merge them together to simulate the dbms_xplan format and post it here.
:)


kind regards


tkprof...

Manoj Kaparwan, August 23, 2011 - 1:24 am UTC

Tom
here is the merged one.

(E-Rows: estimated rows, from the plan; A-Rows: actual rows, from tkprof)


First rows:

---------------------------------------------------------------------------------------------
|Row Source Operation                                                  |E-Rows    |  A-Rows  |
----------------------------------------------------------------------------------------------
|  SELECT STATEMENT                                                    |  200     |          |
|  COUNT STOPKEY                                                       |          |   200    |
|   NESTED LOOPS OUTER                                                 | 1664K    |   200    |
|    NESTED LOOPS OUTER                                                | 1664K    |   200    |
|     NESTED LOOPS                                                     | 1664K    |   200    |
|      NESTED LOOPS OUTER                                              | 1664K    |   200    |
|       NESTED LOOPS OUTER                                             | 1664K    |   200    |
|        NESTED LOOPS                                                  | 1664K    |   200    |
|         NESTED LOOPS                                                 | 1664K    |   200    |
|          NESTED LOOPS                                                | 1664K    |   200    |
|           NESTED LOOPS                                               | 1664K    |   200    |
|            TABLE ACCESS FULL TABLE_TASK                              | 1664K    |   200    |
|            INDEX UNIQUE SCAN SYS_C0093236 (object id 405979)         |     1    |   200    |
|           INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)          |     1    |   200    |
|          INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)           |     1    |   200    |
|         INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)            |     1    |   200    |
|        INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)             |     1    |     0    |
|       INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)              |     1    |     0    |
|      INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)               |     1    |   200    |
|     VIEW PUSHED PREDICATE                                            |     1    |    14    |
|      NESTED LOOPS                                                    |     1    |    14    |
|       INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)              |     1    |   200    |
|       INLIST ITERATOR                                                |          |    14    |
|        INDEX RANGE SCAN FA_TASK_NAME_UNIQUE (object id 407740)       |     1    |    14    |
|    INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)      |     1    |     0    |
----------------------------------------------------------------------------------------------



Choose:

------------------------------------------------------------------------------------------------
|Row Source Operation                                                  | Rows     | A-Rows      |
------------------------------------------------------------------------------------------------
|  Select Statement                                                    |  200     |             |
|  COUNT STOPKEY                                                       |          |     200     |
|  NESTED LOOPS OUTER                                                  | 1664K    |     200     |
|    HASH JOIN OUTER                                                   | 1664K    |     200     |
|     NESTED LOOPS                                                     | 1664K    | 1630277     |
|      NESTED LOOPS OUTER                                              | 1664K    | 1630277     |
|       NESTED LOOPS OUTER                                             | 1664K    | 1630277     |
|        NESTED LOOPS                                                  | 1664K    | 1630277     |
|         NESTED LOOPS                                                 | 1664K    | 1630277     |
|          NESTED LOOPS                                                | 1664K    | 1630277     |
|           MERGE JOIN                                                 | 1664K    | 1630277     |
|               INDEX FULL SCAN SYS_C0093236 (object id 405979)        | 3054K    | 3054314     |
|            SORT JOIN                                                 | 1664K    | 1630277     |
|             TABLE ACCESS FULL TABLE_TASK                             | 1664K    | 1664063     |
|           INDEX UNIQUE SCAN SYS_C0092995 (object id 403822)          |     1    | 1630277     |
|          INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)           |     1    | 1630277     |
|         INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)            |     1    | 1630277     |
|        INDEX UNIQUE SCAN SYS_C0092853 (object id 403226)             |     1    |    2226     |
|       INDEX UNIQUE SCAN SYS_C0093023 (object id 403914)              |     1    |    3312     |
|      INDEX UNIQUE SCAN SYS_C0092596 (object id 402255)               |     1    | 1630277     |
|     VIEW                                                             |  101K    |   89205     |
|      NESTED LOOPS                                                    |  101K    |   89205     |
|       INDEX FAST FULL SCAN FA_TASK_NAME_UNIQUE (object id 407740)    |  101K    |   89205     |
|       INDEX UNIQUE SCAN SYS_C0093238 (object id 405996)              |     1    |   89205     |
|    INDEX RANGE SCAN X_ENG_TASK_INST2TASK_IDX (object id 404206)      |     1    |       0     |
------------------------------------------------------------------------------------------------

Tom Kyte
August 30, 2011 - 3:09 pm UTC

which of these have you tried so far?

o getting better statistics
o getting more statistics (not available for you on really old software)
o using dynamic sampling to get more statistics
o using a sql profile (again, not available for you on really really old software)

try dynamic sampling.
verify your statistics are representative of your data.

you see the estimated counts are way off...
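For what it's worth, a minimal sketch of the dynamic sampling and statistics suggestions - the predicate columns here are made up for illustration, but TABLE_TASK is the table from the plans above:

```sql
-- Ask the optimizer to sample the table at parse time instead of
-- trusting (possibly stale) statistics.  Level 4 sampling also helps
-- detect correlation between predicate columns.
select /*+ dynamic_sampling(t 4) */ *
  from table_task t
 where status = 'OPEN'          -- hypothetical columns, for illustration
   and task_type = 42;

-- And to verify/refresh the statistics themselves:
begin
   dbms_stats.gather_table_stats(
      ownname => user,
      tabname => 'TABLE_TASK',
      cascade => true );        -- gather index statistics as well
end;
/
```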

What if execution time is really long?

Arvind Mishra, October 25, 2011 - 10:03 pm UTC

Hello Tom,

If a query's execution time is really long and we are unable to generate a tkprof report within the time given to us for support, then how do we resolve the tuning problem? You said earlier that explain plan and autotrace can sometimes give us the wrong execution plan. What would be the best possible approach in this case?
Tom Kyte
October 26, 2011 - 5:20 am UTC

sql monitor in 11g provides real time feedback - you would be able to see estimated cardinalities being way overshot as they happen - you don't have to wait for the entire query to finish to diagnose it.


v$sql_plan will give you the right plan - and you only need to "start" the query to get that populated. The query does not have to finish
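To illustrate - a sketch, assuming 11g (the SQL Monitor report requires the Tuning Pack license, and the sql_id here is made up):

```sql
-- Watch a long-running query while it is still executing (11g):
select dbms_sqltune.report_sql_monitor(
          sql_id => 'abcd1234efgh5',   -- hypothetical sql_id
          type   => 'TEXT' )
  from dual;

-- Or pull the real plan from the shared pool - the query only has to
-- have *started*, not finished:
select *
  from table( dbms_xplan.display_cursor( 'abcd1234efgh5', null, 'TYPICAL' ) );
```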

What if execution time is really long

Arvind Mishra, October 26, 2011 - 5:29 am UTC

Thanks Tom

TKPROF Option INSERT

Prashant, December 07, 2011 - 3:06 am UTC

Hi Tom,

I'm using the following command to store data for carrying out analysis and benchmarking

tkprof filename1.trc filename.txt INSERT=filename3.sql

I store data into tables and later use that for comparison with my benchmark (that I would have created from earlier runs..I'm only doing a single user compare).

It would have been useful to see the execution plans, or best of all the output from dbms_xplan.display_cursor, if only I had the sql_id. I could then have run a script to populate my tables, but we get neither the execution plan nor the sql_id from the tkprof output.

Can you please suggest?






Tom Kyte
December 07, 2011 - 1:17 pm UTC

It would have been useful to see the execution plans or in the best case output
from dbms_xplan.display_cursor if only I had the sql_id,


I did not understand that.



but in any case, the plans are in AWR - can you not just get them from there?

TKPROF

Prashant, December 07, 2011 - 10:49 pm UTC

Hi Tom

I was looking at analyzing my results using tkprof.

I wasn't able to get the execution plan stored with the results. I thought it would have been easier if I could just use one tool to do that. So if tkprof stored the sql_id then I could have at least run one more query on the v$sql_plan tables to get it.

I haven't looked into AWR in detail. But if it does and meets my goals for storing stats and plans for benchmarking and comparison purposes I will use that.

Thanks
Prashant

Tom Kyte
December 08, 2011 - 12:47 pm UTC

tkprof would have the plans if you close the cursor/exit the session. Can you do that?


tkprof would have the sql for you - but not the sql_id.


AWR definitely has that.

Thanks

Prashant, December 09, 2011 - 6:02 am UTC

Yes Tom. I have closed the session and have the plans in tkprof.
My goal however was to insert this into a table for easy comparison. If I use EXPLAIN= it will not give me the actual execution plan, hence I asked for your inputs.

Thanks for your suggestion on AWR and I will look into it. I am a little behind in adopting it.

Thanks
Prashant
Tom Kyte
December 11, 2011 - 2:41 pm UTC

but the plans are already in a table, if you have the sql in one table - you can easily get the sql_id for it and just join to the AWR information.
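A sketch of what that join might look like, assuming AWR is licensed and in use - the comment tag and sql_id are placeholders:

```sql
-- Find the sql_id for a captured statement by its text...
select sql_id
  from dba_hist_sqltext
 where sql_text like 'SELECT /* my_benchmark_tag */%';

-- ...then pull its plan history straight out of AWR:
select *
  from table( dbms_xplan.display_awr( 'abcd1234efgh5' ) );  -- hypothetical sql_id
```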

Thanks

Prashant, December 11, 2011 - 9:57 pm UTC

Yes it makes sense. I faced another problem with the INSERT= option. It generated a sql file that I planned to run from sql prompt. Some of the queries are > 4000 characters and I get the string literal too long error.

It's difficult for me to go and update the individual queries (I have 200+ queries) to use bind variables or PL/SQL.

Is there any quick fix?

Thanks
Prashant
Tom Kyte
December 12, 2011 - 6:05 am UTC

no. I think everything you actually want or need is already in ASH - you might want to study what you have available there - truly.

statistics_level and cursors closed or not...?

Andrew, January 21, 2012 - 4:43 am UTC

I have been researching extended SQL trace (10046) issues.
There are some who recommend setting statistics_level = ALL at the session level and also disconnecting the session.

Also within this thread:
//
Followup February 6, 2008 - 9am Central time zone:
that too - if you disable trace before the cursor is closed - it'll not emit the stat records...
//

What then should be the adopted BEST PRACTICE when the application is using a CONNECTION POOL...?

(a) Should the code instrumentation modules switch statistics_level=ALL at the time a connection is acquired and then set it to TYPICAL just before releasing the connection?
(b) what about the cursors when the processing control passes back to JAVA when the connection is released back to the pool...? or there is something one should be aware of and do in the code instrumentation...?

Thank you in advance for your valuable advice.

Regards
A

Tom Kyte
January 21, 2012 - 10:35 am UTC

disconnecting the session was almost mandatory prior to 11g, now in 11g with statistics level typical, the stat information is emitted in the trace file for each execution of the cursor.

I find typical has gotten me most all of what I need.

In fact, you'll likely find that ASH gets you most all of what you need in many cases these days, sql_trace becoming something you might not need to drop down into.


I have no problem with typical and with 11g, the stat information is emitted much more frequently - for each execution instead of at the end.
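For reference, the supported way to get an extended trace (the equivalent of 10046 level 12) without touching the event directly - a minimal sketch:

```sql
-- Current session, with waits and binds:
begin
   dbms_monitor.session_trace_enable( waits => true, binds => true );
end;
/

-- ... run the workload to be traced ...

begin
   dbms_monitor.session_trace_disable;
end;
/
```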

connection pool - tracking a session

Andrew, January 21, 2012 - 5:56 pm UTC

Hi Tom,

Many thanks for your response.

+ + +

I am trying to get some solid info as to how WEBLOGIC behaves
- - but I am not having much luck...

i.e. when one releases a connection is it equivalent to a disconnect in a client-server environment...???

+

As far as instrumentation - - I am referring to a doc published by HOTSOS -

>> SQL Tuning with 10046 Trace Data and DBMS_XPLAN
>> Oracle. Performance. by Mahesh.Vallampati@hotsos.com

+

My question therefore is:
>>
1.) WHEN we use a connection pool - do we get ALL STATS as when all cursors get closed in a Client-Server ENV - or do we need to do some ADDITIONAL WORK..?????
2.) SPECIFICALLY - is ALL the STATS info captured...? - like what happens to the other connection that GRABS this particular Oracle session (SID+Serial#)....?

Or - to put it more clearly:
>>>
What happens to all session-related info - like stats or some package variables etc...?
>>>-
When a JAVA or C# program is done with whatever PLSQL or SQL issued against the DB - does it pick up a new session from the POOL as a CLEAN SLATE or does it inherit some data left off by previous sessions - say WEBLOGIC...???

Sorry I am not clear on this - -
- - when you say ... "disconnecting the session was almost mandatory prior to 11g, now in 11g"
- - what does this exactly mean..???

Specifically when we release the Oracle connection to the pool (say BEA Weblogic pool) - does this mean that it is EXACTLY THE SAME as to say DISCONNECT in a Client-Server environment...????????

+ + +

Is there any easy to understand doc on WEBLOGIC and connection pools behavior in Oracle when it comes to terminating a task - like to say disconnect or EXIT/./???

Thank you
Regards
Andrew

Tom Kyte
January 21, 2012 - 8:12 pm UTC

i.e. when one releases a connection is it equivalent to a disconnect in a
client-server environment...???


no, it is not, they just put the connection back in the pool and the next request for a connection might get that one. There is no logoff, no logon - just the initial logon - and a logoff when the pool decides to release that connection entirely.


As far as instrumentation - - I am referring to a doc published by HOTSOS -

>> SQL Tuning with 10046 Trace Data and DBMS_XPLAN
>> Oracle. Performance. by Mahesh.Vallampati@hotsos.com


hmmm, an email address :)


1.) WHEN we use a connection pool - do we get ALL STATS as when all cursors get
closed in a Client-Server ENV - or do we need to do some ADDITIONAL WORK..?????
2.) SPECIFICALLY - is ALL the STATS info captured...? - like what happens to
the other connection that GRABS this particular Oracle session
(SID+Serial#)....?


it depends on the release - 11g, we emit that stuff after each execution. before that, when the cursor is officially, really, truly, totally closed. Which may or may not happen (due to cursor caching)



What happens to all session-related info - like stats or some package variables
eyc...?


In 11g, stats for individual cursors are emitted from execution to execution. In 10g and before - they are accumulated.

package variables - session state - it depends, in some connection pools (APEX for example with its connection pool) they reset the plsql package state *by default* (but you can change that). For other connection pools, you would have to query the vendor to see what they decided to do - it is entirely up to them.


Sorry I am not clear on this - -
- - when you say ... "disconnecting the session was almost mandatory prior to
11g, now in 11g"
- - what does this exactly mean..???


what I said in its entirety was:

... disconnecting the session was almost mandatory prior to 11g, now in 11g with statistics level typical, the stat information is emitted in the trace file for each execution of the cursor. ...

In 11g, you don't have to disconnect the session in order to get the cursors truly, totally, entirely closed in order to get the STAT information emitted into the trace file. In 11g, we emit that information from execution to execution now. In 10g - in order to get a complete trace file - you almost always wanted to disconnect the session in order to get cursors closed to get the stat information.




connection pool instrumentation - follow-up

Andrew, January 23, 2012 - 2:07 pm UTC

Tom, Many thanks.

<< BTW - The presentation PDF I had quoted from - can be found at: http://www.scoaug.org/SQLTuning.pdf >>

I would like to sum up our exchange on the BEST PRACTICE approach on INSTRUMENTATION to facilitate Oracle tuning while also asking you for your valuable comments/suggestions and corrections - THANK YOU.

My objective is to facilitate flexible instrumentation interface for Oracle developers (plsql/java/C#).

The Approach:
-------------
Developers would just need to follow an ultra-simple coding standard
- - with XQT instrumentation package - as follows:

Identify business JOBS - these may have various modules (java, C#, sql, plsql etc.)
Follow Instrumentation (**) structure as outlined below:
--------------------------------------------------------

==>> Look-up "job_id" from the APPS_CFG table and pass it as prms_set IN OUT parameter

** Connect to Oracle + execute XQT.Begin_Job (prms_set IN OUT varchar2)
==== Apps code that does not need Oracle yet (java, C# etc) goes here...
**== App requests an Oracle session - execute XQT.Begin_Session (prms_set IN OUT varchar2)
====== simple SQL called by JAVA or C# modules - code lines go here...
**== Just before disconnect (release a connection) - exec XQT.End_Session (prms_set IN OUT varchar2)
==== more java/C# etc. code goes here ...
**== App requests another session and calls a complex PLSQL package - exec XQT.Begin_Session ...
====== << The PLSQL package has its own additional instrumentation - calls to XQT.Begin_Module >>
**==== exec XQT.Begin_Module (prms_set IN OUT varchar2, mod_id in varchar2, act_id in varchar2)
====== plsql procedure code goes here...
**==== exec XQT.End_Module - (prms_set IN OUT varchar2, mod_id in varchar2, act_id in varchar2)
====== other PLSQL procedures - ditto...
**== exec XQT.End_Session (prms_set IN OUT varchar2)
==== ditto ... ditto ...
** WHEN All done for a business JOB to be tuned:- then exec XQT.End_Job (prms_set IN OUT varchar2)

All the developers need to do is to store the value of "prms_set" in their JAVA/C# (etc.) code to be then used in all future calls to XQT package regardless of the procedure being called.

Behind the scenes:
------------------
(a) XQT.Begin_Job reads config tables to determine if job_id is to be traced for performance and if TRACE=YES (for this job + time window etc.) for a specific job_id in XQT.APPS_CFG table - a unique CLIENT_IDENTIFIER is generated (SEQUENCE Object) and passed as an IN-OUT prms_set to be used for all future XQT package calls.
(b) All XQT procedures check prms_set and return the control in an instant when trace=OFF
=== It takes under 2 us to verify that the trace is not required and continue with main Apps code.
= = BUT: when trace=ON
(c) XQT.Begin_Session => calls DBMS_Session.set_identifier - as provided in prms_set - acquired in (a)
(d) XQT.End_Session executes DBMS_Session.Clear_identifier
(e) XQT.Begin_Module => calls customized version of DBMS_Application_info (to set module, action)
(f) XQT.End_Module clears module/action - to the former level (prms_set maintains the stack)
(g) Finally XQT.End_job updates a record in XQT.APPS_TRC_HIST table

The above structure can have various additional configuration components and thus appropriate actions.

It can also - possibly...? - support ASH by identifying Oracle sessions of interest...?
The XQT Instrumentation interface can opt for various levels of monitoring the JOBS of interest.
It could resort to a FULL BLOWN 10046 level 12 trace but it can also monitor execution times or facilitate ASH - <<< I would greatly appreciate your suggested recommendation as to how to facilitate ASH here >>>
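A minimal sketch of what steps (c) and (e) above might look like inside such a package - XQT itself is the poster's own design, so every name here is hypothetical:

```sql
create or replace procedure begin_session( p_client_id in varchar2 )
as
begin
   -- tag the session so trcsess can later aggregate its trace data
   dbms_session.set_identifier( p_client_id );
   -- enable tracing for that client identifier - this follows the
   -- identifier across pooled sessions, which is the point here
   dbms_monitor.client_id_trace_enable(
      client_id => p_client_id,
      waits     => true,
      binds     => true );
end;
/

create or replace procedure begin_module( p_module in varchar2,
                                          p_action in varchar2 )
as
begin
   -- record module/action so they show up in v$session and ASH
   dbms_application_info.set_module( module_name => p_module,
                                     action_name => p_action );
end;
/
```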

+

Please comment on the following
===============================

Here are the possible additional options:
-----------------------------------------
(1) XQT.APPS_CFG table may hold a value of CONC_SESSION_LIMIT - against any job_id to be traced within a set processing window.
=== If this is set - then as soon as the identified job starts - XQT.Begin_Job acquires a LOCK and all subsequently requested jobs with the same job_id are held awaiting the one being traced to complete and terminate
(2) The APPS_CFG table may also hold a value of TRC_LIMIT
=== If this is set the XQT instrumentation package ensures that it would not generate more than TRC_LIMIT number of "trace-file sets" - Extended-SQL-Trace file-sets (typically would =1, but can be more although not more than CONC_SESSION_LIMIT)

+

Other issues.

Would you please also comment on the following:
===============================================

(A) Some (including HOTSOS) recommend that TRACEFILE_IDENTIFIER could be set to facilitate easier location of trace files that are related to requested Extended (10046) SQL Trace.

Client-Server (DW) etc = YES.
However - this may not make much sense in a connection pool environment.
I tested an option of setting and resetting of the TRACEFILE_IDENTIFIER (it would be feasible with XQT.Begin_Session and End_Session procedures) however, I have discovered that while a few calls to set / reset this identifier are OK, it is NOT OK when too many (hundreds of such calls) are made. It carries a PENALTY - as it impacts all future DBMS_SESSION calls - as high as a 100-fold increase in their execution time. So, I assume that it may also have a number of other undesirable side-effects...?

With this in mind I am leaning towards an option of forgoing setting TRACEFILE_IDENTIFIER and instead redirecting the destination directory (UDUMP) with a symbolic link (assuming here a UNIX/LINUX environment). This solution also ensures that the trace files will end up on a file system that has adequate capacity. Then when the tracing is finished for that day - the destination of udump may be (if required) reset to its original or another sub-dir i.e. file system.

Once we have trace files at a dedicated sub-directory (with most of them being of interest) TRCSESS could be used for all files stored there to be aggregated with a specified CLIENT_IDENTIFIER.
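That aggregation step might look like this - a sketch; the directory, client identifier and file names are made up:

```shell
# Combine every trace file emitted under our job's client identifier
# into one file, then run tkprof over the result.
cd /u01/app/oracle/admin/ORCL/udump      # hypothetical udump location
trcsess output=job_42.trc clientid=JOB_42 *.trc
tkprof job_42.trc job_42.txt sys=no sort=exeela
```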

+

(B) The author of the HOTSOS presentation ( http://www.scoaug.org/SQLTuning.pdf ) recommends setting another Oracle parameter => STATISTICS_LEVEL=ALL - claiming its importance as this would set _rowsource_execution_statistics=TRUE - "TYPICAL" being inadequate - quoting him: "the time values do not roll up properly..."

The same author however claims that STATISTICS_LEVEL should NOT be set at the instance level.

I guess we may encounter a problem here - when a connection pool is used, as then one session after another would need to get this parameter set, and the chances are high that with a business job that makes many calls to acquire a connection from the pool, each call to XQT.Begin_Session would need to set this parameter.

In case we end up with a significant majority of sessions in the pool having this level set to "ALL", we may run into some nasty side-effects ... YES/NO...?

Two possible solutions come to mind =
==1. To reset this parameter back to TYPICAL when XQT.End_Session is called
==2. To reset for ALL affected sessions by a DBA - using ALTER SYSTEM command set ... =TYPICAL.

Not sure which one to adopt...?

+

(C) The calls to DBMS_SYSTEM.KSDWRT...?
I have found several who advertise resorting to this (including HOTSOS/Method R) - but I have also found out that you Tom - personally had stayed away from this ("undocumented" Oracle procedure).

You must have very good reasons...? or you do not as a rule resort to undocumented Oracle procedures...?

I would have no problems writing my own custom procedure using UTL_FILE (like you have recommended many times on your web-site).
<<< Although I saw Donald Burleson suggesting to use UTL_FILE to write directly into ALERTLOG - I would rather not resort to such options - - would you...??? >>>

However - if one would like to use HOTSOS (or Method R) Profiler then calls to DBMS_SYSTEM.KSDWRT and passing module/action etc info would carry some additional benefits - as HOTSOS product would extract this data from TRC files and utilize... your thoughts...?

+

(D) The proposed XQT instrumentation (as mentioned earlier) would include a wrapped custom version of DBMS_APPLICATION_INFO package. With this I can see the value of storing in CLIENT_INFO the fact that a particular Oracle session that resides in the pool has already been used during our trace gathering and therefore a call to XQT.Begin_Session might skip setting such parameters as TIMED_STATISTICS or STATISTICS_LEVEL - so there would be no need to do it again and again...
Sadly - not when it comes to the CLIENT_IDENTIFIER as this would have to be reset by XQT.End_Session.

The reason for this parameter (CLIENT_IDENTIFIER) to be reset is that once an Oracle session is released back to the pool - other calls requesting a session from the pool (that have nothing to do with our job_id and trace) would clutter the same trace file - - AM I CORRECT...?

(BTW: This is what I have communicated to HOTSOS - as they had neglected to reset it in their ILO)

+

(E) Finally there is an issue with Oracle 10gR2 (still being used by a significant number of customers) - as to the emitting of STAT lines only when all relevant cursors are closed and "truly completely closed"...
==>> (as you had put it) in one of your responses.

So,
I am not sure how to tackle this problem.
I am greatly relieved to hear that the STAT lines get emitted with 11gR2 per call rather than when cursors are closed - which is guaranteed with DISCONNECT - but not possible with a connection pool.

I have in vain been searching for any alternative << like ALTER SYSTEM - to force closure of cursors on sessions of interest >> but without any luck.

Do you have a suggestion on how to best handle this...?

Or ... I must accept the fact that some STAT lines would NOT get captured in TRC files - even if developers are diligent enough to close all cursors, as PLSQL may persist and keep the cursors open or pending to be closed but not really TRULY closed...???

Sorry for a very long response - but I was trying to get a comprehensive approach - and cover all angles.

Thank you

Regards
Andrew

Tom Kyte
January 23, 2012 - 3:43 pm UTC

sorry - I am not at all familiar with XQT - don't know what it is, or what it does, or how it works.

connection pool instrumentation - follow-up

Andrew, January 23, 2012 - 5:58 pm UTC

Dear Tom,

I am sorry.
XQT package is what I have been working on.
>> It is an acronym of eXtended sQl Trace <<

ILO - is instrumentation s/w originally developed by HOTSOS - later entered into Open Source - but not enhanced since (i.e. in the last 4 years).

Having considered the above - I would greatly appreciate your comments - regarding the proposed solutions as well as the issues raised (A) through (E) + your opinion as to the approach that I have adopted.

If you have 5-10 mins time and could respond...

Kind regards
Andrew

connection pool instrumentation - follow-up

Andrew, January 24, 2012 - 12:51 pm UTC

JAN 24, 2012

Tom,

To make things simpler - I have restructured my former communication.

<<< XQT does NOT matter here - so let's forget it >>>

The core of my queries-set that I had sent earlier is as follows:

(A)
--- TRACEFILE_IDENTIFIER
--- ====================
Some (including HOTSOS) recommend that TRACEFILE_IDENTIFIER could be set to facilitate easier location of trace files that are related to requested Extended (10046) SQL Trace.

Client-Server (DW) etc = YES.
However - this may not make much sense in a connection pool environment.
I tested an option of setting and resetting of the TRACEFILE_IDENTIFIER however, I have discovered that while a few calls to set / reset this identifier are OK, it is NOT OK when too many (hundreds of such calls) are made. It carries a PENALTY - as it impacts all future DBMS_SESSION calls (e.g. DBMS_SESSION.set_identifier) - as high as a 100-fold increase in their execution time.
>>> I may assume that it may also have a number of other undesirable side-effects...?

My approach:
I am leaning towards an option of forgoing setting TRACEFILE_IDENTIFIER - instead redirecting the destination directory (UDUMP) with a symbolic link (assuming here a UNIX/LINUX environment). This solution also ensures that the trace files will end up on a file system that has adequate capacity. Then when the tracing is finished for that day - the destination of udump may be (if required) reset to its original or another sub-dir i.e. file system.

Your opinion..?
---------------

+

(B)
--- STATISTICS_LEVEL=ALL
--- ====================
The author of the HOTSOS presentation ( http://www.scoaug.org/SQLTuning.pdf ) recommends setting another Oracle parameter => STATISTICS_LEVEL=ALL - claiming its importance.
In his opinion "TYPICAL" is not adequate; ALL facilitates setting of _rowsource_execution_statistics=TRUE
//quote from his presentation: "the time values do not roll up properly..."

The same author however claims that STATISTICS_LEVEL should NOT be set at the instance level.

IMHO - we may encounter a problem here - when a connection pool is used, as then one session after another would need to get this parameter set, and the chances are high that with a business job that makes many calls to acquire a connection from the pool, each call would need to set this parameter.

In case we end up with a significant majority of sessions in the pool having this level set to "ALL", we may run into some nasty side-effects ... ?

My proposed approach:
==1. To reset this parameter back to TYPICAL just before releasing a session back to the pool
==2. To reset for ALL affected sessions by a DBA - using ALTER SYSTEM command set ... =TYPICAL.

Your opinion..?
---------------

+

(C)
--- DBMS_SYSTEM.KSDWRT - to use or not to use - this is a QUESTION.
--- ==================
I have found several who advertise resorting to this (including HOTSOS/Method R) -
- - but I have also noticed Tom, that you personally had stayed away from this ("undocumented" Oracle procedure)... for some reasons...?

What are your reasons...? - or you do not as a rule resort to undocumented Oracle procedures...?

I would have no problems writing my own custom procedure using UTL_FILE (like you have recommended many times on your web-site).

However such a procedure would write to some flat UNIX file - and not to a TRACE file.

So, if one would like to use HOTSOS (or Method R) Profiler then calls to DBMS_SYSTEM.KSDWRT and passing module/action etc info would carry some additional benefits -

With using UTL_FILE and some other than TRC file as an output I would forgo the additional benefit of using HOTSOS Profiler.

Your opinion..?
---------------

Specifically - would you opt for using UTL_FILE and use the current TRC file as an output...?
Would it work..? - would it be safe..?

+

(D)
--- CLIENT_IDENTIFIER
--- =================
As far as I understand connection pools - any packages that are executed retain their global variables.
Similarly parameters set by DBMS_SESSION (Client_Identifier) or by DBMS_APPLICATION_INFO - would stay with that particular session when the APP releases the connection back to the pool.
This would then cause inclusion of some SQL that is unrelated to what we are trying to trace.
CORRECT...?

My approach:
I propose to reset all these session-related parameters before releasing it to the pool.

(BTW: This is what I have communicated to HOTSOS - as they had neglected to reset it in their ILO)

Your opinion..?
---------------

+

(E)
--- STAT lines in 10gR2
--- ===================
I believe that we had agreed that although 11gr2 is OK - the issue remains with Oracle 10gR2.
Emitting STAT lines would not take place unless (as you say) "all cursors are truly and really closed".

How should one tackle this problem when we face 10gR2 + PL/SQL + Connection pool...?

Is there any method to FORCE closure of all cursors for a specific session with 100% degree of certainty.?

Your opinion/advice..?
----------------------

Many thanks

Best wishes
Andrew
Tom Kyte
January 24, 2012 - 1:57 pm UTC

(a)
I am leaning towards an option of forgetting setting TRACEFILE_IDENTIFIER -
instead redirect destination directory (UDUMP) with a symbolic link (assuming
here UNIX/LINUX environment). This solution also ensures that the trace files
will end up on a file system that has adequate capacity. Then when the tracing
is finished for that day - the destination of udump may be (if required) reset
to its original or other sub-dir i.e. file system.


I agree on the tracefile_identifier, but why don't you just alter system set user_dump_dest instead of munging about with symbolic links (which might not work so well on Windows, for example)?
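i.e. something along these lines - the path is made up, and note that in 11g the trace location is governed by diagnostic_dest / the ADR instead:

```sql
-- 10g and before: point new trace files at a file system with room
alter system set user_dump_dest = '/bigfs/oracle/trace' scope = memory;

-- confirm where trace files are currently going:
select value from v$parameter where name = 'user_dump_dest';
```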



(b) is entirely up to you - what do you want to do? One imposes a higher runtime performance penalty than the other. Is typical good enough for your purposes?


(c) What are your reasons...?

it's undocumented. I don't do undocumented things here.

Specifically - would you opt for using UTL_FILE and use the current TRC file as
an output...?
Would it work..? - would it be safe..?


that would not be wise - the current trc file is already opened by another process, you should not open it also and write to it, you'll munge up that file.


(d) if the first step in grabbing a connection involves setting these things, 'unsetting' them is not necessary as a last step.

It would not hurt, but if everything is setting them, unsetting is not truly necessary.


(e)

How should one tackle this problem when we face 10gR2 + PL/SQL + Connection
pool...?

Is there any method to FORCE closure of all cursors for a specific session with
100% degree of certainty?


there isn't a good answer for this, the only way is to end the session.

connection pool instrumentation - CLOSURE

Andrew, January 24, 2012 - 3:48 pm UTC

Tom

Thank you.

NOW I am trying to bring this sub-thread to its CLOSURE...

Please see below and add your final comments... thanks

+

(A) - I forgot about this option << alter system set user_dump_dest >> - SURE it is the best approach I agree and thank you for reminding me of this...

+

(B) - WELL ... as far as STATISTICS_LEVEL is concerned - I am not really sure as to the value of having it set =ALL...??? That's why I asked you.

What do you think...? Is it worth it..? Is it really that important (as the guy from HOTSOS claims) to ensure that "the time values do not roll up properly..." ...???

It appears that HOTSOS (that had focused their entire business mission on Oracle performance tuning) - at least that guy - think that
>>> IT IS WORTH IT.
--- ===============

Your opinion...? - Would you consider it or not...?

+

(C) - I see your point and I agree 100% that I would NEVER try to use UTL_FILE to write into the ALERT_LOG file or any TRC file - as it may be VERY UNPREDICTABLE.
>>> Donald Burleson feels otherwise - but he is wearing a cowboy hat isn't he...??? <<<
:-)

My only point here was that HOTSOS must have been diligent enough in their research to decide to use DBMS_SYSTEM.KSDWRT to insert additional lines in the current TRC files.
<< I have found some references on the web suggesting that you may get into trouble when you do not impose reasonable limits on the lengths of your messages.
HOTSOS limits their output to module/action names - I would further limit these to some PK IDs, which would be very short. >>

>> I am not necessarily WHITE-HOT on adopting this feature - i.e. using the DBMS_SYSTEM.KSDWRT but I think that HOTSOS should have done adequate research in this area before opting to use it.

+

(D) - I agree but only 50%.
This is why:
(D.1) Sure - IF all APP modules set the IDENTIFIER then there is no problem - however, there may be other APPS that have nothing to do with our APP and do not follow the same convention. A good example is third-party s/w that for whatever reason is also using the same BEA/Oracle connection pool.

(D.2) But there is another aspect.
There is no point in setting the IDENTIFIER when we know that the trace is not needed (which is 99.999% of the cases in PROD). So typically you would want to check and activate setting of an IDENTIFIER only when you need to. Specifically - when you decide to trace a particular business process (one out of 2000) - then you would NOT want to set identifiers on all the other 1999 business sessions - just the one that is of interest, where users complain about poor performance.
Consequently - you end up setting a CLIENT_IDENTIFIER on a session acquired from the pool - BUT NOT on others.
Then when one of the other 1999 APPS comes along right behind you requesting a session from the pool - it would NOT set the IDENTIFIER - hence whatever SQL it executed would be appended to the trace of the one we are tracing - NOT GOOD.

I feel that it is cheaper and cleaner to set and CLEAR the Identifier for the business process that we trace rather than ONLY set but do so from hundreds of other APPS modules regardless if we trace them or not.

Would you agree...???
=====================
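The set-then-clear pattern described above can be sketched in a few lines of PL/SQL. This is a hypothetical illustration, not code from the thread; run_business_process is a placeholder for the traced module:

```sql
-- Sketch of "set on the traced process only, clear before releasing
-- the pooled session"; run_business_process is hypothetical.
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('ORDER_ENTRY_TRACE');  -- tag just this session
  run_business_process;
  DBMS_SESSION.CLEAR_IDENTIFIER;  -- so the next borrower of this pooled
END;                              -- session is not swept into the trace
/
```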

+

(E)
Here - rather unfortunately - there is NO GOOD ANSWER.

It would be IDEAL if there was an option at ALTER SYSTEM command to CLOSE FORCE all cursors within a specified session identified by SID+SERIAL#.

There is an ALTER SYSTEM KILL SESSION statement.

I wish there was also an ALTER SYSTEM FORCE CLOSE CURSORS 'sid,serial#'...

Oh well... keep on dreaming Andrew...

And sure enough - as 11gR2 now emits all STAT lines regardless - all we need to do is wait a few more years for all to leave 10gR2 and upgrade into 11 or 12 or 13...

BUT - thank you for that confirmation.

Best wishes
Andrew

Tom Kyte
January 25, 2012 - 10:34 am UTC

(b) i have not had any problem with typical. I would need to see a specific instance of the problem they allude to in order to comment.


(c) anyone that recommends using utl_file to write to a file that is already known to be opened and written to by another process is doing it wrong.

I have a personal policy of not

1) discussing the undocumented
2) recommending using it
3) using it myself

too easy to get burned.

caveat emptor.


(d) i made the statement "if everyone is setting it, it would be unnecessary". If that assumption is not true, my reply doesn't apply.


if you KNOW that there will be things that will grab a connect and NOT set the identifier, then of course you must "unset" it.


(e) it is moot, 10g is done, it is finished, it is not necessary to add anything like "force close" since 11g achieves what we want without the huge performance hit....





connection pool instrumentation - CLOSURE

Andrew, January 29, 2012 - 3:32 am UTC

Dear Tom

Many thanks for your patience and clarification on the issues that I had raised

Thank you

Regards
Andrew

Repeated XCTEND and WAIT for log file sync and SQL*NET message to/from client

Lasse Jenssen, June 20, 2012 - 9:02 am UTC

I see you have explained some of this before, but it doesn't really explain what I see in my trace file (see below). At the beginning of my trace I see dozens of repeating XCTEND and WAIT lines for 'log file sync', 'SQL*Net message to client' and 'SQL*Net message from client'. Some of the XCTEND lines are commits, others rollbacks, and some show a commit with no change in the database. This is repeated for the entire trace file, which is about 150,000 lines of trace.

Question:
How can the trace file show repeated commits and rollbacks without showing any statements (no SQL - no inserts, updates or deletes)?

*** 2012-06-20 13:14:51.598
*** SESSION ID:(21.2) 2012-06-20 13:14:51.597
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 1 p1=1244 p2=0 p3=0
WAIT #0: nam='log file sync' ela= 3109 p1=1244 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 4 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 114084 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 279353 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 4483 p1=1488 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 752878 p1=1413697536 p2=1 p3=0
XCTEND rlbk=1, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 570096 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 7777 p1=2028 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 270520 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 4018 p1=699 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 228727 p1=1413697536 p2=1 p3=0
XCTEND rlbk=1, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 2748033 p1=1413697536 p2=1 p3=0
XCTEND rlbk=1, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 742905 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=1
... repeated for about 150000 lines ...


Thanks!
Tom Kyte
June 21, 2012 - 7:44 am UTC

client is just committing or rolling back. clients can do whatever they want to. We just respond to their request.

I'd be asking for the client code to see what they are doing in the code itself and why.
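The "summarize, don't eyeball" approach is easy to automate against a raw 10046 trace like the one above. The following is a hypothetical sketch (not an Oracle-supplied tool): it counts XCTEND outcomes and totals elapsed wait time per event, matching only the XCTEND/WAIT line shapes shown in the excerpt; field layouts can vary between Oracle versions.

```python
import re
from collections import Counter

# Line shapes taken from the 10046 trace excerpt above.
XCTEND_RE = re.compile(r"XCTEND rlbk=(\d+), rd_only=(\d+)")
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela= (\d+)")

def summarize(lines):
    """Return (transaction-end counts, total elapsed per wait event)."""
    xctend = Counter()
    waits = Counter()  # elapsed values summed per event name
    for line in lines:
        m = XCTEND_RE.search(line)
        if m:
            rlbk, rd_only = m.groups()
            if rlbk == "1":
                xctend["rollback"] += 1
            elif rd_only == "1":
                xctend["commit (read only)"] += 1
            else:
                xctend["commit"] += 1
            continue
        m = WAIT_RE.search(line)
        if m:
            waits[m.group(1)] += int(m.group(2))
    return xctend, waits
```

Pointed at a 150,000-line trace file (`summarize(open("ora_1234.trc"))`), this shows at a glance how many round trips were pure commit/rollback calls and where the time actually went.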


tracefile_identifier in 10g Instance

Rajeshwaran, Jeyabal, August 08, 2012 - 6:03 am UTC

Tom,
Can you help me to clear out this flag from my ORA10GR2 instance? Here is what i did, but it doesn't help much.

rajesh@ORA10GR2>
rajesh@ORA10GR2> show parameter tracefile

NAME                      TYPE        VALUE
------------------------- ----------- -------------------------
tracefile_identifier      string      DEMO_TEST
rajesh@ORA10GR2>
rajesh@ORA10GR2> connect sys/oracle as sysdba
Connected.
sys@ORA10GR2> alter system set tracefile_identifier='' scope=spfile;

System altered.

Elapsed: 00:00:00.11
sys@ORA10GR2>
sys@ORA10GR2> show parameter tracefile

NAME                      TYPE        VALUE
------------------------- ----------- -------------------------
tracefile_identifier      string
sys@ORA10GR2>
sys@ORA10GR2> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
sys@ORA10GR2> startup
ORACLE instance started.

Total System Global Area  209715200 bytes
Fixed Size                  1248116 bytes
Variable Size              92275852 bytes
Database Buffers          109051904 bytes
Redo Buffers                7139328 bytes
Database mounted.
Database opened.
sys@ORA10GR2> show parameter tracefile

NAME                      TYPE        VALUE
------------------------- ----------- -------------------------
tracefile_identifier      string
sys@ORA10GR2> select name,value
  2  from v$spparameter
  3  where name ='tracefile_identifier';

NAME                           VALUE
------------------------------ --------------------
tracefile_identifier

Elapsed: 00:00:00.29
sys@ORA10GR2>
sys@ORA10GR2> connect rajesh/oracle
Connected.
rajesh@ORA10GR2> show parameter tracefile

NAME                      TYPE        VALUE
------------------------- ----------- -------------------------
tracefile_identifier      string      DEMO_TEST
rajesh@ORA10GR2>
rajesh@ORA10GR2> select name,value
  2  from v$spparameter
  3  where name ='tracefile_identifier';

NAME                                               VALUE
-------------------------------------------------- --------------------
tracefile_identifier

Elapsed: 00:00:00.06
rajesh@ORA10GR2>


Tom Kyte
August 17, 2012 - 1:16 pm UTC

clear out what flag???? I don't know what you mean.

clear out what flag???

Rajeshwaran, Jeyabal, September 15, 2012 - 8:36 am UTC

clear out what flag???? I don't know what you mean

- By "clear out flag" I mean: how can I clear the contents of the parameter tracefile_identifier? With the method I used above, did I miss anything?
Tom Kyte
September 16, 2012 - 4:10 am UTC

ops$tkyte%ORA11GR2> alter system reset tracefile_identifier scope=spfile ;

System altered.

how can i resolve this puzzle?

iongxiao, November 05, 2012 - 2:35 am UTC

Hello, Tom!
One question has been bothering me all the time: some SQL runs slowly from our JDBC programs, but the same SQL does not execute slowly in SQL*Plus. For example:

8519 2012-11-05 10:39:22,221|httpWorkerThread-53111-3[INFO] ORM - select t.* from AD.CA_POCKET_19 t where t.ACCT_ID=? | [oracle:(service_name = shzw):OB60@101]
(1)acctId: [10032509419]
8519 2012-11-05 10:39:22,224|httpWorkerThread-53111-3[INFO] ORM - Result Count:12 Time cost([ParseSQL]:2ms, [DbAccess]:755ms, [Populate]:3ms.)[Tx2003602039@(service_name = shzw):OB60@101]

As you can see, [DbAccess] is 755ms - the programmer feels it is slow!
But when I execute the same SQL in SQL*Plus:
SQL ID: 4ypjhm1z2x1vg Plan Hash: 821849464

select t.*
from
AD.CA_POCKET_19 t where t.ACCT_ID=10032509419


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0          6          0         12
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.00       0.00          0          6          0         12

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 73
Number of plan statistics captured: 1

Rows (1st)  Rows (avg)  Rows (max)  Row Source Operation
----------  ----------  ----------  ---------------------------------------------------
        12          12          12  TABLE ACCESS BY INDEX ROWID CA_POCKET_19 (cr=6 pr=0 pw=0 time=56 us cost=4 size=600 card=8)
        12          12          12   INDEX RANGE SCAN IDX_CA_POCKET_19 (cr=4 pr=0 pw=0 time=121 us cost=3 size=0 card=8)(object id 502660)

Elapsed times include waiting on following events:

  Event waited on                              Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ----------  ------------
  SQL*Net message to client                         2        0.00          0.00
  SQL*Net message from client                       2       13.94         13.94
********************************************************************************

How can I resolve this?
Tom Kyte
November 05, 2012 - 9:46 am UTC

show us the tkprof from the java jdbc program.

perhaps they are binding wrong and not having the same plan


at the very least, look in v$sql_plan for this statement and tell us how many plans you see for this query.
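Tom's check can be sketched as a query like the following (the sql_id comes from the tkprof excerpt above); more than one plan_hash_value for the same statement often points at a bind/datatype mismatch from the JDBC side:

```sql
-- Sketch: count distinct plans for the statement in question.
SELECT sql_id, plan_hash_value, executions
FROM   v$sql
WHERE  sql_id = '4ypjhm1z2x1vg';
```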

A reader, June 06, 2014 - 12:11 pm UTC

Hi Tom,

I am curious to know how tkprof got its name.

Please help.

Thanks in advance.