
Question and Answer

Connor McDonald

Thanks for the question, NK.

Asked: May 28, 2001 - 1:26 pm UTC

Last updated: July 17, 2020 - 6:40 am UTC

Version: 8.1.6


You Asked

hi tom,

I have seen a lot of answers from you regarding SQL trace and TKPROF for SQL performance tuning, but I am still not clear on a few things.
I hope you will explain them clearly.

1. How do SQL trace and TKPROF differ from SET AUTOTRACE ON?

2. If we use SET AUTOTRACE ON and EXPLAIN PLAN, how do we analyze a
particular query and tell whether the query is tuned or not?

3. For SQL trace and TKPROF, I want to know:

if I do 'ALTER SESSION SET SQL_TRACE = TRUE'

where does the trace file get created (my Oracle server is running on Unix)? After formatting the trace file with TKPROF, how do I analyze it to tell whether the query is tuned or not?

4. Does the order of tables in a query matter for performance?

For this question I got an answer that it matters for the RBO but not for the CBO. But who decides whether to use the RBO or the CBO? Can we tell Oracle to use one or the other? And if we can decide, when should we use the RBO and when the CBO?

Sorry for the flood of questions, but I am confused on the above issues.
Please answer them.

thanks a lot.
nk

and Tom said...

1) autotrace shows you one set of information for a SINGLE SQL statement.

tkprof shows you a different set of data for all SQL statements executed by your application as it runs.


autotrace = great way to test a single query in isolation.
tkprof = great way to test your APPLICATION.

2) oh boy. well, if it runs really fast, it's tuned. if it exceeds your desired response time, it's not tuned. If you have already worked with it for a while, exhausted every avenue known to you, and it still runs too slowly -- it's tuned, just too slow.

I don't know how to answer this -- how do you know when ANYTHING is tuned?

remember -- it can always go 1% faster (at least). forever. diminishing marginal returns kick in at some point in the tuning process. set goals and stop when you achieve them.

3) the trace files will go into the directory specified by the init.ora parameter user_dump_dest.

see #2 for "how do I know its tuned"
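A minimal sketch of the workflow (assumes you can query v$parameter; the trace file name shown is hypothetical and varies by platform and SID):

```sql
-- find where trace files are written (the user_dump_dest init.ora parameter)
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

-- turn tracing on, run the statements of interest, then turn it off
ALTER SESSION SET sql_trace = TRUE;
SELECT COUNT(*) FROM emp;   -- hypothetical workload
ALTER SESSION SET sql_trace = FALSE;
```

Then, at the OS prompt, format the resulting trace file with something like `tkprof ora_1234.trc report.txt` and read the report.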

4) If you are developing a new application in Oracle8i -- you should most definitely be using the CBO. Analyze your tables and it'll happen (as long as you haven't set the optimizer goal to rule that is). Just use it -- the order of tables won't matter.
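For example, gathering statistics so the CBO is used could look like this (the table name is hypothetical; DBMS_STATS was already available in 8i as the supported alternative to ANALYZE):

```sql
-- the classic way
ANALYZE TABLE emp COMPUTE STATISTICS;

-- the supported package-based way (run from SQL*Plus)
EXEC DBMS_STATS.GATHER_TABLE_STATS( user, 'EMP' );
```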




Comments

Tuning SQL

Chris, October 24, 2002 - 5:31 am UTC

Hello again Tom :)
I have this sql cursor :

CURSOR LocalDuration_cur(par1 number DEFAULT 1, par2 number DEFAULT 99, par3 number, par4 number) IS
  SELECT a.*
    FROM t_cdr_att a, t_profile_anumber b, t_profile_yn c
   WHERE SUBSTR(a.bnumber,1,1) != '0'
     AND a.anumber = b.anumber
     AND a.nmfile = c.batchNo
     AND c.batchID >= par1
     AND c.batchID <= par2
     AND a.durasi > (b.dcrphlocal + (par3 * b.dcrphlocal))
     AND c.batchNo NOT IN (SELECT d.batchNo
                             FROM t_fraudullent_yn d, t_profile_yn e
                            WHERE d.batchNo = e.batchNo
                              AND d.fraudcodeid = par4)
   ORDER BY a.nmfile, a.anumber;

How do I tune the SQL statement of that cursor?
I mean, should I rewrite the logic (can you give me an example?), or do I need to add an index -- what should I do with this statement?
This is just an example, so that I can do the same for the many other SQL statements I have here :)

thx before :)



Tom Kyte
October 24, 2002 - 7:01 am UTC

there are entire tomes written on tuning sql.

guy harrison has a good book you might be interested in.
http://www.amazon.com/exec/obidos/tg/detail/-/0130123811




On Tkprof and Query Tuning

A Reader, November 10, 2002 - 6:26 pm UTC

Tom,
Here is my understanding. Please do correct me if I am wrong. I haven't used TKPROF and I would like to know if I could find it useful in my case.

(1) If an application has performance issues, using SQL Trace and Tkprof, identify the poorly performing areas.

(2) But, for tuning queries in Oracle Reports, we already know that a certain query is taking much more time than it should. Given such a situation, how would you go about it? What tool do you use to further diagnose the query, given that there is no redundant table or join in the query?

(3) In some cases, even for invalid bind variable values (when there are no records to match), queries take 50 to 60 seconds. Would TKPROF be of any use in this case, given that we already know this SQL needs to be tuned? With valid values, the query runs for minutes.

(4) Some queries run faster when given the RULE hint. Why is that so? Could this be due to CBO not using Indexes. What could be possibly done for that?


Thanks a lot for your time and patience...




Tom Kyte
November 10, 2002 - 8:33 pm UTC

1) correct

2) same as I would in #1 plus my head (my knowledge of how to apply SQL to questions in order to get answers efficiently)

3) yes it would -- it would show you at what step of the plan the query starts getting "no data" -- as each step of the plan will be listed there with the number of rows flowing out of each step. You could try to figure out how to have it processed differently so it can stop sooner

4) and some queries run faster given the FIRST_ROWS and ALL_ROWS. It is because the piece of software that is the optimizer is a piece of software.

You should look at the optimizer_index_cost_adj parameter -- many times the default of 100 is too high.
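As a sketch, the parameter can be tried at the session level before considering any instance-wide change (the value 25 here is purely illustrative, not a recommendation):

```sql
-- tell the CBO that a single-block (index) read costs ~25% of the default
ALTER SESSION SET optimizer_index_cost_adj = 25;
-- then re-run the problem query and compare the plan and timings
```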

I Can't Find TKPROF in my 9i Installation?

Anand H S, March 16, 2003 - 9:37 am UTC

Hi Tom,
I am not able to find the TKPROF utility in my Oracle 9i installation.
-- Has the name changed?
-- Is there a new way of viewing trace files? Or
-- Does it need to be installed separately?

Kindly Elaborate.
Regards,
Anand H S

Tom Kyte
March 16, 2003 - 10:50 am UTC

you must have 9.0.0 on windoze. I guess they figured no one on windows used the command line and they forgot to include tools that are command line driven (that is a joke ;)

Either get the current release of the software (9iR2) or open a tar with support -- you can get it from metalink as a "patch" of sorts.

sql_trace affecting performance?

Mark Williams, April 03, 2003 - 10:51 am UTC

I had a particularly troublesome query that was running really slow, about 35 seconds.

I did an "alter session set sql_trace=true" to get a tkprof, and suddenly the query starts running in a second. Turn sql_trace off, and it starts taking 35 seconds again.

I won't ask for help in the actual tuning here (I'll leave that till I get an opportunity to ask a new question), but have you ever come across this behaviour? If so, what might be the reason?


Tom Kyte
April 03, 2003 - 11:35 am UTC

what version?

Version

Mark Williams, April 03, 2003 - 11:47 am UTC

8.1.7


Tom Kyte
April 03, 2003 - 1:17 pm UTC

bummer, no v$sql_plan to look at.

any chance of reproducing in 9i?

More than one of me?

Mark A. Williams, April 03, 2003 - 1:12 pm UTC

Hey! There's more than one of me...

- Mark

Tom Kyte
April 03, 2003 - 2:09 pm UTC

geez -- i assumed it WAS you. was going to send you an email ;)

RE: More than one of me?

Mark A. Williams, April 03, 2003 - 3:13 pm UTC

> geez -- i assumed it WAS you...

Nah, I'd never give you only 3 stars... :)

- Mark

Mark J. Williams writes

The other Mark Williams, April 04, 2003 - 3:27 am UTC

Hey, it's quite a common name.

No, I don't have access to a 9i DB, unfortunately.

Anyway, needless to say, after last night's shutdown, cold backup and startup (I know, I know, don't ask) the problem's gone away.

RE: Mark J. Williams

Mark A. Williams, April 04, 2003 - 8:57 am UTC

Hi The Other Mark Williams...

Yeah, I know it is common... just having a bit of fun... Didn't mean any offense, so I hope you didn't take it that way!

Cheers,

Mark

Course no offense

Mark Williams, April 04, 2003 - 10:12 am UTC

There may be many Mark Williams, but there's only 1 Tom Kyte!

TARs, Support etc.

Richard, September 05, 2003 - 10:45 am UTC

Spleen vent, I'm afraid:

I bought the 9i (9.0.1) CD pack for W98 in order to teach myself Oracle 9i at home.

Sadly, this version has a few bugs: most annoyingly (so far) is the absence of TKPROF, but the effect that PAUSE has on one's SQL Plus session rivals David Blaine's ability to make things disappear.

Do I really have to pay out for MetaLink support, in order to get a patch, or is there another way round this?

I could understand *new* functionality failing, but not *oldies* like PAUSE and TKPROF!

Spleen vent over!

Tom Kyte
September 05, 2003 - 6:48 pm UTC

i know the pause issue you are talking about -- i personally was never affected by it due to the fact that sqlplus in a dos window is somewhat "superior" (and is not affected by the pause issue).

Yes, in order to access metalink, you need a support contract -- however, I might suggest that you could just download 9iR2 which is the currently supported product and more relevant today in any case.



Patches should be freely available!

Richard, September 06, 2003 - 9:24 am UTC

Thank you for your reply.

I guess I will have to follow your advice and download the latest version. It's just as well I have access to a large bandwidth connection at work, though, else I'd be reaching retirement age before the 9iR2 download dribbled through my 56k home connection!

What a pity that patches for howlers like those I've encountered aren't freely available.

Regards.

trace

A reader, October 03, 2003 - 9:40 am UTC

Tom,
We are trying to run trace on a few Oracle reports and on an EJB based application. The DBA has asked us to
start the trace and then give him the SID and process#, he then would then give us back the trace files.
My questions
1) As a developer, how can I find my session id and process# (no DBA privileges)?
2) Is there a SQL that tells me how I am connected to the database (dedicated server or shared server/MTS)?

3) Can wait events be captured with sys.dbms_system.set_sql_trace_in_session?
(wait events as in: alter session set events '10046 trace name context forever, level 12')
4) Can tracing be set up for an EJB-based application where the connections are being pooled? If so, how?
Also, the connection from the application server to the database is MTS.

Thank you


Tom Kyte
October 03, 2003 - 10:14 am UTC



1) you'll NEED access to v$session. ask the DBA to create a view:

create view YOUR_SESSION
as
select * from v$session where sid = (select sid from v$mystat where rownum=1);

and grant select on that view to you -- that'll give you YOUR row and YOUR row alone.

2) once you have access to v$session via that view -- you can look at the server column in it for your session. it'll either say dedicated or shared

3) you don't want to use shared server with connection pooling. that is much like caching a cache (double buffering).

for sql_trace, you'll need to use dedicated server.

you can do it with a bean easily -- but the bean has to help. The bean would

a) grab connection
b) alter session set sql_trace=true;
c) do bad stuff
d) alter session set sql_trace=false;
e) return connection


you can use dedicated server easily -- just put server=dedicated in the connect string. you must do this or your trace data will be scattered over many shared server trace files on the server and putting that stuff back together is like humpty dumpty.
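The a) through e) steps above, expressed as the SQL the bean would issue (a sketch; level 12 captures both wait events and bind values, matching the 10046 syntax quoted in the question):

```sql
-- b) from the pooled connection, before the work of interest:
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- c) ... run the application SQL here ...

-- d) before returning the connection to the pool:
ALTER SESSION SET events '10046 trace name context off';
```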

trace

A reader, October 03, 2003 - 10:35 am UTC

Thank you

TKPROF

A reader, October 29, 2003 - 10:28 pm UTC

Hi Tom,

In the session tkprof output, the top SQL statement is a procedure that inserts new records into the database.

Could you help me explain why "cpu" and "disk" are all 0s while elapsed is very big? Thanks.


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse     7624      0.00      69.17          0          0          0          0
Execute   7624      0.00       5.97          0          0          0       7624
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    15248      0.00      75.14          0          0          0       7624

Tom Kyte
October 30, 2003 - 7:06 am UTC



looks like you tkprof'ed an incomplete trace file, that's all.

I can say -- the program has a bug, for it PARSED the same statement over and over and over and over again. that parse count should be 1!!!

(look, you spend more time parsing than executing, hmmmmmmm)
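In PL/SQL, for example, a static statement inside a loop is parsed once and executed many times automatically; the 7624 parses above suggest the client re-prepares the statement on every call (table and column here are hypothetical):

```sql
BEGIN
   FOR i IN 1 .. 7624 LOOP
      INSERT INTO t (x) VALUES (i);  -- parsed once, executed 7624 times
   END LOOP;
END;
/
```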

TKPROF Translation

Denise, November 11, 2003 - 3:14 pm UTC

Hi Tom

We recently upgraded to a new version of our third party
fundraising application software.

We've noticed a considerable slowdown with performance
especially when users were posting, merging records,
and updating.

I set sql_trace on and then tkprof-ed the results..
then went through chapter 10 of your book on the TKPROF
translation information. The TKPROF file was a very large
file so I'm pasting the Summary Information in hopes that
you can decipher what is taking place.

Another quirky thing is...as soon as I set sql_trace=
true...everything started whizzing...users said "what did
you do....everything is now taking only seconds to complete instead of 30 minutes"?...when it is turned off everything
slows down again(like someone else here mentioned).

Any ideas?

anyhow here is the summary of the tkprof file:

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse      655      0.84       0.87          0          0          0          0
Execute    917      0.13       0.21          5       1175        206         48
Fetch      907      2.61       3.17         46     589079        833        787
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     2479      3.58       4.25         51     590254       1039        835

Misses in library cache during parse: 2
Misses in library cache during execute: 1

906 user SQL statements in session.
68 internal SQL statements in session.
974 SQL statements in session.
********************************************************************************
Trace file: ora01612.trc
Trace file compatibility: 8.00.04
Sort options: default

1 session in tracefile.
906 user SQL statements in trace file.
68 internal SQL statements in trace file.
974 SQL statements in trace file.
224 unique SQL statements in trace file.
17006 lines in trace file.


we are using 8.1.7(going to 9i real soon).
add this to my many thousand other thanks.

Tom Kyte
November 12, 2003 - 6:38 am UTC

that's not a really useful extract there -- that is "recursive sql" -- just sql that was executed on your behalf to run your sql. if you use lots of plsql with sql -- it would be there -- but if your application just submits sql, that summary isn't about your sql at all!

are you tracing the entire instance? or a single application?

what is different when we trace? we write that file, we create a serious bottleneck. If the creation of a bottleneck makes things go fast -- then there is some other very serious bottleneck that we are bypassing by slowing down over here.

I would suggest looking at a 10 minute statspack report at this point
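A 10-minute statspack window is just two snapshots taken around the load (assuming statspack is installed and you are connected as the PERFSTAT user):

```sql
EXEC statspack.snap;
-- let ~10 minutes of representative workload run
EXEC statspack.snap;
-- then run ?/rdbms/admin/spreport.sql and pick the two snapshot ids
```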

TKPROF Followup

Denise, November 12, 2003 - 8:18 am UTC

thanks Tom....

in this case we set SQL_Trace on for the entire instance..
so we could get a report on what was happening when some
users posted, others merged...etc.

It is at that point when things started moving fast.

It has also been my understanding that tracing is not
good when set to true in a production environment...
and I find it perplexing that in our case..the opposite
is true and now the users want me to leave it on...
meaning set sql_trace=true???

I'll read your chapter on setting up the statspack utility
and see what I can produce from that.

as ever...thanks



Tom Kyte
November 12, 2003 - 10:41 am UTC



i'm going to guess.

setting sql_trace=true introduced an artificial bottleneck that just made things flow better. remove bottleneck and things go bad.

Analogy:

premise: adding more CPU will make my system go faster.

test: double CPU capacity, go from 1ghz to 2ghz cpus.

outcome: system runs much much slower

reason: the system was a mixed workload -- lots of reports and OLTP users. OLTP did lots of disk; reports did lots of disk and CPU. The reports were constrained in their use of the disks by the amount of cpu they got (they did lots of formatting in the report engine and were starved for cpu). Adding CPU let the reports process the data faster, causing them to flood the IO system -- to the obvious detriment of the OLTP users, whom the CPU was (erroneously) bought for.


So, you need to find the true bottleneck. turn it off, statspack for a bit. or just trace a SINGLE application instance -- ONE session!!!

STATPACK Utility...and more

Denise, November 12, 2003 - 11:16 am UTC

exactly Tom..you hit the nail on the head.

I installed the statspack utility...followed the instructions
in your book...even created a new tablespace to house
the perfstat user and its files.

I have yet to execute the package due to immense conflict
from the software vendor indicating our support contract
will be nulled if we implement any statistics that could
cause serious performance issues.

We are already having serious performance issues that they
are unwilling or unable to deal with causing us(me)to
try and find the solution so we can get it fixed.
I am utterly so frustrated with these people!!!

I showed my boss your comments in the StatsPack WrapUp
of your book on page 494. She agrees but feels she cannot
compromise our support contract.

However one thing I failed to mention. We begin to feel
the bottlenecking in the afternoon hours..mainly after lunch.
It appears the mornings go fairly smooth with posting,merging...etc.

The slowdown then begins in the afternoon hours.

Could this be something to due with Multi-Disk Stacking
issue on NT4...we are running a RAID5 disk system.
I'm just thinking off the top of my head here.....sigh!

Tom, for the zillionth time: thanks for your immense patience, support and guidance.


Tom Kyte
November 12, 2003 - 4:33 pm UTC

you turn on sql_trace and don't think this is "intrusive"???? man, that is like so intrusive -- especially compared to statspack (which will not affect performance at all -- not a bit -- except for the second it takes to execute statspack.snap!!)

Tom....

Denise, November 12, 2003 - 5:12 pm UTC

you seem to echo my frustration..
however what good will it do to run it(statspack) if I
can't pass it on to the yo-yo's that will need to evaluate
it..then possibly become infuriated with our 'disobedience'.

My boss said she is going to contact some big-wig and
express her extreme dissatisfaction with tech support..
ho-hum.

you're the best...take care.


Tom Kyte
November 13, 2003 - 6:28 am UTC


we can look at it here?

Tom...

Denise, November 13, 2003 - 1:09 pm UTC

I showed my boss your response and she said we will wait
till next week to see what solution the vendor proposes
to address the performance issue...if it remains an
ongoing issue she will authorize my running the statspack
utility and post the results here.

on a lighter note:
I now have my boss checking out your website for Oracle
solutions with issues...everytime something comes up I
tell her.."well Tom says this...and Tom says that"...and
"Read this part in Tom's book where he explains...etc".

Let me know if you have an "ASKTom" poster....I would
love one hanging in my office with a caption saying:
"Oracle powers the World....
Tom powers the force behind Oracle".

;~D


ps: when I get those stats I'll post them.

Why CPU time is greater than elapsed time in tkprof report

Jennifer Chen, February 24, 2004 - 4:26 pm UTC

Here is the first part of my tkprof report:

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: aliastst_ora_2500.trc
Sort options: execpu
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

BEGIN MASTERINDEX_PKG.GET_MASTER_INDEX( :1,:2,:P_REFCURSOR ); END;


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       44      0.03       0.01          0          0          0          0
Execute     44      3.00       2.96          0          0          0         44
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       88      3.03       2.98          0          0          0         44

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 66
********************************************************************************

Also, can you please explain why the SP was parsed every time it was executed:

PROCEDURE get_master_index (
p_mpi_number IN alias.people_index.mpi_number%TYPE,
p_index_value IN CHAR,
p_refcursor OUT sys_refcursor
)
AS
BEGIN
CASE UPPER (p_index_value)
WHEN 'MNU'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, mnu_type, mnu_number, primary_value_flag,
sf_flag
FROM alias.name_mnu
WHERE mpi_number = p_mpi_number;
WHEN 'DLN'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, dl_number, dl_state, primary_value_flag
FROM alias.name_dln
WHERE mpi_number = p_mpi_number;
WHEN 'SMT'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, smt_number, primary_value_flag
FROM alias.name_smt
WHERE mpi_number = p_mpi_number;
WHEN 'SOC'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, soc, primary_value_flag, sf_flag
FROM alias.name_soc
WHERE mpi_number = p_mpi_number;
WHEN 'DOB'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, TO_CHAR (dob, 'MM/DD/YYYY'),
primary_value_flag, dob_count, sf_flag
FROM alias.name_dob
WHERE mpi_number = p_mpi_number;
WHEN 'SEX'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, sex, primary_value_flag
FROM alias.name_sex
WHERE mpi_number = p_mpi_number;
WHEN 'RAC'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, rac, primary_value_flag
FROM alias.name_rac
WHERE mpi_number = p_mpi_number;
WHEN 'INDEX'
THEN
OPEN p_refcursor
FOR
SELECT mni_number, mpi_number, last_name_code,
first_name_code, middle_name_code,
last_name_group_num, first_name_group_num,
middle_name_group_num, last_name, first_name,
middle_name, other_name, real_name_flag,
name_type_code, sf_flag
FROM alias.name_index
WHERE mpi_number = p_mpi_number;
WHEN 'PEOPLE'
THEN
OPEN p_refcursor
FOR
SELECT mpi_number, hgt, wgt, eye, hai, skn, fpc,
system_owner
FROM alias.people_index
WHERE mpi_number = p_mpi_number;
ELSE
raise_application_error (-20000, 'The Table Is Not In The List');
END CASE;
END get_master_index;

Thanks so much for your time and help.

Tom Kyte
February 24, 2004 - 8:59 pm UTC

do you have "effective oracle by design"? I go into this in more detail

but in a nutshell:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6904032233295#14588115938786



hard parse and soft parses

Jennifer Chen, February 25, 2004 - 10:03 am UTC

Hi Tom,

I read both your "effective oracle by design" and "Expert One-on-One Oracle".

Also, this morning I read metalink Note:32895.1 that explains parse further.

points 6 and 7 in the note:

6. Both the soft and hard parse register as a parse in tkprof. Hashing the current statement updates the parse count.

7. Soft parse does not involve parsing the statement, it just hashes the statement and compares this with the hashed value in the SQL area.

Since there was only one miss in the library cache during parse, does that indicate Oracle has done only one hard parse and the rest -- 43 times -- were just soft parses?

Is there a way to distinguish a hard parse from a soft parse in TKPROF?

Parse time in this particular case is trivial -- should I be concerned about soft parses here? By eliminating these parses, I could split this one stored procedure into 9 separate stored procedures. But I was trying to consolidate SPs, since I already have about 500 of them in our application.

Your thoughts on this are greatly appreciated.

Thank you for taking time to respond to my questions.




Tom Kyte
February 25, 2004 - 11:14 am UTC

7) is wrong wrong wrong ("Effective Oracle by Design" goes into this; "Beginning Oracle Programming", in which I wrote the statement-processing chapter, goes into it in more depth).

I've sent an email to the author of that note requesting it be corrected.

see:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2588723819082


But yes, the misses are hard parses, that is how to tell.

Parses in single-user mode are misleading. The issue is that they "latch" -- and latches are locks, and locks are serialization devices. If you have a multi-user system, excessive parsing will get in your way.


Actually in looking at this stored procedure -- it looks like it is part of a massive slow by slow (row by row) process somewhere doing tons of lookups.

I would personally be fishing much further upstream -- looking for why I needed such a routine in the first place. Probably just need a single query somewhere that "joins"

different execution time between stand alone sql and stored procedure

Jennifer Chen, February 25, 2004 - 9:40 pm UTC

Hi Tom,

Thank you so much for your quick response. Your previous note to the author was very helpful too.

The routine stored procedure I wrote is for the developer to pass in an ID and a table-name index and get rows from the corresponding table, because we are eliminating stand-alone SQL in the application code. I tracked down a developer who used a stand-alone insert statement and compared it to the packaged stored procedure:

They both show a hard parse, since I restarted the database and turned trace on. The parse cpu time for the stand-alone sql takes much longer. I guess this is probably because the syntax and semantic checks for the stored procedure are done during compilation. But I am curious why the execution time for the stored procedure is much less than that for the stand-alone sql statement?
Forgive me if you already answered this question before. I am amazed that you write faster than we can read.
Thank you for your public services.


INSERT INTO alias.chr_inquiry_log
(inquiry_log_number, criminal_justice_agency, ctn,
dln, dls, dob,
enter_date_time, fbi, inquiry_for, message_number,
mke, mnu_number, mnu_type, nam, oca, opr,
prn, pur, rac, remarks, req, response_count,
sex, SID, soc, tcn, ter, ucn
)
VALUES (:b27, :b26, :b25,
:b24, :b23, TO_DATE (:b22, 'MM/DD/YYYY'),
SYSTIMESTAMP, :b21, :b20, :b19,
:b18, :b17, :b16, :b15, :b14, :b13,
:b12, :b11, :b10, :b9, :b8, :b7,
:b6, :b5, :b4, :b3, :b2, :b1
)

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      1.09       1.08          0          0          0          0
Execute      1      0.53       0.56          4          1         30          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      1.62       1.65          4          1         30          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 66 (recursive depth: 1)
********************************************************************************

BEGIN UTILITY_PKG.ADD_CHR_INQUIRY_LOG( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,
:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27 ); END;


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.00          0          0          0          0
Execute      1      0.18       0.17          0          0          0          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.20       0.17          0          0          0          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 66

Here is the content of UTILITY_PKG.ADD_CHR_INQUIRY_LOG

PROCEDURE add_chr_inquiry_log (
p_inquiry_log_number IN alias.chr_inquiry_log.inquiry_log_number%TYPE,
p_criminal_justice_agency IN alias.chr_inquiry_log.criminal_justice_agency%TYPE,
p_ctn IN alias.chr_inquiry_log.ctn%TYPE,
p_dln IN alias.chr_inquiry_log.dln%TYPE,
p_dls IN alias.chr_inquiry_log.dls%TYPE,
p_dob IN CHAR,
p_fbi IN alias.chr_inquiry_log.fbi%TYPE,
p_inquiry_for IN alias.chr_inquiry_log.inquiry_for%TYPE,
p_message_number IN alias.chr_inquiry_log.message_number%TYPE,
p_mke IN alias.chr_inquiry_log.mke%TYPE,
p_mnu_number IN alias.chr_inquiry_log.mnu_number%TYPE,
p_mnu_type IN alias.chr_inquiry_log.mnu_type%TYPE,
p_nam IN alias.chr_inquiry_log.nam%TYPE,
p_oca IN alias.chr_inquiry_log.oca%TYPE,
p_opr IN alias.chr_inquiry_log.opr%TYPE,
p_prn IN alias.chr_inquiry_log.prn%TYPE,
p_pur IN alias.chr_inquiry_log.pur%TYPE,
p_rac IN alias.chr_inquiry_log.rac%TYPE,
p_remarks IN alias.chr_inquiry_log.remarks%TYPE,
p_req IN alias.chr_inquiry_log.req%TYPE,
p_response_count IN alias.chr_inquiry_log.response_count%TYPE,
p_sex IN alias.chr_inquiry_log.sex%TYPE,
p_sid IN alias.chr_inquiry_log.SID%TYPE,
p_soc IN alias.chr_inquiry_log.soc%TYPE,
p_tcn IN alias.chr_inquiry_log.tcn%TYPE,
p_ter IN alias.chr_inquiry_log.ter%TYPE,
p_ucn IN alias.chr_inquiry_log.ucn%TYPE
)
AS
BEGIN
INSERT INTO alias.chr_inquiry_log
(inquiry_log_number, criminal_justice_agency, ctn,
dln, dls, dob,
enter_date_time, fbi, inquiry_for, message_number,
mke, mnu_number, mnu_type, nam, oca, opr,
prn, pur, rac, remarks, req, response_count,
sex, SID, soc, tcn, ter, ucn
)
VALUES (p_inquiry_log_number, p_criminal_justice_agency, p_ctn,
p_dln, p_dls, TO_DATE (p_dob, 'MM/DD/YYYY'),
SYSTIMESTAMP, p_fbi, p_inquiry_for, p_message_number,
p_mke, p_mnu_number, p_mnu_type, p_nam, p_oca, p_opr,
p_prn, p_pur, p_rac, p_remarks, p_req, p_response_count,
p_sex, p_sid, p_soc, p_tcn, p_ter, p_ucn
);
END add_chr_inquiry_log;

Tom Kyte
February 26, 2004 - 8:52 am UTC

what version/platform, and what exactly was the sequence of steps here -- is this tkprof from a single session? why is the insert above the procedure call?

PIO > LIO

Vlado, February 26, 2004 - 10:49 am UTC

Tom,

I have an Oracle 9.0.2 instance on Sun Solaris, on which I execute a multi-table insert statement based on a join of three tables A, AC, FP.
- A & AC have 1 million rows each; they have the same primary key columns, and the join is on the primary key columns.
- Every row in A matches a corresponding row in AC.
- FP has approx. 25000 rows and is referenced using an outer join.
- There are no filters, so the insert statement will insert a total of 2 million rows into two tables (1 million into each table).

Below is part of a session trace I ran on the insert statement (10046, level 8) and analyzed with Trace Analyzer.
The number of PIOs (1941934) is what I'm concerned about, as it is 7 times that of the LIOs (119884 + 153987). The only explanation I have is that it could be caused by the hash join, which might have to write to the temp tablespace.

call     count       cpu    elapsed       disk      query    current       rows   misses
------- ------  ------- ---------- ---------- ---------- ---------- ----------  -------
Parse        1     0.13       0.15          1          3          0          0        1
Execute      1   570.99    2535.55    1941933     119881     153987    2000000        0
------- ------  ------- ---------- ---------- ---------- ---------- ----------  -------
total        2   571.12    2535.71    1941934     119884     153987    2000000        1

Rows Row Source Operation
------- ---------------------------------------------------
2 MULTI-TABLE INSERT (cr=119875 pr=1941933 pw=156642 time=531.54)
1000000 .HASH JOIN (cr=105407 pr=1917909 pw=94885 time=2183.15)
1000000 ..HASH JOIN (cr=88404 pr=144013 pw=28140 time=135.64)
43863 ...TABLE ACCESS FULL FP (cr=376 pr=374 pw=0 time=0.28)
1000000 ...TABLE ACCESS FULL A (cr=88028 pr=88031 pw=0 time=38.57)
1000000 ..TABLE ACCESS FULL AC (cr=17003 pr=17008 pw=0 time=12.61)


I found it strange that Oracle chose to first join FP & A, so I added a hint to force Oracle to first join A & AC, and then FP. When I executed the query again, the performance was much better:

call     count       cpu    elapsed       disk      query    current       rows   misses
------- ------  ------- ---------- ---------- ---------- ---------- ----------  -------
Parse        1     0.40       0.49         10        174          0          0        1
Execute      1   552.77    1211.41     330686     119840     153959    2000000        0
------- ------  ------- ---------- ---------- ---------- ---------- ----------  -------
total        2   553.17    1211.90     330696     120014     153959    2000000        1

Rows Row Source Operation
------- ---------------------------------------------------
2 MULTI-TABLE INSERT (cr=119834 pr=330685 pw=159897 time=450.74)
1000000 .HASH JOIN (cr=105407 pr=306716 pw=98118 time=133.24)
43863 ..TABLE ACCESS FULL FP (cr=376 pr=374 pw=0 time=0.26)
1000000 ..HASH JOIN (cr=105031 pr=201454 pw=45338 time=710.34)
1000000 ...TABLE ACCESS FULL A (cr=88028 pr=88025 pw=0 time=38.79)
1000000 ...TABLE ACCESS FULL AC (cr=17003 pr=16977 pw=0 time=16.27)


So, I have two questions:
1. What is the cause of PIO > LIO?
2. Why did the PIO count change so much between the two runs (assuming the only difference was the hint forcing the join on A & AC first)?

Tom Kyte
February 26, 2004 - 1:42 pm UTC

1) You have PIO's to and from temp -- when the blocks are in TEMP, they do not incur LIO's (we never have them in the buffer cache). So, what you are seeing is reads to temp.

2) different amounts of data put to temp caused that. See the "pw" column -- that's writes to temp.
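One way to see this temp traffic directly is to sample the session's direct temp I/O statistics before and after running the statement. This is only a sketch; the statistic names below are the v$statname entries for direct temp reads/writes in 9i and may vary slightly between versions:

```sql
-- run this before and after the insert; the deltas are the temp I/O
select sn.name, ss.value
  from v$mystat ss, v$statname sn
 where ss.statistic# = sn.statistic#
   and sn.name in ( 'physical reads direct temporary tablespace',
                    'physical writes direct temporary tablespace' );
```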

Thank you for looking into this

Jennifer Chen, February 26, 2004 - 11:00 am UTC

Hi Tom,

Thank you for looking into this. Here are answers to your questions:

1. what version/platform
We are running Oracle 9.2.0.4.0 on Windows 2000 SP4.

2. what exactly was the sequence of steps here --
I set sql_trace=TRUE in init.ora and bounced the DB. Our developer restarted the app server, connected to the DB through ODBC, and ran the application (name search feature) via browser.

3. is this tkprof from a single session?
Yes. I generated report:
...\udump> tkprof ora02500.trc aliastst_ora_2500.trc sort=execpu

4. why is the insert above the procedure call?
Our application code is written in C++ and is supposed to invoke only packaged stored procedures to access data. According to the developer, a name search should insert only one record into the chr_inquiry_log table. He said he does not have standalone SQL in the code. So I must be interpreting the TKPROF report wrong.

Could you please explain how to interpret these two paragraphs in the TKPROF report?

Thanks again for your time and help.


Tom Kyte
February 26, 2004 - 2:06 pm UTC

Ok, the sort=execpu sorted the data for us. that is why the insert appeared above the procedure call.


I just ran this test on windows:


set echo on
connect tkyte/tkyte
drop table t;
create table t (x int);

create or replace function f( x in number ) return number
deterministic
as
l_n number := 0;
begin
for i in 1 .. x
loop
l_n := ln(i);
end loop;
return l_n;
end;
/

create index t_idx on t(f(x));



create or replace procedure p( x in number, y in number )
as
l_n number := 0;
begin
for i in 1 .. y
loop
l_n := ln(i);
end loop;
insert into t values( p.x );
end;
/

variable x number
variable y number
exec :x := 0; :y := 10000
exec p(:x,:y);


alter session set sql_trace=true;
exec p(:x,:y);
disconnect
connect tkyte/tkyte
alter session set sql_trace=true;
exec :x := 10000; :y := 0
exec p(:x,:y);
disconnect
connect tkyte/tkyte


So, we warm up everything and then execute p with 0, 10000 (means p the procedure runs long, insert into t goes fast -- ln(i) is expensive to compute)

then we run p with 10000, 0 means p the procedure runs fast, insert into t goes slow....


the tkprofs show (on windows)


BEGIN p(:x,:y); END;

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.73       0.73          0          0          0          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.73       0.73          0          0          0          1
********************************************************************************
INSERT into t values ( :b1 )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          3          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        1      0.00       0.00          0          0          3          1



and then:

BEGIN p(:x,:y); END;

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.00          0          0          0          1
********************************************************************************
INSERT into t values ( :b1 )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.72          0          1          4          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.72          0          1          4          1



so, we can see tkprof does not roll up the times of the sql executed inside a procedure into the procedure itself.


So, what your report above means is

a) the procedure "sans sql" runs in about 0.20 CPU seconds.

b) the SQL insert executed by that procedure took a long time to parse (probably lots of indexes, constraints and foreign keys to get together) and a long time to execute (complex indexes, triggers, constraints, fkeys, etc)


that's all -- it means the low hanging fruit to pick is:

a) make sure to avoid the parse as much as you can. PLSQL will go a long way there; no matter how many times in a session you call that procedure, it should only show 1 parse in tkprof for the insert

b) look at the table and see if you have too many unnecessary indexes, what the constraints are, if anything obviously could be "tuned up" a little.
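A quick way to do (b) is to inventory everything that fires on each insert into the table. This is just a sketch, assuming the table from the question (CHR_INQUIRY_LOG); substitute your own table name:

```sql
-- everything maintained or checked on each insert into the table
select 'index'      what, index_name      name from user_indexes     where table_name = 'CHR_INQUIRY_LOG'
union all
select 'trigger'    what, trigger_name    name from user_triggers    where table_name = 'CHR_INQUIRY_LOG'
union all
select 'constraint' what, constraint_name name from user_constraints where table_name = 'CHR_INQUIRY_LOG';
```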

RE: I just ran this test on windows

Mark A. Williams, February 26, 2004 - 3:12 pm UTC

"I just ran this test on windows"

It must be getting unusually cold somewhere :)

(though I suspect VMWare was involved)

- Mark

Tom Kyte
February 26, 2004 - 4:22 pm UTC

vmware, definitely.

It is ironically amusing to run windows in a window. Very comforting in a way. When I'm done, I can just kill -9 it all if i like :)




THANK YOU

A reader, February 26, 2004 - 4:21 pm UTC

Hi Tom,

You explained everything very, very clearly.

Many many thanks!!!

Re: PIO > LIO

Vlado, February 27, 2004 - 9:04 am UTC

But why would there be different amounts of data put to temp, if all that changed was the order of the joins?

Tom Kyte
February 27, 2004 - 9:08 am UTC

you are comparing apples to bananas here. change the order of the tables and everything is different.

also, it could be that you were using pga aggregate target and got different amounts of ram each run (due to differing loads on your system)

9i SQL Trace & Tkprof

Denni50, March 03, 2004 - 2:56 pm UTC

Tom

I'm trying to determine why my 9i test db is running very
slow. I'm the only user that runs the system as I'm developing tables, forms...etc

I noticed when I run a simple select query the CPU jumps
to 98-100% usage; the data should be whizzing off the
screen but it kinda/sorta lumbers along.

I tried changing some of the init.ora parameters to see
if that would help. The init.ora file is the generic
file Oracle creates during installation.
I had to comment out the dispatchers (MTS) parameter that was causing
start up problems.

anyway to make a long story short I ran a trace/tkprof
on a simple select statement:

TKPROF: Release 8.1.7.0.0 - Production on Wed Mar 3 14:39:36 2004

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Trace file: ora00315.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace=true


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        1      0.00       0.00          0          0          0          0

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 71
********************************************************************************

select distinct idnumber
from
gift_orig where idnumber < 2000


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch       68    156.25    2820.00        117        121          0        999
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       70    156.25    2820.00        117        121          0        999

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 71

Rows Row Source Operation
------- ---------------------------------------------------
999 SORT UNIQUE
20467 INDEX RANGE SCAN (object id 32275)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch       68    156.25    2820.00        117        121          0        999
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       71    156.25    2820.00        117        121          0        999

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      0      0.00       0.00          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        0      0.00       0.00          0          0          0          0

Misses in library cache during parse: 0

2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
********************************************************************************
Trace file: ora00315.trc
Trace file compatibility: 8.00.04
Sort options: default

1 session in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
104 lines in trace file.


The cpu/elapsed is way out there...this is on an NT4 OS
and the System Administrator states this is an Oracle
issue not NT4???...any clues(sigh)??

I'm sorry for so many questions..I'm trying to tackle
a zillion projects at one time and they each create their
own set of problems.


Tom Kyte
March 03, 2004 - 4:25 pm UTC

use the 9i tkprof please -- we changed the timing information -- those numbers are really teeny tiny as you'll see :)

9i trace file with 8i tkprof:


select count(*)
from
all_objects


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1    300.00     156.00          0          9          0          0
Execute      1      0.00       0.81          0          0          0          0
Fetch        2  18400.00   19010.04         78     163246          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4  18700.00   19166.85         78     163255          0          1


same trace file, using 9i tkprof:

select count(*)
from
all_objects


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.03       0.01          0          9          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      1.84       1.90         78     163246          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      1.87       1.91         78     163255          0          1




You have to move your decimal point 4 to the left:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch       68      0.01       0.28        117        121          0        999
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       71      0.01       0.28        117        121          0        999


is what you have for real (instant tuning!)
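The arithmetic behind "move the decimal point 4 to the left": 9i writes its trace timings in microseconds, while the 8i tkprof assumes centiseconds, so every time value is inflated by a factor of 10,000. For example, taking the numbers from the report above:

```sql
-- the 8i tkprof reported 156.25 cpu and 2820.00 elapsed for the 9i trace;
-- dividing by 10,000 recovers the real seconds (0.015625 and 0.282)
select 156.25/10000 as cpu_seconds,
       2820.00/10000 as elapsed_seconds
  from dual;
```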

Tom...

denni50, March 04, 2004 - 9:56 am UTC

duh!

I did the tkprof from my client machine which is
connected to the 8.1.7 db... gotta remember when I need
9i specifics I need to do it from the server.

Got 3 servers going here and I'm beginning to feel
like the Led Zep song "dazed and confused".

anyway will need to do some research on running a 9i db
on NT4.

thanks and have a great w/e!(off friday..need a break!)



Tom Kyte
March 04, 2004 - 1:52 pm UTC

You should try 8ir3, 9ir2 on windows under vmware on the same box with 9ir1, 9ir2 and 10gr1 on linux :)

9.0.1.1.1 Tkprof.exe Patch

denni50, March 04, 2004 - 11:31 am UTC

Tom
The tkprof tool is not included with Oracle 9.0.1.1.1. I've searched the Oracle bin directory.

I understand this is a bug with Oracle regarding tkprof
on this particular version and there is a patch.

We are not registered users with metalink and do not have
a Support Identifier Code due to reasons I'd rather not
discuss.

Is there any other way to get the patch?
thanks

Tom Kyte
March 04, 2004 - 2:26 pm UTC

sorry -- I cannot get patches for you HOWEVER............


you can download 9iR2 and use its tkprof; it can read 9iR1 trace files. (otn.oracle.com)

thanks Tom...

denni50, March 04, 2004 - 3:00 pm UTC

was looking at the otn website to see if they had any patches...thought about downloading 9.2..heck might
as well install that version over 9.0.1.1.1.

was reading up on this vmware software..sounds really
neat you can load different OS in a virtual mode, channel back and forth..etc.

Don't think I have enough powers of persuasion to ask
organization to buy me software so I can "play" on the
old server.

When I hit the lotto I'm getting myself that new compact
IBM e_server with the xeon processors...then I'll install
Linux...10i to 20i...windoz...and the whole shebang!

I know Tom I'm getting silly...and signing off for the week.

take care
:~)



how do I know my session is being traced?

steven, March 05, 2004 - 12:42 am UTC

after I alter session set sql_trace=true;
SQL> show parameters sql_trace

sql_trace  boolean    FALSE

that just shows the system parameter.

where can I find whether sql_trace has been turned on for my session?
I know tracing is done at the process level, but I can't find it in v$process.



 

Tom Kyte
March 05, 2004 - 7:55 am UTC

ops$tkyte@ORA9IR2> declare
  2     l_event number;
  3  begin
  4          sys.dbms_system.read_ev( 10046, l_event );
  5          dbms_output.put_line( 'result = ' || l_event );
  6  end;
  7  /
result = 0
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter session set sql_trace=true;
 
Session altered.
 
ops$tkyte@ORA9IR2> declare
  2     l_event number;
  3  begin
  4          sys.dbms_system.read_ev( 10046, l_event );
  5          dbms_output.put_line( 'result = ' || l_event );
  6  end;
  7  /
result = 1
 
PL/SQL procedure successfully completed.
 

Pauline, March 29, 2004 - 12:47 pm UTC

Tom,
I tried to use your example to see the return value by using
DBMS_SYSTEM and found something wrong after upgrading our
database from 8.1.7.4 to 9.2.0.4.
We have an externally identified user called ORACLE in the Oracle 8.1.7.4 version of the database. This user could execute SYS's DBMS_SYSTEM or DBMS_UTILITY packages. See below:


<dbdev1:dev4:/appl/oracle/admin/sql> sqlplus /

SQL*Plus: Release 8.1.7.0.0 - Production on Mon Mar 29 12:14:55 2004

(c) Copyright 2000 Oracle Corporation.  All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production

SQL> sho user
USER is "ORACLE"
SQL> SELECT  *  FROM SYS.dba_role_privs
     WHERE grantee = 'ORACLE';   

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
ORACLE                         DBA                            NO  YES

SQL> desc sys.dbms_system
PROCEDURE DIST_TXN_SYNC
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 INST_NUM                       NUMBER                  IN
PROCEDURE KCFRMS
PROCEDURE KSDDDT
PROCEDURE KSDFLS
PROCEDURE KSDIND
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 LVL                            BINARY_INTEGER          IN
PROCEDURE KSDWRT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEST                           BINARY_INTEGER          IN
 TST                            VARCHAR2                IN
PROCEDURE READ_EV
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 IEV                            BINARY_INTEGER          IN
 OEV                            BINARY_INTEGER          OUT
PROCEDURE SET_BOOL_PARAM_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 PARNAM                         VARCHAR2                IN
 BVAL                           BOOLEAN                 IN
PROCEDURE SET_EV
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SI                             BINARY_INTEGER          IN
 SE                             BINARY_INTEGER          IN
 EV                             BINARY_INTEGER          IN
 LE                             BINARY_INTEGER          IN
 NM                             VARCHAR2                IN
PROCEDURE SET_INT_PARAM_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 PARNAM                         VARCHAR2                IN
 INTVAL                         BINARY_INTEGER          IN
PROCEDURE SET_SQL_TRACE_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 SQL_TRACE                      BOOLEAN                 IN
PROCEDURE WAIT_FOR_EVENT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 EVENT                          VARCHAR2                IN
 EXTENDED_ID                    BINARY_INTEGER          IN
 TIMEOUT                        BINARY_INTEGER          IN
SQL> exec sys.dbms_system.set_sql_trace_in_session(14,3,true);

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_system.set_sql_trace_in_session(14,3,false);

PL/SQL procedure successfully completed.

SQL> exec dbms_utility.analyze_schema(schema=>'STAGING', method=>'ESTIMATE', estimate_percent=>'15' );

PL/SQL procedure successfully completed.

Once we upgraded to Oracle 9i, we gave this user the SELECT ANY DICTIONARY privilege, since in 9i O7_DICTIONARY_ACCESSIBILITY defaults to FALSE. This user still has the EXECUTE ANY
PROCEDURE privilege. But it can only execute the DBMS_UTILITY package as before; it can't execute the DBMS_SYSTEM package.

See below:

<dbdev1:dev1:/appl/oracle/admin/sql> sqlplus /

SQL*Plus: Release 9.2.0.4.0 - Production on Mon Mar 29 12:32:26 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL> sho user
USER is "ORACLE"
SQL> exec dbms_utility.analyze_schema(schema=>'STAGING', method=>'ESTIMATE', estimate_percent=>'15' );

PL/SQL procedure successfully completed.

SQL>  exec sys.dbms_system.set_sql_trace_in_session(18,1,true);
BEGIN sys.dbms_system.set_sql_trace_in_session(18,1,true); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_SYSTEM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

SQL> desc SYS.DBMS_SYSTEM
ERROR:
ORA-04043: object SYS.DBMS_SYSTEM does not exist

Now let's login as SYS to query:


SQL> conn / as sysdba
Connected.
SQL> sho user
USER is "SYS"
SQL> desc DBMS_SYSTEM
PROCEDURE DIST_TXN_SYNC
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 INST_NUM                       NUMBER                  IN
PROCEDURE KCFRMS
PROCEDURE KSDDDT
PROCEDURE KSDFLS
PROCEDURE KSDIND
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 LVL                            BINARY_INTEGER          IN
PROCEDURE KSDWRT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEST                           BINARY_INTEGER          IN
 TST                            VARCHAR2                IN
PROCEDURE READ_EV
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 IEV                            BINARY_INTEGER          IN
 OEV                            BINARY_INTEGER          OUT
PROCEDURE SET_BOOL_PARAM_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 PARNAM                         VARCHAR2                IN
 BVAL                           BOOLEAN                 IN
PROCEDURE SET_EV
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SI                             BINARY_INTEGER          IN
 SE                             BINARY_INTEGER          IN
 EV                             BINARY_INTEGER          IN
 LE                             BINARY_INTEGER          IN
 NM                             VARCHAR2                IN
PROCEDURE SET_INT_PARAM_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 PARNAM                         VARCHAR2                IN
 INTVAL                         BINARY_INTEGER          IN
PROCEDURE SET_SQL_TRACE_IN_SESSION
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SID                            NUMBER                  IN
 SERIAL#                        NUMBER                  IN
 SQL_TRACE                      BOOLEAN                 IN
PROCEDURE WAIT_FOR_EVENT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 EVENT                          VARCHAR2                IN
 EXTENDED_ID                    BINARY_INTEGER          IN
 TIMEOUT                        BINARY_INTEGER          IN

SQL> select status from user_objects where object_name like 'DBMS_SYSTEM';


STATUS
--------
VALID
VALID

What is wrong? Your insight is always appreciated.

Thanks very much.
 

Tom Kyte
March 29, 2004 - 12:49 pm UTC

grant execute on dbms_system to that user.
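For example (run as SYS; the grantee name ORACLE comes from the question, and the verification query is just one assumed way to check it):

```sql
-- allow the user to reference SYS.DBMS_SYSTEM directly
grant execute on sys.dbms_system to oracle;

-- confirm the direct object grant exists
select grantee, privilege
  from dba_tab_privs
 where table_name = 'DBMS_SYSTEM'
   and grantee = 'ORACLE';
```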

Pauline, March 31, 2004 - 4:24 pm UTC

Tom,
1. Thanks very much for the response. After I granted execute on DBMS_SYSTEM to this user as SYS and then connected as
this user, I can execute sys.dbms_system.set_sql_trace_in_session.

But I still have suspicions about the privileges issue.
See below:
SQL> SELECT grantor, owner,table_name "Object", PRIVILEGE, grantable "Grant Option?",  ' ' "By Role"
  FROM SYS.dba_tab_privs
 WHERE grantee = 'ORACLE'
 and grantor='SYS'
 and table_name like 'DBMS%'
UNION
SELECT grantor, owner,table_name "Object", PRIVILEGE, grantable "Grant Option?",  grantee
  FROM SYS.dba_tab_privs
 WHERE grantee IN (SELECT granted_role
                     FROM SYS.dba_role_privs
                    WHERE grantee = 'ORACLE')
and grantor='SYS'
and table_name like 'DBMS%';

GRANTOR    OWNER                          Object                         PRIVILEGE            Gra By Role
---------- ------------------------------ ------------------------------ -------------------- --- ------------------------------
SYS        SYS                            DBMS_CDC_UTILITY               EXECUTE              NO  SELECT_CATALOG_ROLE
SYS        SYS                            DBMS_DEFER_QUERY               EXECUTE              NO  DBA
SYS        SYS                            DBMS_DEFER_SYS                 EXECUTE              NO  DBA
SYS        SYS                            DBMS_FLASHBACK                 EXECUTE              NO  DBA
SYS        SYS                            DBMS_RESUMABLE                 EXECUTE              NO  DBA
SYS        SYS                            DBMS_STORAGE_MAP               EXECUTE              NO  DBA
SYS        SYS                            DBMS_SYSTEM                    EXECUTE              NO

7 rows selected.

Why, after upgrading to 9i, do we have to explicitly grant execute on dbms_system to this user, who has the DBA role,
when we have already granted SELECT ANY DICTIONARY to this
user? Why can this user execute DBMS_UTILITY even though we don't
see that privilege in the above query? What should be the
standard behavior for a user who has the DBA role and tries
to execute SYS's procedures/packages in Oracle 9i? I tried
granting this user execute_catalog_role as SYS but it doesn't
work.

2. When I try to see what is return value by executing
sys.dbms_system.read_ev to compare before and after session
set sql_trace=true, I don't see any return value:

SQL> sho user
USER is "ORACLE"
SQL> declare
  2        l_event number;
   begin
  3    4             sys.dbms_system.read_ev( 10046, l_event );
           dbms_output.put_line( 'result = ' || l_event );
  5    6     end;
  7  /

PL/SQL procedure successfully completed.

SQL> alter session set sql_trace=true;

Session altered.

SQL> declare
  2        l_event number;
  3     begin
  4             sys.dbms_system.read_ev( 10046, l_event );
  5             dbms_output.put_line( 'result = ' || l_event );
  6     end;
  7  /

PL/SQL procedure successfully completed.

What could be wrong?

Thanks again. 

Tom Kyte
March 31, 2004 - 4:29 pm UTC

"O7_DICTIONARY_ACCESSIBILITY defaults to FALSE."

it does that -- it protects all SYS owned code, makes it so that 'EXECUTE ANY PROCEDURE' isn't "good" enough anymore.

It is a security thing


SQL> set serveroutput on

you need to do that to see dbms_output stuff. 

Pauline, March 31, 2004 - 5:00 pm UTC

Thanks so much.

interpreting tkprof

Florin, May 07, 2004 - 4:13 am UTC

Hi Tom,

I have a query which is running very slowly (even though it uses an index). The tkprof shows the following:

select count(*) into :b1
from
AC_PROCESSING_ACCOUNTING where (BALANCE_DATE is null and NXT_JOB_NUMBER=:b2)



call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute    204      0.03       0.01          0          0          0          0
Fetch      204      3.61      89.00      16536     118207          0        204
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      408      3.64      89.01      16536     118207          0        204

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 23 (LNOLOCALC)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: RULE
0 SORT (AGGREGATE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'AC_PROCESSING_ACCOUNTING'
0 INDEX (RANGE SCAN) OF 'AC_PROCESSING_ACCOUNTING_6IX'
(NON-UNIQUE)


How should I interpret this?

Thanks in advance

Tom Kyte
May 07, 2004 - 8:26 am UTC

it is probably because of the index (indexes <> fast=true)

look at the physical IO's in this case. 16,536 of them. You were doing about 186 PIO's per second, not too shabby for most conventional disks.


Your query does 580 logical IO's per execution. that is pretty high.

I'd actually be asking myself these questions:

a) to what end is this count(*), WHY am I counting? is it to see if I should process something? If so, JUST PROCESS SOMETHING and upon discovering "nothing to process" -- stop (eg: the best way to speed up something -- don't even do it in the first place!!!). I find almost all count(*)'s -- never needed.

b) if I cannot get it out of my head that the count(*) isn't needed -- am I really just looking for "existence". eg: count(*) = 1 is as good as count(*) = 12332 -- i don't really care about the count, just that there is a record? if so, add "and rownum = 1"

c) why am I running this query 204 times???? perhaps my algorithm is botched and this query belongs inside of a larger query (eg: remove procedural code, as much as possible)

d) why am I accessing the table. My index must be on NXT_JOB_NUMBER. If I simply added BALANCE_DATE to it -- we'd be able to get a plan that was just:

0 SELECT STATEMENT GOAL: RULE
0 SORT (AGGREGATE)
0 INDEX (RANGE SCAN) OF 'AC_PROCESSING_ACCOUNTING_6IX'
(NON-UNIQUE)

no table access.

e) why are we still running the less capable RBO :)
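Points (b) and (d) can be sketched as follows. The table, columns and bind names come from the question; the new index name is made up, so treat this as an illustration rather than the exact fix:

```sql
-- (d) a covering index: the count can be answered from the index alone,
--     with no TABLE ACCESS BY INDEX ROWID step
create index ac_processing_accounting_7ix
    on ac_processing_accounting ( nxt_job_number, balance_date );

-- (b) if you only care whether a row exists, stop at the first match
select count(*)
  into :b1
  from ac_processing_accounting
 where balance_date is null
   and nxt_job_number = :b2
   and rownum = 1;
```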


Brilliant!!!

Florin, May 07, 2004 - 10:22 am UTC

Many thanks for the explanations!

Can we create a trigger to trace a session after it starts?

PAUL, August 20, 2004 - 9:09 am UTC


Hi Tom, can we write a trigger that starts tracing a session when it begins? How do we do this?



Tom Kyte
August 20, 2004 - 11:27 am UTC

use a login trigger.

create trigger whatever after logon on ....
begin
    execute immediate 'alter session set ....';
end;

owner of said trigger needs to have alter session granted DIRECTLY to them.

if you have either of my books -- i have many examples of this.
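For reference, a fuller sketch of such a logon trigger (the trigger name and the schema-level scope here are assumptions; enabling timed_statistics as well is optional but usually what you want alongside sql_trace):

```sql
create or replace trigger trace_on_logon
after logon on scott.schema    -- or: after logon on database
begin
    execute immediate 'alter session set timed_statistics = true';
    execute immediate 'alter session set sql_trace = true';
end;
/
```

As noted above, the trigger owner needs ALTER SESSION granted directly, not through a role.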

Correct, thanks!

PAUL, August 27, 2004 - 6:07 am UTC

Thanks, I have ordered one from Amazon already. Shipping in progress. ^_^

BTW, sometimes I can generate a trace file of a session, the case is like this:

1) I start tracing session A, and I get session A's trace file.

2) Then, I alter session A to set trace off. Then, I delete the trace file (session A is still open).

3) Then, I want to trace session A again, so I alter session A to turn the trace on. This time, no trace file for session A is generated. WHY?

Please kindly advise.

PAUL

Tom Kyte
August 27, 2004 - 7:52 am UTC

because when you "erase" a file, all you've done is wipe out its directory entry -- the file still exists for anyone that has it open.

a trace file is not "closed" when you stop tracing; we just stop writing to it. the trace file still exists. you 'erased' it, but that just removed the directory entry -- the file still existed, still consumed space on the file system. When you turned on tracing again, we did not "open a file", we just started writing to the file we already had open -- but you cannot see it anymore since you removed the directory entry.

But why?

PAUL, August 27, 2004 - 10:51 am UTC

You mentioned that:
"When you turned on tracing again, we did not "open a file", we just started writing to the file we already had open - -but that you cannot see anymore since you removed the directory entry."

But, I've already physically deleted the trace file -- how can Oracle start writing additional SQL statements to the "deleted" file? That seems impossible.

BTW, how can I trace these SQL statements of the same session if Oracle doesn't re-generate a new trace file? Any other way to handle it?

Many Thanks,
PAUL

Tom Kyte
August 27, 2004 - 11:04 am UTC

you didn't physically delete anything.

you simply removed an entry from a directory file.


how to handle it? don't prematurely erase file directory entries, don't use "rm" on an open file (because it just makes the file inaccessible to you)

it is the way the OS works. some OS's won't let you 'erase' a file that is open, some do. those that do -- don't really 'erase' anything.
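Tom's point can be reproduced at any Unix-like shell prompt; the filename below is hypothetical. A minimal sketch:

```shell
# A process that holds a file open keeps writing to it after "rm";
# only the directory entry disappears.
exec 3> /tmp/demo_trace.trc      # open file descriptor 3 for writing
echo "trace line 1" >&3
rm /tmp/demo_trace.trc           # removes the directory entry only
echo "trace line 2" >&3          # the write still succeeds
ls /tmp/demo_trace.trc 2>/dev/null || echo "no directory entry"
exec 3>&-                        # close the descriptor; now the space is reclaimed
```

Once the last open descriptor is closed, the space is actually reclaimed -- which is why ending the session (or closing all cursors) is what truly releases a "deleted" trace file.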

Thanks!

PAUL, August 30, 2004 - 10:11 am UTC


DBMS_SYSTEM & timed_statistics

PAUL, August 31, 2004 - 5:36 am UTC

Hi Tom,

This procedure is behaving strangely.
I try to turn on "timed_statistics" before tracing a session. However, when I get the trace file, there are no timing statistics inside!!
I've already granted DBMS_SYSTEM to the user -- is anything wrong??


...
BEGIN
-- Turn on 'timed_statistics'
sys.dbms_system.set_bool_param_in_session(SessionId,Serial,'timed_statistics',TRUE);
-- Start tracing
sys.dbms_system.set_sql_trace_in_session(SessionId,Serial,TRUE);
END;
...


Regards,
PAUL

Tom Kyte
August 31, 2004 - 9:47 am UTC

looks like that one does not get triggered, suggest you just use

alter system set timed_statistics=true;

and leave it on permanently -- i do, nominal overhead -- yet so very very useful.
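A minimal sketch of that advice (assumes the ALTER SYSTEM privilege; you could equally set it in the init.ora/spfile):

```sql
-- leave timing on instance-wide; the overhead is nominal
alter system set timed_statistics = true;

-- verify the current setting
select value from v$parameter where name = 'timed_statistics';
```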

follow

PAUL, August 31, 2004 - 5:45 am UTC

But if I execute these DBMS_SYSTEM procedures in SQLPLUS:

e.g.
exec sys.dbms_system.set_bool_param_in_session(7,12055,'timed_statistics',TRUE);
exec sys.dbms_system.set_sql_trace_in_session(7,12055,TRUE);

I can get the time statistics!
Do I need to grant more rights to the user when using DBMS_SYSTEM inside packages?

Thanks,
PAUL

Tom Kyte
August 31, 2004 - 9:53 am UTC

not for me, the cpu time isn't showing up.

Performance Query

William, September 08, 2004 - 2:24 am UTC

Tom,

I have traced a slow part of our application with a logon trigger. After using tkprof on the trace file, I found the 2 offending SQL:

SELECT COUNT(ID) COUNTER
From
OBJECT_LINKS WHERE (FROM_OBJ_TYPE_CODE = :OBJ_TYPE AND FROM_OBJ_ID =
:OBJ_ID AND DATE_LOGICALLY_DELETED IS NULL AND FROM_RELEASE_LEVEL
= 'T' ) OR (TO_OBJ_TYPE_CODE = :OBJ_TYPE AND TO_OBJ_ID = :OBJ_ID
AND DATE_LOGICALLY_DELETED IS NULL AND TO_RELEASE_LEVEL = 'T')


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3.81 15.28 22134 41456 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 3.81 15.28 22134 41456 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 46 (PROMIS)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
0 CONCATENATION
0 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
51967 INDEX RANGE SCAN OBJECT_LINK_TO_FK_I (object id 31459)
0 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
1384704 INDEX RANGE SCAN OBJECT_LINK_FROM_FK_I (object id 31457)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
0 CONCATENATION
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
51967 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_TO_ID_FK_I' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
1384704 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_FROM_ID_FK_I' (NON-UNIQUE)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 22134 0.04 12.11
latch free 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
********************************************************************************

SELECT COUNT(ID) COUNTER
From
OBJECT_LINKS WHERE (FROM_OBJ_TYPE_CODE = :OBJ_TYPE AND FROM_OBJ_ID =
:OBJ_ID AND DATE_LOGICALLY_DELETED IS NULL AND
((FROM_RELEASE_LEVEL = 'G' OR FROM_RELEASE_LEVEL IS NULL)
OR ((FROM_RELEASE_LEVEL = 'T') AND (CHECK_ACCESS.ACCESS_TO_ENTITY(USER,
TO_OBJ_ID,TO_OBJ_TYPE_CODE) <> 'N')))) OR (TO_OBJ_TYPE_CODE = :OBJ_TYPE
AND TO_OBJ_ID = :OBJ_ID AND DATE_LOGICALLY_DELETED IS NULL AND
((TO_RELEASE_LEVEL = 'G' OR TO_RELEASE_LEVEL IS NULL) OR
((TO_RELEASE_LEVEL = 'T') AND (CHECK_ACCESS.ACCESS_TO_ENTITY(USER,
FROM_OBJ_ID,FROM_OBJ_TYPE_CODE) <> 'N'))))


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3.89 14.69 22134 41456 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 3.89 14.70 22134 41456 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 46 (PROMIS)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
11 CONCATENATION
1 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
51967 INDEX RANGE SCAN OBJECT_LINK_TO_FK_I (object id 31459)
10 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
1384704 INDEX RANGE SCAN OBJECT_LINK_FROM_FK_I (object id 31457)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
11 CONCATENATION
1 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
51967 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_TO_ID_FK_I' (NON-UNIQUE)
10 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
1384704 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_FROM_ID_FK_I' (NON-UNIQUE)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 22134 0.07 11.57
SQL*Net message from client 2 0.00 0.00
********************************************************************************


The strange part of this is that if I run the queries in SQL*Plus, they complete in less than 1 second.

var OBJ_TYPE VARCHAR2(6)
var OBJ_ID NUMBER

BEGIN
:OBJ_TYPE := 'PERSON';
:OBJ_ID := 787656;
END;
/

SELECT COUNT(ID) COUNTER
From OBJECT_LINKS
WHERE (FROM_OBJ_TYPE_CODE = :OBJ_TYPE
AND FROM_OBJ_ID = :OBJ_ID
AND DATE_LOGICALLY_DELETED IS NULL
AND FROM_RELEASE_LEVEL = 'T' )
OR (TO_OBJ_TYPE_CODE = :OBJ_TYPE
AND TO_OBJ_ID = :OBJ_ID
AND DATE_LOGICALLY_DELETED IS NULL
AND TO_RELEASE_LEVEL = 'T')
/


COUNTER
----------
0

1 row selected.

Elapsed: 00:00:01.35

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23 Card=1 Bytes=37)
1 0 SORT (AGGREGATE)
2 1 CONCATENATION
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'OBJECT_LINKS' (Cost=2 Card=1 Bytes=37)
4 3 INDEX (RANGE SCAN) OF 'OBJECT_LINK_TO_ID_FK_I' (NON-UNIQUE) (Cost=3 Card=8)
5 2 TABLE ACCESS (BY INDEX ROWID) OF 'OBJECT_LINKS' (Cost=2 Card=1 Bytes=37)
6 5 INDEX (RANGE SCAN) OF 'OBJECT_LINK_FROM_ID_FK_I' (NON-UNIQUE) (Cost=3 Card=8)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14 consistent gets
6 physical reads
0 redo size
190 bytes sent via SQL*Net to client
248 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Can you help?

William

Tom Kyte
September 08, 2004 - 9:33 am UTC

did you do a 10046 level 12 trace? if so, the ACTUAL bind variable values are in the trace file -- use THOSE in sqlplus and show us what you see.

Execution Plan

William, September 09, 2004 - 12:31 am UTC

Tom,

Can you explain this from the tkprof output?
The "Row Source Operation" shows "INDEX RANGE SCAN OBJECT_LINK_TO_FK_I" but the "Execution Plan" shows :
INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_TO_ID_FK_I' (NON-UNIQUE)


Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
11 CONCATENATION
1 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
51967 INDEX RANGE SCAN OBJECT_LINK_TO_FK_I (object id 31459)
10 TABLE ACCESS BY INDEX ROWID OBJECT_LINKS
1384704 INDEX RANGE SCAN OBJECT_LINK_FROM_FK_I (object id 31457)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 SORT (AGGREGATE)
11 CONCATENATION
1 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
51967 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_TO_ID_FK_I' (NON-UNIQUE)
10 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'OBJECT_LINKS'
1384704 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'OBJECT_LINK_FROM_ID_FK_I' (NON-UNIQUE)



Tom Kyte
September 09, 2004 - 8:09 am UTC

Row Source Operation = "exactly what happened"

Execution Plan = output of explain plan = "what probably would happen, but is often wrong"


since tkprof runs lots faster without explain= and the row source operation is always the source of truth, my advice is to never use explain=


I can make explain plan 'lie' by:

a) using binds (bind variable peeking, data type conversions)
b) having a different environment in the application than tkprof had (eg: alter session)




Using alter session to control session enviroment

A reader, October 12, 2004 - 7:34 am UTC

Hi Tom,

Somewhere in this thread you mention about using a logon trigger as an option of setting oracle init parameters to control the session's environment.

I was considering using this approach for setting session level parameters based on the application that was establishing the session to the d/b (i.e by checking the machine name the session was originating from and the image that was requesting the session).
Would this work for me and what should I be aware of before implementing it ?

Thanks
Arief

Tom Kyte
October 12, 2004 - 8:31 am UTC

if I were doing this at the application level and I owned the application, i would have the application actually do it.

but you could do it this way. just realize that "program" in v$session won't be foolproof ($ cp sqlplus your_application_name -- for example).



logon trigger to change session env

A reader, October 12, 2004 - 10:24 am UTC

If the application name is changed we would have bigger problem. It is a canned app so no chance for implementing this via the app.

Oh...one more thing.

I also plan to use the machine name to enforce a specific resource plan/group for the user (the user is going to be the same, i.e. the application user, irrespective of the machine originating the connection, much like the Oracle Apps/E-Business Suite user). However, we know which end users are going to connect to the app via which machine and what functions they are going to perform in it. So we need to control the resources they will use on the d/b end.

Is this going to work out? Is this even the right approach for this situation.

Thanks for your feedback.

Arief

Tom Kyte
October 12, 2004 - 10:35 am UTC

using the machine name for resource manager?

that'll not work -- in 10g you can use a SERVICE NAME (the tns connect information), but in 9i -- it is by user.

Controlling resources....

A reader, October 12, 2004 - 11:34 am UTC

I saw the following api and it got me thinking...

dbms_resource_manager.switch_consumer_group_for_sess (session id, session serial, consumer group).

I was planning to use the machine name as a criteria for differentiating the session using the application from a specific server. I would of course extract the session id and the serial before using the above api.

Any suggestions on how I may achieve the following:
Single schema/application user. Different servers (middle tiers) making connection to the d/b, each producing a differing load on the d/b (using different parts of the app). How can I control resource usage for this scenario?

Let's say d/b connections from Server A should be limited to queries that don't consume more than X% of the CPU at any given time, or not generate more than x of redo or some other resource.

Any neat tricks up your sleeve?

Thanks a lot.

Arief

Tom Kyte
October 12, 2004 - 11:41 am UTC

err, yes... i suppose you could do that. thanks for pointing that out -- yes, that would work as long as the session is used by a single application.

I'm not aware of anyone doing it though -- maybe someone passing by will have "actual experiences" to contribute.

Resource management

A reader, October 12, 2004 - 12:28 pm UTC

Hi Tom:

Thanks a lot for your response.
The session is guaranteed to be used by one application only (dedicated server connection).


I'd appreciate feed back from anyone who has tried this implementation.

Thanks once again.

Arief


Nice

Raju, November 02, 2004 - 12:58 pm UTC

Hello Tom,
Which .sql file should I run to get the DBMS_SYSTEM package
installed in my database?
Please do reply.
Bye!




Tom Kyte
November 03, 2004 - 5:51 am UTC

it is installed already, it is owned by sys, it has no synonyms, and it has no grants (and in 9i by default, you will have to grant on it directly -- grant execute on dbms_system to george -- since it is owned by sys and sys owned things are locked down)
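So, connected as SYS, a direct grant might look like this (GEORGE is just an example user; a grant received via a role is not usable inside definer's-rights PL/SQL, hence the direct grant):

```sql
grant execute on dbms_system to george;

-- george then calls it fully qualified, since no public synonym exists, e.g.:
-- exec sys.dbms_system.set_sql_trace_in_session( <sid>, <serial#>, true );
```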

Thanks

Raju, November 03, 2004 - 12:09 pm UTC

Hi Tom,
Thanks for your response.
When we enable PL/SQL tracing using the dbms_trace package, how
do we see the output? The Oracle documentation says that we have to run
tracetab.sql, which creates the tables that
will be written to by the system. What are those tables? How do we get meaningful output?
Please do reply


Tom Kyte
November 04, 2004 - 9:24 am UTC

search for dbms_trace on this site and you'll see an example (or two or three or maybe more)

A reader, January 09, 2005 - 8:53 pm UTC

Tom,

Can you please tell something about this query?
Can you tell how it can be corrected?

SELECT COUNT(*)
FROM
HMASTER WHERE HOST = :b1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 254 2.84 639.56 0 0 762 0
Fetch 253 1.32 145.11 0 0 0 253
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 507 4.16 784.67 0 0 762 253

Misses in library cache during parse: 0

Tom Kyte
January 10, 2005 - 8:21 am UTC

umm, no not really. sort of insufficient data here.

I can say, I would probably not RUN it at all. 99.999999999999999999999999% of the time, these "look for a count and do something" should just be "do something and if we discover there is nothing to be done, so be it"

eg: instead of:


select count(*) into L_var from t where ....;
if ( l_var > 0 )
then
do_something;
end if;


just code:

do_something;

do_something will do nothing, since there is nothing in the table to do something to.
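A sketch of the rewrite, using a hypothetical table T with a hypothetical PROCESSED column -- the guarded version and the direct version have the same net effect:

```sql
-- instead of:
declare
  l_cnt number;
begin
  select count(*) into l_cnt from t where host = :b1;
  if l_cnt > 0 then
    update t set processed = 'Y' where host = :b1;
  end if;
end;
/

-- just:
begin
  update t set processed = 'Y' where host = :b1;  -- touches zero rows if none match
end;
/
```

The direct version saves the counting pass entirely, which in the tkprof above was most of the 784 seconds.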

A reader, January 09, 2005 - 10:09 pm UTC

Tom,

This looks like an expensive query too. Can you please suggest an alternative to this?

SELECT LPAD(TABKEY.NEXTVAL,13,'0')
FROM
DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1285 0.80 0.72 0 0 0 0
Execute 1285 0.19 0.31 0 0 0 0
Fetch 1285 0.58 0.54 0 1285 5212 1285
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3855 1.57 1.57 0 1285 5212 1285




Tom Kyte
January 10, 2005 - 8:28 am UTC

don't select it from dual,

insert it directly.


don't:

select lpad( .... ) into whatever from dual;
insert into t ( ... ) values ( whatever, .... );


just

insert into t ( .... ) values ( lpad(....), ..... );


and if you needed the value in your program:

begin
insert into t ( .... ) values ( lpad(....), ..... )
returning that_column into YOUR_HOST_VARIABLE;
end;
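A concrete sketch of that RETURNING form -- table and column names are hypothetical, TABKEY is the sequence from the question:

```sql
declare
  l_key t.pk_col%type;
begin
  insert into t (pk_col, payload)
  values (lpad(tabkey.nextval, 13, '0'), 'some data')
  returning pk_col into l_key;
  -- l_key now holds the padded sequence value; no round trip to DUAL needed
end;
/
```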





quick question on tkprof

Menon, January 28, 2005 - 2:05 pm UTC

Hi Tom
Does "alter session set events '10046 trace name context
off'" turn the tracing off and flush the contents?
Or do we have to exit the session?

This is not an issue if you tune using sql plus but
if you are trying to turn on and off the tracing
for the same connection without disconnecting (e.g.
in connection pool) then this is an issue.

Thanx!

Tom Kyte
January 28, 2005 - 3:02 pm UTC

to "flush the contents" you must

a) close all cursors or....
b) terminate the session (which does a)


alter session set events '10046 trace name context off';


thanx.

Menon, January 28, 2005 - 3:38 pm UTC

Did not understand why you repeated the "alter session.."
at the end? Just checking if I missed something.

Also, if you are using statement caching, the cursors
may not actually get closed. So in effect if you just
execute "alter session set events '10046 trace name context off'", it does not guarantee that the trace file
generated will have everything, correct?



Tom Kyte
January 28, 2005 - 7:19 pm UTC

correct, you need to have the cursors actually closed, if you are using jdbc3.0 statement caching for example, they won't be (which is GOOD, unless you are tracing)

thanx!

Menon, January 28, 2005 - 7:24 pm UTC


Thanks

veeresh, February 03, 2005 - 2:11 am UTC

Hi Tom,
I have doubts about explain plan for report queries.
How can I get an explain plan for report queries where a data link exists across more than one query group?



Tom Kyte
February 03, 2005 - 1:11 pm UTC

sorry, not sure what you mean... don't know what a "datalink" is.


turn on sql_trace, tkprof will show you all of the sql executed and all of the plans used by that sql.

Datalink

veeresh, February 04, 2005 - 6:35 am UTC

Hi Tom,
"Datalink" In Oracle Reports One Tool Is available to link with two or more then two queries .
For Example :
Query1 As Select * From Dept
Query2 As Select * From Emp where Deptno = :Deptno

Deptno Field will be used as datalink to join this two queries.
How oracle joins internally these two queries ?
If I will come to know about that then I can explain plan as a single statement not individually.

Thanks in advance



Tom Kyte
February 04, 2005 - 11:43 am UTC

i don't know, trace it and see -- i've never written a report in my life using reports.

q on tkprof output

A reader, February 16, 2005 - 7:53 pm UTC

Tom
Can you re-iterate the meaning of "cr=", "pr=" etc
in an execution plan for tkprof?
e.g.
INDEX FULL SCAN A_IDX1 PARTITION: KEY 371 (cr=290 pr=0 pw=0 time=24697 us)(object id 45714 )

Which Oracle doc explains this?

My guesses are
So cr="consistent read"
pr=physical read (temp)
pw=physical write (temp)
time=time in microseconds

Thanx.

Tom Kyte
February 16, 2005 - 7:57 pm UTC

:1,$s/(temp)//g

for those that do not speak vi/sed, that means no temp, just reads and writes of anything.

insert /*+ append */ into t select * from big_table.big_table where rownum <=
1000000


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.02 0 30 0 0
Execute 1 6.16 10.44 14783 15475 1109 1000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6.17 10.46 14783 15505 1109 1000000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 217

Rows Row Source Operation
------- ---------------------------------------------------
1 LOAD AS SELECT (cr=15475 r=14783 w=14647 time=10446628 us)
1000000 COUNT STOPKEY (cr=14793 r=14783 w=0 time=5745182 us)
1000000 TABLE ACCESS FULL OBJ#(37783) (cr=14793 r=14783 w=0 time=4356881 us)



big_table was a simple big_table, no views....


r=physical IO to read big_table
w=physical writes for the direct IO to T

thanx!

A reader, February 16, 2005 - 8:59 pm UTC

does it show the "estimated cardinality" in this as well? Or is that only obtainable through explain plan?
I know tkprof shows you reality, but does it also show the estimated cardinality assumed by the plan with which it actually executed the query?

Thanx.

Tom Kyte
February 17, 2005 - 7:30 am UTC

tkprof is showing actual row counts that flowed through out of each phase of the plan.

these are reality

autotrace/explain plan in sqlplus -- those show what was guessed.

when "tuning" a query, the first thing I do anymore is ask for a tkprof (reality) and an autotrace traceonly explain of the same (the guess). Then I look for serious mismatches between the guess and reality -- those bad cardinalities lead to bad plans being selected. Then I look for "why" (stale stats, missing stats, incomplete stats....)

SAN SAN SAN

Dave T, February 17, 2005 - 6:29 am UTC

HI Tom,

We are having major problems in an environment where there are two concurrent data loads running with large volumes.

I've set up the following test case to run when the server is under heavy usage and low usage.

Here are the results for heavy usage:

SELECT /*+ FULL ( P ) FULL ( PT ) NOPARALLEL ( P ) NOPARALLEL ( PT ) USE_HASH ( P PT ) */
COUNT(*)
FROM DS_POLICY P,
DS_POLICY_TRANSACTION PT
WHERE P.ID = PT.FKPOLICYUID
AND P.START_DATE = PT.FKPOLICYSTARTDATE

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 269.51 218.98 2129476 1971265 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 269.51 218.98 2129476 1971265 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 128

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
41614211 HASH JOIN
41614225 PARTITION RANGE ALL PARTITION: 1 55
41614225 TABLE ACCESS FULL DS_POLICY_TRANSACTION PARTITION: 1 55
42319656 PARTITION RANGE ALL PARTITION: 1 55
42319656 TABLE ACCESS FULL DS_POLICY PARTITION: 1 55


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 61936 1.48 2056.93
direct path write 39788 0.45 88.86
db file sequential read 117 0.43 2.15
latch free 16 0.01 0.04
direct path read 39788 0.85 545.63
SQL*Net message from client 2 0.05 0.05
********************************************************************************

I'm trying to prove that there is a problem with I/O.

Do you think this is a suitable test? Also do you think the above results show I/O problems?

Thanks.

Tom Kyte
February 17, 2005 - 9:21 am UTC

ask the "IO guys" if 0.03 seconds average wait is "good" for your hardware (2056/61936)


direct path read/write is temp space for the hash join, you could probably get rid of that with "memory"

but the db file scattered reads are full scans, no getting rid of those probably -- a max wait of 1.48 seconds indicates "something probably isn't optimal".

A reader, February 17, 2005 - 10:07 am UTC

Tom,

Thanks for the feedback.

We have tried increasing the hash_area_size parameter but this leads to 'Out of Memory' errors.

We've set up a meeting with the hardware boys.

From what I can gather through research the wait time is unacceptable for a SAN.

Thanks again.


Dave T, February 25, 2005 - 5:31 am UTC

This is a good example of where a scientific approach can yield dividends.

The evidence uncovered through the TKPROF has led to further investigation into the I/O subsystem.

This has proved that there is an external I/O problem.

The undeniable stats provided to the Ops team were the catalyst that led to an investigation.

Be scientific in your approach.

OK

Raju, May 08, 2005 - 9:45 am UTC

Hi Tom,
Suppose I am the DBA. Can I prevent some users
from tracing their statements?


Tom Kyte
May 08, 2005 - 1:41 pm UTC

they should need the alter session privilege, but that check seems to not be working in 10g.

so in 9i and before don't grant alter session.

I'll file a bug against 10g.



OK

Raju, May 08, 2005 - 10:04 pm UTC

Hi Tom,
I would be happy if you could give the SQL statement.

Tom Kyte
May 09, 2005 - 6:58 am UTC

revoke alter session from <whomever>;

Which one to use?

Reader, June 08, 2005 - 8:07 am UTC

To use TKPROF we want to trace sessions -- which one should be used?

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

(This one could be stopped by this statement

SQL> ALTER SESSION SET SQL_TRACE = FALSE;
Session altered.)

====
OR
====

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

(How can we stop this one?)

If we want to trace another session, how can we do that? Or trace at the instance level?

Could you please help me understand this...

Many Many Thanks 

Tom Kyte
June 08, 2005 - 9:10 am UTC

if you want waits and binds, you need to use the second method (or dbms_system.set_sql_trace_in_session package or in 10g dbms_monitor)

ops$tkyte@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
Elapsed: 00:00:00.10
ops$tkyte@ORA10G> @traceoff
ops$tkyte@ORA10G> alter session set events '10046 trace name context off';
 
Session altered.
 


turn it on, turn it off


the packages turn it on/off for other sessions. 
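For the other-session case, the older and newer interfaces look roughly like this (the SID/SERIAL# values are placeholders you would read from v$session):

```sql
-- 9i and before: dbms_system (a direct execute grant from SYS is required)
exec sys.dbms_system.set_sql_trace_in_session( 7, 12055, true );
exec sys.dbms_system.set_sql_trace_in_session( 7, 12055, false );

-- 10g: dbms_monitor, with waits and binds (the level-12 equivalent)
exec dbms_monitor.session_trace_enable( session_id => 7, serial_num => 12055, waits => true, binds => true );
exec dbms_monitor.session_trace_disable( session_id => 7, serial_num => 12055 );
```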

Difference is confusing :(

Reader, June 16, 2005 - 5:00 am UTC

I was trying to read the tkprof-generated file and was looking at the help file, but the following was confusing:

a) My generated file had:

SELECT ALL PREMPROD.PM_DIVISION.DIVN_BL_NAME, PREMPROD.PT_POLICY.POL_NO,
PREMPROD.PT_POLICY.POL_CUST_CODE, PREMPROD.PT_POLICY.POL_ASSURED_NAME,
PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_2, PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_4,
PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_7, PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_8,
PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_9, PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_3,
PREMPROD.PT_POLICY.POL_NL_PERIOD_DESC
FROM PREMPROD.PM_DIVISION, PREMPROD.PT_POLICY_SCHEDULE, PREMPROD.PT_POLICY
WHERE (PREMPROD.PT_POLICY.POL_PDS_CODE IN ('080', '088', '089')
AND PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_3 LIKE '%278091758616%')
AND ((PREMPROD.PT_POLICY_SCHEDULE.PSC_POL_SYS_ID=PREMPROD.PT_POLICY.POL_SYS_ID)
AND (PREMPROD.PM_DIVISION.DIVN_CODE=PREMPROD.PT_POLICY.POL_DIVN_CODE))
ORDER BY PREMPROD.PM_DIVISION.DIVN_BL_NAME ASC,
PREMPROD.PT_POLICY.POL_NO ASC, PREMPROD.PT_POLICY.POL_CUST_CODE ASC

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.11 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 14.15 37.39 33811 122064 74 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 14.18 37.51 33811 122064 74 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows Row Source Operation
------- ---------------------------------------------------
0 SORT ORDER BY (cr=122064 r=33811 w=13642 time=0 us)
0 MERGE JOIN (cr=122064 r=33811 w=13642 time=0 us)
141823 SORT JOIN (cr=94982 r=13642 w=13642 time=0 us)
141823 MERGE JOIN (cr=94982 r=7696 w=7696 time=0 us)
11 TABLE ACCESS BY INDEX ROWID PM_DIVISION (cr=2 r=0 w=0 time=0 us)
11 INDEX FULL SCAN PK_PM_DIVISION (cr=1 r=0 w=0 time=0 us)(object id 31312)
141823 SORT JOIN (cr=94980 r=7696 w=7696 time=0 us)
141823 TABLE ACCESS FULL PT_POLICY (cr=94980 r=0 w=0 time=0 us)
0 SORT JOIN (cr=27082 r=20169 w=0 time=0 us)
0 TABLE ACCESS FULL PT_POLICY_SCHEDULE (cr=27082 r=20169 w=0 time=0 us)

********************************************************************************

ALTER SESSION SET SQL_TRACE = FALSE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.02 0 0 0 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 59



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.03 0.13 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 1 14.15 37.39 33811 122064 74 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 14.18 37.53 33811 122064 74 0

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 3 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 3 0 0

Misses in library cache during parse: 0

3 user SQL statements in session.
1 internal SQL statements in session.
4 SQL statements in session.
********************************************************************************
Trace file: orakic_ora_369130.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
3 user SQL statements in trace file.
1 internal SQL statements in trace file.
4 SQL statements in trace file.
4 unique SQL statements in trace file.
69 lines in trace file.


My questions are the following:

a) My index is not used, even though there is an index on:
Table: PT_POLICY_SCHEDULE
Index: IDX_PDS_F_3
Column: PSC_FIELD_3

b) My plan shows
Rows Row Source Operation
------- ---------------------------------------------------
0 SORT ORDER BY (cr=122064 r=33811 w=13642 time=0 us)
0 MERGE JOIN (cr=122064 r=33811 w=13642 time=0 us)
141823 SORT JOIN (cr=94982 r=13642 ......
...........................................

but the TKPROF example in the manual
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm#12126

It shows,
Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
13 NESTED LOOPS (OUTER)
14 TABLE ACCESS (FULL) OF 'EMP'

Why is it "Execution Plan" there, while mine is "Row Source Operation"?

c) After reading this plan, what should I look into, and what kind of methodology should I use to tune this statement?

I always read your suggestions on asktom whenever I want to do something, but this time I was really confused and was not able to find the difference. I hope you will really help me out.

Thank you in advance for your time.

Tom Kyte
June 16, 2005 - 9:55 am UTC

a) AND PREMPROD.PT_POLICY_SCHEDULE.PSC_FIELD_3 LIKE '%278091758616%')

why would you want to use an index on that? it would have to read the entire index anyway.

b)

because they used "explain=user/pass" which is an option you never want to use.

You have the real plan that was used at runtime.

c) ask yourself why you need to use a leading percent and if you really do need to use a leading percent in this case.
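The difference is easy to see with two predicates against the same column (queries are illustrative only):

```sql
-- no leading wildcard: an index range scan on IDX_PDS_F_3 is possible,
-- because all matching keys share the prefix '278091758616'
select count(*) from pt_policy_schedule where psc_field_3 like '278091758616%';

-- leading wildcard: the match can start anywhere in the value, so every
-- index key (or every table row) must be inspected
select count(*) from pt_policy_schedule where psc_field_3 like '%278091758616%';
```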

sorry forgot the Explain plan

Reader, June 16, 2005 - 7:40 am UTC

Rows Plan
---------- -------------------------------------------------
14617 SELECT STATEMENT
14617 SORT ORDER BY
14617 MERGE JOIN
13237 SORT JOIN
13237 MERGE JOIN
11 TABLE ACCESS BY INDEX ROWID PM_DIVISION
11 INDEX FULL SCAN PK_PM_DIVISION
13237 SORT JOIN
13237 TABLE ACCESS FULL PT_POLICY
41018 SORT JOIN
41018 TABLE ACCESS FULL PT_POLICY_SCHEDULE

tkprof insert=XXX.sql

A reader, July 19, 2006 - 8:26 am UTC

Hi

I was wondering how we can fix the sql text length problem when using the tkprof insert option.

I have sql statements as long as 12000 characters; when I execute the sql script generated by the tkprof insert=XXX.sql option, it fails with ORA-01704: string literal too long. I have tried in 8i and 9i.

Is it a bug?

Tom Kyte
July 19, 2006 - 1:24 pm UTC

it would be a bug, but not one that is fixed as far as I know.

tkprof and insert option

A reader, July 19, 2006 - 3:24 pm UTC

I see, it's a bug. That is really a pain. I am migrating databases from 8i to 9i and would like to store query statistics in a table to compare afterwards.

Do you know any other way, or do I have to write a parser to parse the tkprof output...?


Tom Kyte
July 22, 2006 - 4:16 pm UTC

what about using statspack - collects the sql, and the execution "profile" of the sql

tkprof and insert option

A reader, July 23, 2006 - 5:15 am UTC

Hi

I have statspack as well; however, I would like to store single-execution statistics, which is why I used tkprof.

I wrote a parser with awk 2 days ago, works well.

Cheers
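For anyone facing the same ORA-01704 limitation, a minimal sketch of such a parser: it pulls the cpu (c=) and elapsed (e=) fields out of PARSE/EXEC/FETCH lines of a raw trace file and emits CSV. The sample trace content is taken from lines quoted elsewhere in this thread; the file names and choice of fields are just for the demo, not the poster's actual awk script.

```shell
# Sketch: extract call type, cpu (c=) and elapsed (e=) from raw trace lines.
# Sample content copied from trace lines quoted in this thread.
cat > sample.trc <<'EOF'
PARSE #1:c=2000,e=1734,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=1133386352033001
EXEC #1:c=49660000,e=536293932,p=170509,cr=435877,cu=1523,mis=0,r=0,dep=0,og=4,tim=320427880224
EOF
awk -F'[:,]' '/^(PARSE|EXEC|FETCH) #/ {
    cpu = ""; elap = ""
    for (i = 2; i <= NF; i++) {
        if ($i ~ /^c=/) cpu  = substr($i, 3)
        if ($i ~ /^e=/) elap = substr($i, 3)
    }
    printf "%s,%s,%s\n", $1, cpu, elap    # call id, cpu, elapsed
}' sample.trc > parsed.csv
cat parsed.csv
```

From there the CSV can be loaded with SQL*Loader or an external table for before/after comparison.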

The most silly question on TKPROF

Arindam Mukherjee, September 11, 2006 - 11:02 am UTC

Respected Sir,

Today is the first time I have generated an output file using TKPROF following your instructions. Now please tell me how to delete or purge those trace files (NOT the tkprof output files); otherwise my trace files will eat up space on the server.
Please remember I don't have any special privilege to access any uncommon directory. So can I remove them through SQL?


Tom Kyte
September 11, 2006 - 11:11 am UTC

technically, you could use utl_file to erase them IF the dba sets it up for you (by creating a directory that points to the trace directory). But you would then have special privileges...
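For whoever does have OS access (typically the DBA), an OS-side cleanup is simpler than UTL_FILE. A sketch, with a stand-in directory name; the real location comes from the user_dump_dest parameter, and the seven-day cutoff is an arbitrary example:

```shell
# Sketch: remove *.trc files older than 7 days from a udump-style directory.
# The path is a stand-in for the demo; use "show parameter user_dump_dest".
UDUMP=./udump_demo
mkdir -p "$UDUMP"
touch "$UDUMP/ora_123.trc"                     # fresh file - should be kept
touch -d '10 days ago' "$UDUMP/ora_99.trc"     # old file  - should be removed
find "$UDUMP" -name '*.trc' -mtime +7 -exec rm {} \;
ls "$UDUMP"
```

Scheduled from cron, this keeps the trace directory from filling up without granting anyone extra database privileges.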

select generating redo

A reader, October 08, 2006 - 7:25 am UTC

Hi Tom,

i am confused about something, i am working with 2 sessions on the same table.

session 1:
SQL>  delete from test_assm ;

202 rows deleted.

Session 2:

SQL>  select * from test_assm ;

202 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=43 Card=502 Bytes=10
          05506)

   1    0   PARTITION RANGE (ALL)
   2    1     TABLE ACCESS (FULL) OF 'TEST_ASSM' (Cost=43 Card=502 Byt
          es=1005506)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        549  consistent gets
          0  physical reads
       3536  redo size
     409173  bytes sent via SQL*Net to client
        646  bytes received via SQL*Net from client
         15  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        202  rows processed

now i can see that the select has generated about 3536 bytes of redo; even if i repeat the select many times it gives the same result.

once i commit in session 1, i can select in the second session without generating any redo. could you please explain?

thanks 

Tom Kyte
October 08, 2006 - 7:39 am UTC

3.5k of redo? so? not seeing the point.

You are doing work to undo their changes, but for 3.5k of redo - I'm not really "very interested", it is nothing.

thanks

A reader, October 08, 2006 - 7:51 am UTC

Hi Tom,

yes, it isn't much, i am just surprised. i guessed that the undo work is done using the undo segment and that no redo logs should be needed.

will you please shed more light on this issue?

many thanks

Tom Kyte
October 08, 2006 - 9:26 am UTC

for 3-3.5k of redo, we are talking "peanuts, noise, nothing to look at".

TKPROF and execution plan

A reader, October 11, 2006 - 6:58 am UTC

Hi Tom,
i was trying to use tkprof as you discussed in chapter 10 of your book, but i am unable to view
the execution plan in the tkprof output file

here is what i tried

SQL> create or replace PROCEDURE tk_test(X_V_ID in number)
  2  is
  3  V_ID number;
  4  begin
  5   V_ID :=X_V_ID;
  6  update test_mssm set name = 'tesssssssssst' where id < V_ID;
  7  end;
  8  /

Procedure created.

SQL>  alter session set sql_trace = true ;

Session altered.

SQL> exec tk_test(50) ;

PL/SQL procedure successfully completed.

SQL> alter session set sql_trace = false ;

Session altered.

SQL>

=============================================TKPROF OUTPUT===========
BEGIN tk_test(50) ; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS
********************************************************************************
UPDATE test_mssm set name = 'tesssssssssst'
where
 id < :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00      37.96          0        152         93          49
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00      37.96          0        152         93          49

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
********************************************************************************

select file#
from
 file$ where ts#=:1
======================================Cut Here=============

as you can see, the update statement has no execution plan

can you please help in figuring that out ?

thanks 

Tom Kyte
October 11, 2006 - 8:15 am UTC

because you set sql trace = false. you only get the row source operation when the cursor is closed, and plsql caches cursors.

do this:

set sql trace on
do stuff
EXIT SQLPLUS


then tkprof.

usual quick

A reader, October 11, 2006 - 8:53 am UTC

Hi Tom,

thank you very much for the tkprof answer

you already mentioned in your book that you need to exit for the plsql cursors to close and the execution plan to appear, but i didn't understand it correctly - i set the trace off and then exited ;) sorry, it is my mistake

one more thing: you mentioned that the elapsed time will be in hundredths of a second (1/100 of a second). what database version is that? in one of your threads you mentioned that 8i will use 1/1000 and 9i will use 1/1000000

thanks

Tom Kyte
October 11, 2006 - 9:09 am UTC

the elapsed time as displayed in the tkprof report is reported to 1/100th of a second.

9.12

for example, is 9 and 12/100ths of a second.

i understood it now

A reader, October 11, 2006 - 9:19 am UTC

Hi Tom,
i got it now

what about the raw trace file

PARSE #1:c=2000,e=1734,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=1133386352033001

here c=2000 is the cpu time, but what is the unit? i am using oracle 9i. i guess it is 2000/1000000 sec, am i correct?

2) for the open cursors
ops$tkyte@ORA8I.WORLD> select * from my_stats where name = 'opened cursors current';

i guess that behaviour changed in 9i, am i correct? when i run this query on 9.2.0.4 it lists the same as v$open_cursor - it lists all my plsql cached cursors. is that normal?

thanks

Tom Kyte
October 11, 2006 - 3:50 pm UTC

it is tracked typically in 1/1000th of a second, but multiplied out to be 1/1000000th of a second (hence the zeros at the end)

not sure what you mean by the second.
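To put concrete numbers on the units question, here is a sketch converting the c= and e= fields of the PARSE line quoted above, assuming the microsecond (1/1000000 sec) unit discussed here:

```shell
# Sketch: convert c= (cpu) and e= (elapsed) of a raw trace line from
# microseconds to seconds (the unit assumed per the discussion above).
line='PARSE #1:c=2000,e=1734,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=1133386352033001'
echo "$line" | awk -F'[:,]' '{
    for (i = 2; i <= NF; i++) {
        if ($i ~ /^c=/) printf "cpu     = %.6f sec\n", substr($i, 3) / 1000000
        if ($i ~ /^e=/) printf "elapsed = %.6f sec\n", substr($i, 3) / 1000000
    }
}' > units.txt
cat units.txt
```

So under that assumption, c=2000 works out to 2 milliseconds of cpu time.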

set up trace file

Dawar Naqvi, November 04, 2006 - 10:33 am UTC

OS: IBM/AIX 5.2 Unix
DB: 10.2

Tom,

How to set up trace file?

We are running a third party application on our database servers.
One of their modules has some issues.

And the vendor wants us to set up a trace file.

Any feedback will be appreciated.


cheers,
Dawar


Tom Kyte
November 04, 2006 - 12:30 pm UTC

umm, you don't "set up" trace files, they just "are"

the only parameters to peek at would be

SQL> show parameter dump
 

is there way to know which session has trace turned on?

Yong Wu, November 17, 2006 - 4:05 pm UTC

Tom,

How to check which session has trace on?
I know the way of checking /udump/ora_{spid}, but it needs a callout to the OS. Is there a way to get it from inside the database?
And how long has the trace been running?

thanks

Tom Kyte
November 20, 2006 - 2:28 am UTC

not that I know of.

you'd have to look at the traces, they have the start time in them as well.

A reader, December 08, 2006 - 12:51 am UTC

thanks

identify error source from trace file

Sanji, February 14, 2007 - 3:01 pm UTC

Tom,
We are attempting to identify the source of "ORA-06502: PL/SQL: numeric or value error" during the execution of a package.

I traced the session that executed this package, but am unable to pinpoint the source of the problem.

From the trace file
.
.
FETCH #31:c=0,e=25,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=320427864131
FETCH #31:c=0,e=26,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=320427864243
FETCH #31:c=0,e=15,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427864344
EXEC #35:c=0,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427864566
FETCH #35:c=0,e=1168,p=1,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=320427865772
=====================
PARSING IN CURSOR #42 len=63 dep=1 uid=20 oct=3 lid=20 tim=320427867965 hv=147662276 ad='1e674d00'
SELECT *
FROM shift_pattern
WHERE shftpat_id = :b1
END OF STMT
PARSE #42:c=0,e=1074,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=0,tim=320427867950
EXEC #42:c=0,e=1253,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427869411
FETCH #42:c=0,e=583,p=1,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=320427870048
EXEC #21:c=0,e=54,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427870938
FETCH #21:c=0,e=482,p=1,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=320427871467
EXEC #21:c=0,e=25,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427872113
FETCH #21:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=320427872160
EXEC #1:c=49660000,e=536293932,p=170509,cr=435877,cu=1523,mis=0,r=0,dep=0,og=4,tim=320427880224
ERROR #1:err=20312 tim=32811814
*** 2007-02-14 13:49:09.279
=====================
PARSING IN CURSOR #1 len=33 dep=0 uid=5 oct=42 lid=5 tim=320557863856 hv=4238949625 ad='1e459e10'
alter session set sql_trace=false
END OF STMT
PARSE #1:c=0,e=622,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=320557863827
EXEC #1:c=0,e=137,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=320557864219

How do i identify the sql causing the error from the trace file ?

The tkprof of the trace doesn't provide any information pertaining to the error either.

Thanks
Sanji


Tom Kyte
February 14, 2007 - 3:26 pm UTC

sql trace might not tell you anything since you don't need to be doing SQL to get it.

but find out what cursor #1 was - you exec #1, and get an error on #1

it's the package execution

sanji, February 14, 2007 - 3:58 pm UTC

Tom,

PARSING IN CURSOR #1 len=51 dep=0 uid=5 oct=47 lid=5 tim=319891586162 hv=1303641171 ad='1e815ef8'
BEGIN workbrain.pkg_ur_interface.do_empload; END;
END OF STMT
PARSE #1:c=20000,e=74489,p=6,cr=66,cu=0,mis=1,r=0,dep=0,og=4,tim=319891586147

and that's the package we are trying to execute.

exec pkg_ur_interface.DO_EMPLOAD;
BEGIN pkg_ur_interface.DO_EMPLOAD; END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "WORKBRAIN.WBUTIL", line 176
ORA-06512: at "WORKBRAIN.PKG_EMPLOYEE", line 1250
ORA-06512: at "WORKBRAIN.PKG_EMPLOYEE", line 1362
ORA-06512: at "WORKBRAIN.PKG_EMPLOYEE", line 1505
ORA-06512: at "WORKBRAIN.PKG_EMPLOYEE", line 1529
ORA-06512: at "WORKBRAIN.PKG_UR_INTERFACE", line 1181
ORA-06512: at line 1

From dba_source, line 176 of the wbutil package:

172  FUNCTION UnloadTable RETURN VARCHAR2 IS
173    strValues VARCHAR2(4000);
174  BEGIN
175    FOR i IN 1..tabFields.COUNT LOOP
176      strValues := strValues||'"'||tabFields(i).FieldName||chrEq||tabFields(i).FieldValue||'"'||chrDel;
177    END LOOP;
178    strValues := SUBSTR(strValues,1,LENGTH(strValues) - 1);
179    RETURN (strValues);
180  END;

Thanks
Sanji
Tom Kyte
February 15, 2007 - 9:47 am UTC

you are building a concatenated string in a loop


for i in 1 .. N
loop
str := str || other_stuff
end loop;

I would assume that your string is too small

ops$tkyte%ORA9IR2> declare
  2          strValues varchar2(4000);
  3  begin
  4          for i in 1 .. 4001
  5          loop
  6                  strValues := strValues || 'x';
  7          end loop;
  8  end;
  9  /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 6


A slight confusion

Sanji, February 15, 2007 - 11:10 am UTC

Tom, The example works if we change 4001 to 4000

SANJI:FDCTL@ORADEV1>declare
2 strvalues varchar2(4000);
3 begin
4 for i in 1..4000 <---- if we change this from 4001
5 loop
6 strvalues := strvalues || 'x';
7 end loop;
8 end ;
9 /

PL/SQL procedure successfully completed.

Thanks
Sanji
Tom Kyte
February 16, 2007 - 10:55 am UTC

umm,

no kidding - that should be obvious.

I was trying to show what was going wrong in the code above - they are building a string like I was - and the string they built - didn't fit into the string they tried to put it into.

Got it

Sanji, February 15, 2007 - 5:35 pm UTC

I got it. Sorry for the stupid question.

Thanks
Sanji

Tracing from DBMS_SCHEDULER 10.2.0.3

Anindya, July 18, 2008 - 5:41 am UTC

Tom,

I have been trying to trace a job.
First, I tried using DBMS_MONITOR, but it did not work. Also I wanted it to be enabled and disabled from within the job; not sure if setting and resetting the Client Identifier in code logic can do the trick.

However, I tried all known methods, like SQL_Trace=true or dbms_system.set_ev, but nothing worked.
I could not find any relevant note on metalink.

Below is the test script:
It requires the EMP table in addition to the script.
---------------------------------
create table P_CONTROL (Activate_Ind varchar2(1), Last_SPID varchar2(20) );
/

Insert into P_CONTROL (Activate_Ind) values ('Y');
commit;
/*
begin
dbms_monitor.client_id_trace_enable(client_id=>'TEST_ID',binds=>false,waits=>true);
commit;
end;
/
*/

create or replace procedure PC_MONITOR
is
lv_emp emp.ename%type;
lv_sal emp.sal%type;
lv_SID varchar2(10);
lv_serial varchar2(10);
lv_control_val P_CONTROL.Activate_ind%type;

begin
select Activate_ind into lv_control_val
from P_CONTROL;
select sid, serial# into lv_SID, lv_serial from v$session where sid = (select sid from v$mystat where rownum=1);

update P_CONTROL set Last_SPID =
(select SPID from v$process
where addr = (select paddr from v$session where sid= lv_SID));
dbms_session.set_identifier('TEST_ID') ;
COMMIT ;
--DBMS_SUPPORT.start_trace(waits=>TRUE, binds=>FALSE);
sys.dbms_system.set_ev(lv_SID, lv_SERIAL, 10046, 8, '');
execute immediate 'alter session set sql_trace = true';
while lv_control_val = 'Y'
LooP

for i in (select ename, sal from emp)
loop
lv_emp := i.ename;
lv_sal := i.sal;
end loop;

select Activate_ind into lv_control_val
from P_CONTROL;

if lv_control_val = 'N' then
sys.dbms_system.set_ev(lv_SID, lv_SERIAL, 10046, 0, '');
execute immediate 'alter session set sql_trace = false';
end if;
End Loop;

end PC_MONITOR;
/


select sid, serial#, program, client_identifier, user, sql_trace, service_name, module, status from v$session where username ='SCOTT';

exec dbms_Scheduler.Create_Job_Class(Job_Class_Name => 'TEST_CLASS1',Service => 'xxx.xxx.com');

begin
--DBMS_SCHEDULER.DROP_JOB( JOB_NAME => 'TEST_CLASS1_JOB') ;
DBMS_SCHEDULER.CREATE_JOB( JOB_NAME => 'TEST_CLASS1_JOB',
JOB_CLASS => 'TEST_CLASS1',
JOB_TYPE => 'STORED_PROCEDURE',
JOB_ACTION => 'PC_MONITOR',
ENABLED => FALSE);
end;

exec DBMS_SCHEDULER.ENABLE('TEST_CLASS1_JOB');

select sid, serial#, program, client_identifier, user, sql_trace, service_name, module, status from v$session where username ='SCOTT';

update scott.p_control set activate_ind = 'N';
commit;

trcsess output=trace1.trc clientid=TEST_ID *

tkprof trace1.trc trace1.sql sys=NO explain = scott/PWD
-------------------------------------

The test did not generate any trace file in BDUMP or UDUMP.
Is there any setting that you think I am missing here?
Or could this be a bug? OS is RHEL 4.

Note:
----
1)
If I use exec DBMS_SCHEDULER.RUN_JOB('TEST_CLASS1_JOB');
then the trace is generated and my session itself runs the procedure.
But for the test given, I can see that j001 has created the session with the service and client ID properly set. But there is no trace file for j001 either.
2)
Trace file is available if I modify the
DBMS_SCHEDULER.CREATE_JOB as ENABLED => TRUE

thanks,

Tom Kyte
July 18, 2008 - 4:52 pm UTC

if j001 had a trace file open

and someone erased it (which only unlinks the file name, it doesn't really 'erase' anything)

then j001's trace file will be invisible to you until j001 is stopped and restarted.



ops$tkyte%ORA10GR2> create or replace procedure PC_MONITOR
  2  is
  3          l_x number := 5;
  4  begin
  5          dbms_monitor.session_trace_enable( waits=>true, binds=>true );
  6          dbms_session.set_identifier('TEST_ID') ;
  7          for x in ( select l_x data from dual )
  8          loop
  9                  null;
 10          end loop;
 11          dbms_monitor.session_trace_disable;
 12  end;
 13  /

Procedure created.

ops$tkyte%ORA10GR2> pause

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_Scheduler.Create_Job_Class(Job_Class_Name => 'TEST_CLASS1',Service => 'ora10gr2');

PL/SQL procedure successfully completed.


ops$tkyte%ORA10GR2> begin
  2  DBMS_SCHEDULER.CREATE_JOB( JOB_NAME => 'TEST_CLASS1_JOB',
  3                                  JOB_CLASS => 'TEST_CLASS1',
  4                                  JOB_TYPE => 'STORED_PROCEDURE',
  5                                  JOB_ACTION => 'PC_MONITOR',
  6                                  ENABLED => FALSE);
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec DBMS_SCHEDULER.ENABLE('TEST_CLASS1_JOB');

PL/SQL procedure successfully completed.




in the trace file, we find for j000 in this case

<b>*** CLIENT ID:(TEST_ID) 2008-07-18 16:35:50.786
=====================
PARSING IN CURSOR #3 len=26 dep=2 uid=344 oct=3 lid=344 tim=1187903662877460 hv=1410576239 ad='3b9ef46c'
SELECT :B1 DATA FROM DUAL
END OF STMT
PARSE #3:c=0,e=234,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1187903662877450
BINDS #3:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b72f86d8  bln=22  avl=02  flg=05
  value=5</b>
EXEC #3:c=1000,e=845,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1187903662878426
FETCH #3:c=0,e=41,p=0,cr=0,cu=0,mis=0,r=1,dep=2,og=1,tim=1187903662878500

Info needed :dbms_system.ksdfls

Mahalingesh, April 08, 2009 - 10:52 am UTC

Hi,

Could you please tell little more on dbms_system.ksdfls?

My question is - on Linux nodes, whenever i back up the alert log using mv or cp, the associated OS processes (i.e., lgwr, psp0, background processes...) still point to the original alert log.

Can you get all the OS processes to release the alert log by using
dbms_system.ksdfls?

Thanks
Mahalingesh
Tom Kyte
April 13, 2009 - 2:53 pm UTC

that just flushes the buffer.


but you should just

a) copy (not mv) the alert
b) rm it

then the processes will do the right thing. if you use lsof, you'll see it get into a status of (deleted), and then when the backgrounds go to write to it, they'll open a fresh one. You'll see the deleted status go away as the backgrounds write to the new one.
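The copy-then-remove approach above can be sketched on a stand-in file like this; on a real system the file would be alert_<SID>.log in the background dump destination, and the background processes open a fresh file on their next write:

```shell
# Sketch of copy-then-remove on a stand-in file name. A process holding
# the file open keeps writing to the unlinked inode until it reopens.
ALERT=alert_demo.log                    # stand-in for alert_<SID>.log
echo "old entries" > "$ALERT"
cp "$ALERT" "$ALERT.bak"                # a) copy (not mv) the alert
rm "$ALERT"                             # b) rm it
ls "$ALERT.bak"
```

The key point is that cp creates an independent backup, while mv would just rename the inode the backgrounds are still writing to.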

What is tim=

Ravi, March 11, 2011 - 6:13 am UTC

Tom,
EXEC #3:c=1000,e=845,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1187903662878426

What does the "tim=1187903662878426" tell?
Is that the elapsed time since the database had been up and running, or since some arbitrary time (Julian)?

The reason I am asking is, if I had TWO traces running (on 2 different sessions) at the same time, can I use the value on that field across the trace files, to design a Timeline of Events across the sessions?

Many thanks

Ravi
Tom Kyte
March 11, 2011 - 7:42 am UTC

it is the time from some arbitrary epoch in time. You can use the timestamp found in the trace files in various places (always at the top) to 'ground' a given value as a fixed time.

You can use that tim= field to derive a timeline between events across trace files - we ourselves do that all of the time with the trcsess utility.
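A sketch of grounding two sessions on one timeline using their tim= values; the trace lines here are fabricated for the demo, and tim= is assumed to be in microseconds, as discussed earlier in this thread:

```shell
# Sketch: compute the gap between one event in each of two trace files
# from their tim= values (assumed microseconds). Content is mock data.
printf 'EXEC #1:c=0,e=10,tim=1187903662878426\n' > sess1.trc
printf 'EXEC #5:c=0,e=10,tim=1187903664878426\n' > sess2.trc
t1=$(sed -n 's/.*tim=\([0-9]*\).*/\1/p' sess1.trc)
t2=$(sed -n 's/.*tim=\([0-9]*\).*/\1/p' sess2.trc)
awk -v a="$t1" -v b="$t2" 'BEGIN { printf "gap = %.6f sec\n", (b - a) / 1000000 }' > gap.txt
cat gap.txt
```

The timestamps at the top of each trace file can then anchor these relative gaps to wall-clock time, as described above.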

Explain plan

Prince, March 14, 2011 - 6:15 am UTC

I have a query like this running very badly...
SELECT DISTINCT a.name, 'Y', DECODE(to_char(a.START_DT, 'DD/MM/YYYY HH24.MI.SS'), NULL, b.status_date, to_char(a.START_DT, 'DD/MM/YYYY HH24.MI.SS'))
FROM TAB1 a,TAB2 b WHERE
a.acc_num = b.cust_id
AND b.success_flg = 'Y'
AND b.error_flg = 'N';

The below is the plan before and after creating a bitmap index on success_flg and error_flg.

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 111 | 136K| | | |
| 1 | SORT UNIQUE | | 1 | 111 | 136K| | | |
| 2 | HASH JOIN | | 1 | 111 | 136K| | | |
| 3 | TABLE ACCESS FULL | TAB2 | 1 | 33 | 19 | | | |
| 4 | REMOTE | | 20M| 1559M| 135K|E.,OM |SERIAL| |
-------------------------------------------------------------------------------------------------------

Note: cpu costing is off, 'PLAN_TABLE' is old version

-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 111 | 136K| | | |
| 1 | SORT UNIQUE | | 1 | 111 | 136K| | | |
| 2 | HASH JOIN | | 1 | 111 | 136K| | | |
| 3 | TABLE ACCESS BY INDEX ROWID | TAB2 | 1 | 33 | 1 | | | |
| 4 | BITMAP CONVERSION TO ROWIDS| | | | | | | |
| 5 | BITMAP INDEX SINGLE VALUE | FLAG_BIX | | | | | | |
| 6 | REMOTE | | 20M| 1559M| 135K|E.,OM |SERIAL| |
-----------------------------------------------------------------------------------------------------------------

Note: cpu costing is off, 'PLAN_TABLE' is old version


Please suggest me how to tune this. Thanks in advance.
Tom Kyte
March 14, 2011 - 8:16 am UTC

I would use a driving site hint perhaps to have this optimized at the remote site, the site that has the 20million records.

search for driving_site in the documentation to learn about this particular hint that gives the optimizer more information about how best to do its job.

Performance Issue.

John, March 14, 2011 - 6:18 am UTC

Tom,
Is there any performance issue if I use multiple commit statements inside a program?
Tom Kyte
March 14, 2011 - 8:20 am UTC

Here is a quote from Expert Oracle Database Architecture on this topic:

<quote>



Bad Transaction Habits


Many developers have some bad habits when it comes to transactions. I see this frequently with developers who have worked with a database that “supports” but does not “promote” the use of transactions. For example, in Informix (by default), Sybase, and SQL Server, you must explicitly BEGIN a transaction; otherwise, each individual statement is a transaction all by itself. In a similar manner to the way in which Oracle wraps a SAVEPOINT around discrete statements, these databases wrap a BEGIN WORK/COMMIT or ROLLBACK around each statement. This is because, in these databases, locks are precious resources, and readers block writers and vice versa. In an attempt to increase concurrency, these databases want you to make the transaction as short as possible—sometimes at the expense of data integrity.


Oracle takes the opposite approach. Transactions are always implicit, and there is no way to have an “autocommit” unless an application implements it (see the “Using Autocommit” section later in this chapter for more details). In Oracle, every transaction should be committed when it must and never before. Transactions should be as large as they need to be. Issues such as locks, blocking, and so on should not really be considered the driving forces behind transaction size—data integrity is the driving force behind the size of your transaction. Locks are not a scarce resource, and there are no contention issues between concurrent readers and writers of data. This allows you to have robust transactions in the database. These transactions do not have to be short in duration—they should be exactly as long as they need to be (but no longer). Transactions are not for the convenience of the computer and its software; they are to protect your data.


Committing in a Loop

Faced with the task of updating many rows, most programmers will try to figure out some procedural way to do it in a loop, so that they can commit every so many rows. I’ve heard two (false!) reasons for doing it this way:

o It is faster and more efficient to frequently commit lots of small transactions than it is to process and commit one big transaction.
o We don’t have enough undo space.


Both of these reasons are misguided. Furthermore, committing too frequently leaves you prone to the danger of leaving your database in an “unknown” state should your update fail halfway through. It requires complex logic to write a process that is smoothly restartable in the event of failure. By far the best option is to commit only as frequently as your business processes dictate and to size your undo segments accordingly.

Let’s take a look at these issues in more detail.


Performance Implications

It is generally not faster to commit frequently—it is almost always faster to do the work in a single SQL statement. By way of a small example, say we have a table, T, with lots of rows, and we want to update a column value for every row in that table. We’ll use this to set up such a table (run these four setup steps before each of the three cases below):

ops$tkyte%ORA11GR2> drop table t;
Table dropped.

ops$tkyte%ORA11GR2> create table t as select * from all_objects;
Table created.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> variable n number


Well, when we go to update, we could simply do it in a single UPDATE statement, like this:

ops$tkyte%ORA11GR2> exec :n := dbms_utility.get_cpu_time;
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> update t set object_name = lower(object_name);
71952 rows updated.

ops$tkyte%ORA11GR2> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||
 ' cpu hsecs...' );
126 cpu hsecs...


Many people—for whatever reason—feel compelled to do it like this—slow-by-slow/row-by-row—in order to have a commit every N records:

ops$tkyte%ORA11GR2> exec :n := dbms_utility.get_cpu_time;
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> begin
  2     for x in ( select rowid rid, object_name, rownum r
  3                  from t )
  4     loop
  5          update t
  6             set object_name = lower(x.object_name)
  7           where rowid = x.rid;
  8          if ( mod(x.r,100) = 0 ) then
  9             commit;
 10          end if;
 11     end loop;
 12     commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||
 ' cpu hsecs...' );
469 cpu hsecs...


In this simple example, it is many times slower to loop in order to commit frequently. If you can do it in a single SQL statement, do it that way, as it is almost certainly faster. Even if we “optimize” the procedural code, using bulk processing for the updates

ops$tkyte%ORA11GR2> exec :n := dbms_utility.get_cpu_time;
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> declare
  2      type ridArray is table of rowid;
  3      type vcArray is table of t.object_name%type;
  4
  5      l_rids  ridArray;
  6      l_names vcArray;
  7
  8      cursor c is select rowid, object_name from t;
  9  begin
 10      open c;
 11      loop
 12          fetch c bulk collect into l_rids, l_names LIMIT 100;
 13          forall i in 1 .. l_rids.count
 14              update t
 15                 set object_name = lower(l_names(i))
 16               where rowid = l_rids(i);
 17          commit;
 18          exit when c%notfound;
 19      end loop;
 20      close c;
 21  end;
 22  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||
 ' cpu hsecs...' );
170 cpu hsecs...

PL/SQL procedure successfully completed.


it is in fact much faster, but still much slower than it could be. Not only that, but you should notice that the code is getting more and more complex. From the sheer simplicity of a single UPDATE statement, to procedural code, to even more complex procedural code—we are going in the wrong direction! Furthermore (yes, there is more to complain about), the preceding procedural code is not done yet. It doesn’t deal with “what happens when we fail” (not if we but rather when we). What happens if this code gets halfway done and then the system fails? How do you restart the procedural code with a commit? You’d have to add yet more code so you knew where to pick up and continue processing. With the single UPDATE statement, we just reissue the UPDATE. We know that it will entirely succeed or entirely fail; there will not be partial work to worry about. We visit this point more in the section “Restartable Processes Require Complex Logic” below.

Now, just to supply a counterpoint to this discussion, recall in Chapter 7 “Concurrency and Multi-Versioning” when we discussed the concept of write consistency and how an UPDATE statement, for example, could be made to restart. In the event that the preceding UPDATE statement was to be performed against a subset of the rows (it had a WHERE clause, and other users were modifying the columns this UPDATE was using in the WHERE clause), then there would be a case either for using a series of smaller transactions rather than one large transaction or for locking the table prior to performing the mass update. The goal here would be to reduce the opportunity for restarts to occur.

If we were to UPDATE the vast majority of the rows in the table, that would lead us toward using the LOCK TABLE command. In my experience, however, these sorts of large mass updates or mass deletes (the only statement types really that would be subject to the restart) are done in isolation. That large, one-time bulk update or the purge of old data generally is not done during a period of high activity. Indeed, the purge of data should not be affected by this at all, since you would typically use some date field to locate the information to purge, and other applications would not modify this data.

</quote>

Driving_site Hint

Prince, March 17, 2011 - 12:37 am UTC

Hi Tom,
I used the driving_site hint and the query is now running very well.

Thanks a ton for your help and support.

Prince

Not enough Undo space

Uwe, April 21, 2011 - 7:47 am UTC

Dear Tom,

many moons ago I've learnt from your site how frequent commits affect performance and scalability and changed my coding habits (and evangelized others). Yet I've never seen you debunk the quote

> We don’t have enough undo space.

You state that it's misguided, but could you provide an alternative sceario - as you usually do?

E.g.: A data mart has 5 GB of undo space, which is sufficient for 98% of all writes. Now we load a fact table during an end-of-month batch (meaning an earlier load of smaller portions is not an option) with 100 GB of data in one single "insert ... select from <dblink>" and - of course - run out of undo.
I'd usually go down the road of bulk inserting instead of arguing with management about leasing costly disk space for the remaining 2% of the writes. Wouldn't you?

Cheers,
Uwe
Tom Kyte
April 25, 2011 - 8:11 am UTC

what do you mean by debunking an obvious issue?

"we don't have enough undo space"

"THEN CREATE MORE"

there, debunked.


which is sufficient for 98% of all
writes.


which means of course that it is INSUFFICIENT for 2% of the operations. And as long as it is insufficient for N% where N > 0, it is too small.


If you were inserting 100gb, you might consider insert /*+ append */ wouldn't you - which means you can skip undo generation for the table. Voila.
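A minimal sketch of such a direct-path load (assuming a hypothetical fact table FACT and a staging source over a database link called DW_LINK):

```sql
-- Direct-path insert: rows are written above the high-water mark,
-- so no undo is generated for the table data itself.
-- (Index maintenance still generates undo, and the session must
-- commit before it can query the table again - ORA-12838 otherwise.)
INSERT /*+ APPEND */ INTO fact
SELECT * FROM stage@dw_link;

COMMIT;
```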



Disk is cheap, our time is expensive. But even so, to load 100gb, you would be using append.


You could load it up using bulk processing and intermittent commits - but then you end up writing a ton more code that needs to be debugged and maintained over time (it has to be restartable too don't forget!!! and the system you are loading into has to be expecting wholly inconsistent data whilst you are doing this).

So, either size it right hardware wise or spend $$$$$ writing code all day.

tkprof

Manoj Kaparwan, April 22, 2011 - 6:55 am UTC

Tom
I have

table t1 with 1 million rows and it has 50 columns .
ID is one of the column and there are 1/2 million rows with ID=1

I need to tkprof the following

select * from t1 where id=1

How can I avoid the data flowing into the client (SQL*Plus), as I am just interested in the tkprof of the statement?

Is it ok to do below?

set timing on
set autotrace traceonly
alter session set events '10046 trace name context forever,level 12';
select * from t1 where id=1
alter session set events '10046 trace name context off';
exit



Tom Kyte
April 25, 2011 - 10:48 am UTC

yes, that works fine - that'll array fetch 15 rows at a time and not print the data.
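For the record, 15 is the SQL*Plus default array size; raising it reduces the number of fetch calls recorded in the trace (a sketch, reusing the table from the question):

```sql
set arraysize 500        -- fetch up to 500 rows per call instead of 15
set autotrace traceonly  -- execute the query but discard the rows
alter session set events '10046 trace name context forever, level 12';
select * from t1 where id = 1;
alter session set events '10046 trace name context off';
```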

tkprof.. contd

Manoj Kaparwan, April 22, 2011 - 8:05 am UTC

Tom
Continuing above question...

I used traceonly statistics.

It looks like in this case we don't have any recursive SQL, and hence no entry in tkprof either.

So would the following be the best way?




set timing on
set autotrace traceonly statistics
alter session set events '10046 trace name context forever,level 12';
select * from t1 where id=1
alter session set events '10046 trace name context off';
exit





Tom Kyte
April 25, 2011 - 10:50 am UTC

I don't know what you mean by "Looks in thie case we dont have any recursive SQLs and hence no entry in tkprof
also."

set autotrace traceonly will show the explain plan and the statistics after running the query - it'll be in the trace file.

But this method works as well.

Debunked

Uwe, April 29, 2011 - 11:41 am UTC

You wrote:
> "THEN CREATE MORE"
> there, debunked.

That was easy, thanks! :-)


> you might consider insert /*+ append */

Point taken. I should have constructed my example with UPDATE...

> Disk is cheap, our time is expensive.
True in many cases, but in a managed environment with disks mirrored across different locations, "cheap" becomes relative. As of today, we're talking $5 per GB per month, which I don't consider cheap.
Anyway, I'm 100% pro "keep it simple" and pro "do it in SQL instead of PL/SQL, wherever possible". Still, sometimes the more complex solution might be the more pragmatic one.

Kind regards,
Uwe
Tom Kyte
April 29, 2011 - 11:47 am UTC

If you had said "update", I would have said - if you are doing such a massive update that it generates gobs of undo - you probably meant to really do a CREATE TABLE AS SELECT - select the updated data and do a create table, then index it (if you need any indexes), then drop old and rename new.

That can be done without any undo and redo. And is likely faster than a huge update.
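The CTAS swap just described can be sketched like this (column list abbreviated for illustration; in practice the indexes, grants, constraints, and triggers on T would also have to be recreated before the swap):

```sql
-- Build the "updated" data as a brand new table. Direct-path CTAS
-- generates no undo for the table data, and NOLOGGING skips redo.
CREATE TABLE t_new NOLOGGING
AS
SELECT object_id,
       LOWER(object_name) AS object_name   -- the "update", applied on the fly
  FROM t;

-- Recreate any needed indexes on t_new here, then swap:
DROP TABLE t;
RENAME t_new TO t;
```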


tkprof

sam, April 30, 2011 - 8:09 am UTC


How to ensure that I am getting all the SQL's in tkprof

A reader, November 11, 2011 - 8:09 am UTC

Hello Tom,

I have an Oracle Apps report. When I trace it, I can see calls to 4 different procedures apart from a few SQL statements - so some SQL, then calls to a few stored procedures, and then again a few SQL statements. What I am not able to see in the trace file is the SQL from within those procedures. I am assuming those procedures have some SQL within them. Is there a way to get all the SQL from those procedures into the trace file output? As the tkprof output is large, I have not pasted it here.

Thanks
Tom Kyte
November 11, 2011 - 8:43 am UTC

Tell us - when is sql trace getting turned on?

If you

a) call a procedure and it runs some sql
b) then turn on trace
c) call that procedure again

You may not see that sql because the cursor would be cached open by plsql (you want it to be)

You need to get sql trace turned on sooner - rather than later.


Do you have access to ASH - which is always on?
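ASH here is Active Session History; a sketch of querying it for the top SQL over the last few minutes (note it requires the Diagnostics Pack license, and the time window below is illustrative):

```sql
-- Top SQL_IDs by sampled activity over the last 10 minutes.
SELECT sql_id, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 10/1440
   AND sql_id IS NOT NULL
 GROUP BY sql_id
 ORDER BY samples DESC;
```

Unlike sql trace, ASH is always on, so it captures SQL issued from inside procedures even when the cursors were opened before tracing would have started.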

Please comment

Sandy, August 28, 2012 - 5:48 am UTC

Please comment, why for below example single statement is slower than pl/sql block?

here is example...

SQL*Plus: Release 10.1.0.4.2 - Production on Tue Aug 28 18:24:45 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> drop table t;

Table dropped.

SQL> create table t as select * from all_objects;

Table created.

SQL> insert into t select * from t;

47156 rows created.

SQL> insert into t select * from t;

94312 rows created.

SQL> insert into t select * from t;

188624 rows created.

SQL> insert into t select * from t;

377248 rows created.

SQL> insert into t select * from t;

754496 rows created.

SQL> insert into t select * from t;

1508992 rows created.

SQL> COMMIT;

Commit complete.

SQL> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

SQL> variable n number

SQL> exec :n := dbms_utility.get_cpu_time;

PL/SQL procedure successfully completed.

SQL> update t set object_name = lower(object_name);

3017984 rows updated.

SQL> COMMIT;

Commit complete.

SQL> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||' cpu hsecs...' );
5877 cpu hsecs...

PL/SQL procedure successfully completed.

SQL> exec :n := dbms_utility.get_cpu_time;

PL/SQL procedure successfully completed.

SQL> DECLARE
2 TYPE ridarray IS TABLE OF ROWID;
3
4 TYPE vcarray IS TABLE OF t.object_name%TYPE;
5
6 l_rids ridarray;
7 l_names vcarray;
8
9 CURSOR c
10 IS
11 SELECT ROWID, object_name FROM t;
12 BEGIN
13 OPEN c;
14
15 LOOP
16 FETCH c
17 BULK COLLECT INTO l_rids, l_names
18 LIMIT 10000;
19
20 FORALL i IN 1 .. l_rids.COUNT
21 UPDATE t
22 SET object_name = LOWER (l_names (i))
23 WHERE ROWID = l_rids (i);
24
25 COMMIT;
26 EXIT WHEN c%NOTFOUND;
27 END LOOP;
28
29 CLOSE c;
30 END;
31 /

PL/SQL procedure successfully completed.

SQL> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||' cpu hsecs...' );
5656 cpu hsecs...

PL/SQL procedure successfully completed.

Tom Kyte
August 29, 2012 - 1:48 pm UTC

first, I would hardly call them different - 58 versus 56 seconds. Big deal.

second, the first update might have placed into the buffer cache everything the second one needed.


load two tables
update one
update the other.

rinse and repeat...

third - try to use tkprof to do things like this, you would have seen what was materially different (like physical IOs) between the two.


Also, you don't say whether you were the only game in town or competing for resources. perhaps when you ran the first one - someone else was doing something and you were competing for resources with them - spinning on a CBC latch for example.

But what I observed myself was different:



ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2> drop table t1;
ops$tkyte%ORA11GR2> drop table t2;
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t1 as select * from all_objects;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> insert into t1 select * from t1;
ops$tkyte%ORA11GR2> create table t2 as select * from t1;
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T1' );
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T2' );
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable n number
ops$tkyte%ORA11GR2> exec :n := dbms_utility.get_cpu_time;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> update t1 set object_name = lower(object_name);

4660928 rows updated.

ops$tkyte%ORA11GR2> COMMIT;

Commit complete.

ops$tkyte%ORA11GR2> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||' cpu hsecs...' );
6251 cpu hsecs...

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec :n := dbms_utility.get_cpu_time;

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> DECLARE
  2          TYPE ridarray IS TABLE OF ROWID;
  3          TYPE vcarray IS TABLE OF t2.object_name%TYPE;
  4          l_rids ridarray;
  5          l_names vcarray;
  6  
  7          CURSOR c
  8      IS
  9      SELECT ROWID, object_name
 10        FROM t2;
 11  BEGIN
 12          OPEN c;
 13  
 14          LOOP
 15                  FETCH c BULK COLLECT INTO l_rids, l_names LIMIT 10000;
 16  
 17                  FORALL i IN 1 .. l_rids.COUNT
 18                          UPDATE t2 SET object_name = LOWER (l_names (i)) WHERE ROWID = l_rids (i);
 19  
 20                  COMMIT;
 21                  EXIT WHEN c%NOTFOUND;
 22          END LOOP;
 23  
 24          CLOSE c;
 25  END;
 26  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec dbms_output.put_line( (dbms_utility.get_cpu_time-:n) ||' cpu hsecs...' );
12406 cpu hsecs...

PL/SQL procedure successfully completed.




Empty Trace File

KevB, October 16, 2012 - 1:48 am UTC

Hi Tom,

I have been trying to trace a user session for the purpose of locating a problem. The normal command we use is:

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(sid, serial#);


This works very well and tells us what we need to know although there is a margin for error involved when selecting the correct SID and SERIAL# where one user has multiple sessions open.

Because of that, I tried another approach:

Create an identifying entry in the trace file and alert log:
EXEC DBMS_SYSTEM.KSDWRT(3, 'Start Trace');


Set the client identifier;
BEGIN
DBMS_SESSION.SET_IDENTIFIER(client_id => 'smith');
END;
/


Tailor the tracefile name to make it easier to find;
EXEC DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('smith');

Once the user Smith has completed their task I disable the job and look at the tracefile with TKPROF and the following is returned;

TKPROF: Release 10.2.0.4.0 - Production on Tue Oct 16 08:33:29 2012

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Trace file: odb_ora_17499_SMITH.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
Trace file: odb_ora_17499_SMITH.trc
Trace file compatibility: 10.01.00
Sort options: default

0 session in tracefile.
0 user SQL statements in trace file.
0 internal SQL statements in trace file.
0 SQL statements in trace file.
0 unique SQL statements in trace file.
1 lines in trace file.
0 elapsed seconds in trace file.


The one line being referred to is the date and time that the trace was started.

My question is: why is this file empty? Using the previous command, the trace file generated was upwards of 44MB for the same trace, and my understanding is that because I am tracing all sessions for a given user, my file should be a lot bigger and certainly not empty!

I would be grateful if you could offer a small amount of insight into anything I might have done wrong.

Many thanks, as always, in advance.

Regards,

Kev
Tom Kyte
October 16, 2012 - 9:49 am UTC

give more details on how to exactly reproduce this.

You don't tell me in which sessions and really clearly in what order you are doing things in.


you would use EXEC DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('smith');


and then in various sessions issue
DBMS_SESSION.SET_IDENTIFIER(client_id => 'smith');


and do stuff. then you would disable tracing (client_id_trace_disable)

and then use trcsess to scan all of the trace files to look for "smith" stuff and create a brand new trace file.
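The trcsess step can be sketched as follows (run from the trace directory; the output file name and sort options are arbitrary choices):

```shell
# Consolidate all trace entries tagged with client id "smith"
# into a single trace file, then format it with tkprof.
trcsess output=smith_all.trc clientid=smith *.trc
tkprof smith_all.trc smith_all.prf sort=exeela,fchela
```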

Slight amendment to the above

KevB, October 16, 2012 - 1:52 am UTC

Sorry, I made a small mistake in the above question.

The tracefile was named using:
ALTER SESSION SET TRACEFILE_IDENTIFIER='SMITH';


The code I posted above started the trace running.

sqltrace vs autotrace

Biswaranjan, February 10, 2013 - 3:11 am UTC

Hi Tom,

I do use autotrace and sqltrace ,but I found something
which confused me.

My Question is,
Is it possible for the execution plan from sqltrace and from autotrace to differ for a particular query against the same data (same environment)?

Everytime I see plans in both (sqltrace and autotrace)
are same for the same query.

But what I found strange is that for a particular query, autotrace shows an incorrect plan but correct statistics.

And for the same query, sqltrace/tkprof shows the correct plan (which differs from autotrace's plan) and correct statistics.

But I remember one of your sentences on some other page(TKPROF DOES NOT LIE :)) and I like that.

Could you please tell whether that is possible ( I mean have
you come across like this above situation).

regards,
Biswaranjan.


Tom Kyte
February 11, 2013 - 10:01 am UTC

it is not only possible, it is common and frequent

http://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html



:) too good.

Biswaranjan, February 11, 2013 - 10:33 am UTC

I just went through it and looked for something related to my strange observation.

when I saw your sentence " Explain plan is blind to the bind",
I felt proud that I am on a man's page whose name is "Thomas Kyte".

I couldn't contain my feelings, so I wrote.

:), but now I will read that whole page.
Tom Kyte
February 12, 2013 - 7:30 am UTC

Blind to the bind ;) I didn't remember writing that - but it is a good heading

Abnormal trace file generation by parallel process

aliyar, March 11, 2013 - 12:18 am UTC

Hi Tom,

in our production database we have identified abnormal trace file generation by parallel processes. The trace files easily run into gigabytes, and they are generated very frequently.

Exadata: 8-node RAC primary and 8-node standby
the issue is with primary nodes 1 & 2 only

DB : 11.2.0.2
o/s : linux

no events/traces enabled


SQL> select distinct SQL_TRACE from gv$session;

SQL_TRAC
--------
DISABLED

SQL> 


SQL> show parameter event

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string
xml_db_events                        string      enable

SQL> show parameter trace

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_trace                    integer     0
sec_protocol_error_trace_action      string      TRACE
sql_trace                            boolean     FALSE
trace_enabled                        boolean     TRUE
tracefile_identifier                 string
SQL>



SQL> show parameter parallel

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     FALSE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     128
parallel_min_percent                 integer     0
parallel_min_servers                 integer     32
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     TRUE
parallel_server_instances            integer     8
parallel_servers_target              integer     384
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
SQL>


All looks fine - not sure where to start the investigation.

I find the below errors in the trace files:

kdstProcessSageData : unknown opcode - 40
kdstProcessSageData : unknown opcode - 40
kdstProcessSageData : unknown opcode - 40
kdstProcessSageData : unknown opcode - 40

can you please help me on this issue.

Thanks for your help

Thanks
Aliyar

Tom Kyte
March 11, 2013 - 9:36 am UTC

please utilize support for something like this, this is obviously a support issue.

Question - tkprof - recursive SQL

Ape, July 10, 2020 - 11:39 pm UTC

Hi,

Below is the summary of tkprof output


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        7      0.02       0.02          0        170          0           0
Execute      8      0.06       0.07          0         48          0           6
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15      0.08       0.09          0        218          0           6

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      324      0.06       0.07          0        186          0           0
Execute  56878      0.99       1.16         83        110       5852         208
Fetch    61325      3.83       4.10         38     251277          3      106599
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   118527      4.89       5.33        121     251573       5855      106807

From the above - given that our procedure contains only inserts - can we infer that the procedure is slow because of foreign key constraint checks while inserting into child tables?

Connor McDonald
July 11, 2020 - 5:58 am UTC

be aware that SQL statements issued inside a procedure will be deemed to be recursive (ie, recursive to the proc), eg

SQL> create or replace
  2  procedure MY_PROC is
  3  begin
  4  insert into t values (1);
  5  insert into t values (2);
  6  insert into t values (3);
  7  insert into t values (4);
  8  insert into t values (5);
  9  end;
 10  /

Procedure created.

SQL> alter session set sql_trace = true;

Session altered.

SQL> alter session set tracefile_identifier = PROC;

Session altered.

SQL> exec MY_PROC;


which gave a tkprof result of:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0         12           5
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       10      0.00       0.00          0          0         12           5



Apr, July 11, 2020 - 11:14 am UTC

But such high fetch values

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      324      0.06       0.07          0        186          0           0
Execute  56878      0.99       1.16         83        110       5852         208
Fetch    61325      3.83       4.10         38     251277          3      106599
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   118527      4.89       5.33        121     251573       5855      106807

inside a procedure having only INSERTs - these could be either queries against internal objects or fetches for enforcing referential integrity - right? (BTW, there are a few foreign keys on the child table we are inserting into.)
Connor McDonald
July 13, 2020 - 3:12 am UTC

Well that is only 1 fetch per execute. For all we know, you have 56000 inserts in there.

Apr, July 11, 2020 - 11:24 am UTC

How can we find out whether Foreign key constraint checks are slowing down any procedure ? Is that possible via tkprof report or any other tool in Oracle ? thanks
Connor McDonald
July 13, 2020 - 3:27 am UTC

You'll see it as an increase in current mode reads typically, eg

SQL> create table t1 as select rownum-1 x from  dual connect by level <= 1000;

Table created.

SQL> alter table t1 add primary key ( x );

Table altered.

SQL>
SQL> create table t2 ( id int, x int, constraint fk foreign key (x) references t1 (x ));

Table created.

SQL>
SQL> exec dbms_monitor.session_trace_enable(waits=>true);

PL/SQL procedure successfully completed.

SQL>
SQL> begin
  2  for i in 1 .. 200000
  3  loop
  4    insert into t2 values (i, mod(i,1000));
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> disc
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.5.0.0.0
SQL> conn &me
Connected.

SQL> alter table t2 modify constraint fk disable;

Table altered.

SQL> truncate table t2;

Table truncated.

SQL> exec dbms_monitor.session_trace_enable(waits=>true);

PL/SQL procedure successfully completed.

SQL>
SQL> begin
  2  for i in 1 .. 200000
  3  loop
  4    insert into t2 values (i, mod(i,1000));
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.5.0.0.0



When I look at the two tests I see

With FK

INSERT INTO T2 
VALUES
 (:B1 , MOD(:B1 ,1000))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 200000      3.78       3.87          2       1240     607335      200000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   200001      3.78       3.87          2       1240     607335      200000



Without FK
INSERT INTO T2 
VALUES
 (:B1 , MOD(:B1 ,1000))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 200000      2.81       2.88          2       1240     207053      200000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   200001      2.81       2.88          2       1240     207053      200000


A reader, July 16, 2020 - 9:14 pm UTC

thank you

Question - tkprof - recursive SQL

Apr, July 16, 2020 - 9:39 pm UTC

Couple of more questions - sorry to bother you again

But such high fetch values

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      324      0.06       0.07          0        186          0           0
Execute  56878      0.99       1.16         83        110       5852         208
Fetch    61325      3.83       4.10         38     251277          3      106599
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   118527      4.89       5.33        121     251573       5855      106807


In addition to the roughly 56,000 INSERT executions - we see that out of 4.89 seconds total, just 0.99 sec was used for INSERT and the remaining 4.1 seconds were used for FETCH. We don't have any fetch happening in any other part of the application. So my assumption is that this FETCH is the FK lookup - is that a correct assumption?

Another question - in our tkprof report, even though we have FK constraints, the INSERT into the child table shows stats only for the actual INSERT - nowhere does it show stats for the FK lookup, as you demonstrated. Only the summary shows the aggregated values, as pasted above. What could be missing here?
Connor McDonald
July 17, 2020 - 6:40 am UTC

So my assumption that this FETCH is FK lookup - is that correct assumption ?

No - look at my example. No fetch

Check for things like triggers and/or nested PL/SQL calls
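A quick sketch of how one might check for such triggers (assuming the inserts target a hypothetical table named CHILD_T):

```sql
-- Any triggers firing per row on the insert target would run their own
-- SQL, which shows up as recursive statements in the tkprof summary.
SELECT trigger_name, trigger_type, triggering_event, status
  FROM user_triggers
 WHERE table_name = 'CHILD_T';
```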

Question - tkprof - recursive SQL

Apr, July 16, 2020 - 9:43 pm UTC

A minor correction to the above: I mentioned CPU time instead of total elapsed time. Total elapsed time is 5.33 seconds, of which the INSERT as such took only 1.16 seconds and the rest was consumed by FETCH (4.1 seconds). From tkprof and dbms_output we know for sure that it is the INSERT which is taking 4 out of 5 seconds.

More to Explore

Performance

Get all the information about database performance in the Database Performance guide.