To trace sql statements in a trigger


Question and Answer

Tom Kyte

Thanks for the question.

Asked: January 02, 2003 - 9:53 pm UTC

Last updated: November 11, 2009 - 9:21 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked


I couldn't find other Q&A related to this, so I have to post it. Hope you can help me to solve this.

I would like to search for inefficient queries in a trigger on a production server. I think I have two choices to do this:
1. use tkprof
2. use set autotrace on and manually examine one by one

I couldn't use method 1 because we (the developers) don't have any access to the client's server to read the tkprof report.

Other than method 2, is there any other method to do that?



and Tom said...

Then the client is quite simply preventing you from doing the job they hired you to do???

Here is a solution that I'm proposing in the next book I'm working on -- I'll let you peek at it. This is the 817 way, using global temporary tables; there is an easier way using pipelined functions in 9i. We'll give client sessions -- regardless of platform or access to the database server's filesystem -- access to THEIR trace files via sqlplus. You'll set up a user:

create user trace_files identified by trace_files default tablespace users quota unlimited on users;

grant create any directory, /* to read user dump dest */
create session , /* to log on in the first place */
create table , /* used to hold users -> trace files */
create view , /* used so users can see what traces they have */
create procedure , /* to create the function that returns the trace data */
create trigger , /* to capture trace file names upon logoff */
administer database trigger /* to create the logoff trigger */
to trace_files;

/* these are needed to find the trace file name */
grant select on v_$process to trace_files;
grant select on v_$session to trace_files;
grant select on v_$instance to trace_files;

in that user, you'll execute:

create view session_trace_file_name
as
select d.instance_name || '_ora_' || ltrim(to_char(a.spid)) || '.trc' filename
  from v$process a, v$session b, v$instance d
 where a.addr = b.paddr
   and b.audsid = userenv('sessionid');

create table avail_trace_files
( username varchar2(30) default user,
  filename varchar2(512),
  dt       date default sysdate,
  constraint avail_trace_files_pk primary key(username,filename)
)
organization index;

create view user_avail_trace_files
as
select * from avail_trace_files where username = user;

grant select on user_avail_trace_files to public;

create global temporary table trace_file_text
( id   number primary key,
  text varchar2(4000)
);

grant select on trace_file_text to public;

create or replace directory UDUMP_DIR
as '/export/home/ora817/admin/ora817dev/udump';

note -- that is MY user dump dest; use the right one for your system, of course

create or replace trigger capture_trace_files
before logoff on database
begin
    for x in ( select * from session_trace_file_name )
    loop
        if ( dbms_lob.fileexists( bfilename('UDUMP_DIR', x.filename) ) = 1 )
        then
            insert into avail_trace_files (filename) values (x.filename);
        end if;
    end loop;
end;
/

create or replace procedure trace_file_contents( p_filename in varchar2 )
as
    l_bfile   bfile := bfilename('UDUMP_DIR',p_filename);
    l_last    number := 1;
    l_current number;
begin
    /* verify the caller is allowed to see this trace file */
    select rownum into l_current
      from user_avail_trace_files
     where filename = p_filename;

    delete from trace_file_text;
    dbms_lob.fileopen( l_bfile );
    loop
        l_current := dbms_lob.instr( l_bfile, '0A', l_last, 1 );
        exit when (nvl(l_current,0) = 0);
        insert into trace_file_text (id,text)
        values ( l_last,
                 utl_raw.cast_to_varchar2(
                     dbms_lob.substr( l_bfile, l_current-l_last+1, l_last ) ) );
        l_last := l_current+1;
    end loop;
    dbms_lob.fileclose( l_bfile );
end;
/

grant execute on trace_file_contents to public;

Now, a developer can use a script like this (I called it tklast):

column filename new_val f
select filename
  from trace_files.user_avail_trace_files
 where dt = ( select max(dt)
                from trace_files.user_avail_trace_files );

exec trace_files.trace_file_contents('&f')
set termout off
set heading off
set feedback off
set embedded on
set linesize 4000
set trimspool on
set verify off
spool &f
select text from trace_files.trace_file_text order by id;
spool off
set verify on
set feedback on
set heading on
set termout on
host tkprof &f tk.prf
edit tk.prf

and have a session like this:

SQL> alter session set sql_trace=true;
SQL> run some sql
SQL> connect user/password (to log out and log in)
SQL> @tklast

There you go -- that'll trace THEIR last session. You can modify tklast to suit your needs, but that's the idea.

Now access to the server's file system is moot; you do not need it.

TKPROF for the masses!






Reader, January 03, 2003 - 9:27 am UTC

Excellent way of getting TKPROF to developers

Perfect, but just one reminder

Dusan, January 03, 2003 - 12:17 pm UTC

I spent about three hours tuning this stuff on my database (Sun Solaris, SunOS 5.8). The problem was that the trace files were in lowercase:
-rw-r----- 1 oracle oinstall 1270 Jan 3 16:38 i42_ora_11628.trc

and the query returned the first part in uppercase:
select d.instance_name || '_ora_' || ltrim(to_char(a.spid)) || '.trc' filename
from v$process a, v$session b, v$instance d
where a.addr = b.paddr

Tom Kyte
January 03, 2003 - 12:51 pm UTC

hey -- thanks -- I'll make that change.

Robert, March 19, 2003 - 6:05 pm UTC

One more potential gotcha

Glenn A. Santa Cruz, May 15, 2003 - 11:21 am UTC

Tom, you may want to include (or at least mention) the parameter TRACEFILE_IDENTIFIER when constructing the tracefile name. Even though this is not supported on all operating systems (according to Oracle docs), it could certainly affect your script.

When multiple developers are simultaneously tracing, we use this parameter to help more easily identify trace files.
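For reference, the parameter can be set per session like this; the identifier string here is arbitrary and this is just an illustrative sketch, not part of the script above:

```sql
-- append a recognizable tag to this session's trace file name
-- (supported on most, but not all, platforms)
alter session set tracefile_identifier = 'glenn_test';
-- the trace file then becomes e.g. <instance>_ora_<spid>_glenn_test.trc,
-- which the session_trace_file_name view above would not match
```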

Tom Kyte
May 15, 2003 - 5:54 pm UTC

good point - i'll just add 'don't use tracefile identifier if you want this stuff to work'

"tkprof" trivia

A reader, May 15, 2003 - 6:43 pm UTC

Excellent solution.

Tom, can you tell us a little bit about the history of tkprof - why is it named as such? Thanks!


Mirjana, May 19, 2003 - 11:13 am UTC

tkprof a trace file from session which updates tables with triggers

A reader, August 23, 2003 - 3:09 am UTC


I am a bit confused about how to tkprof a trace file generated when triggers are being fired as well. I did a small test: create an emp table and a myobj table (as select * from dba_objects)

create or replace trigger after_upd_on_obj
after update on emp
for each row
begin
    update myobj set object_id=1;
end;
/

then run

update emp set deptno=20;

tkprof the session and saw this

update emp set deptno=20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0       1058      38649          12
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0       1058      38649          12


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     12      0.00       0.00          0       1469      42121       39804
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       13      0.00       0.00          0       1469      42121       39804


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0       1058      38649          12
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0       1058      38649          12


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      107      0.00       0.00          0          0          0           0
Execute    118      0.00       0.00          0       1803      42184       39867
Fetch       64      0.00       0.00          0        129          0          22
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      289      0.00       0.00          0       1932      42184       39889

I don't understand why update emp set deptno=20 generates so many current mode reads; the split between non-recursive and recursive doesn't make sense to me either. I guess the trigger is treated as recursive, right?

If we sum the current mode reads of update emp and update myobj we get

38649 + 42121 = 80770, which is not seen anywhere in the tkprof file.

I then tested again with the trigger disabled and got only 30 current mode reads for update emp, as follows:

update emp set deptno=20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1         30          12
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1         30          12


Tom Kyte
August 23, 2003 - 10:26 am UTC

Care to share the entire test case with us from scratch?

I could probably gen up the same example, but it's so much easier when you do what I do, which is show how to reproduce from square one!!


Paul, August 20, 2004 - 4:55 am UTC

Hi Tom,

I have a similar case where I need to find all SQL statements in ALL sessions of a schema.
What's the best way to collect the SQL statements in these sessions -- use trace files? use V$SQLAREA?

After collecting the SQL statements, I need to generate a plan text report on them.

As the Application Server may create a new session when the client application starts running, I need to keep track of the newly created sessions as well. What's a good approach to do this?

Many many thanks,

Tom Kyte
August 20, 2004 - 11:14 am UTC

In 9i, you can use dbms_fga to audit the sql performed by a "schema" (user).

use fine grained auditing to capture this.
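A minimal sketch of what such a policy might look like; the schema, table, and policy names here are hypothetical, and note that early releases of fine-grained auditing capture SELECT statements only:

```sql
-- hedged sketch, not a drop-in solution: audit every access to one table
begin
    dbms_fga.add_policy(
        object_schema   => 'APP_OWNER',        -- hypothetical schema
        object_name     => 'EMP',              -- hypothetical table
        policy_name     => 'CAPTURE_EMP_SQL',
        audit_condition => '1=1' );            -- capture every statement
end;
/

-- the captured statements can then be reviewed:
select timestamp, db_user, sql_text
  from dba_fga_audit_trail
 where policy_name = 'CAPTURE_EMP_SQL';
```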

What TKPROF stands for

Drew, September 15, 2004 - 10:50 am UTC

Transient Kernel Profile (or Profiler, take your pick)


Goran Paues, February 22, 2005 - 7:59 am UTC

Hi Tom,

When trying to implement this solution I got a problem with the Oracle supplied package DBMS_PCLXUTIL, which I use to rebuild indexes. I got this error:

ORA-20001: Build index failed. See trace file for more information

A look in the trace file revealed the following:

Sun Dec 19 02:27:25 2004
Errors in file /app/aix18/oracle/admin/billing/bdump/billing_j007_2609346.trc:
ORA-12012: error on auto execute of job 14236
ORA-00001: unique constraint (TRACE_FILES.AVAIL_TRACE_FILES_PK) violated
ORA-06512: at line 7
Sun Dec 19 02:27:25 2004
Errors in file /app/aix18/oracle/admin/billing/bdump/billing_j004_13963370.trc:
ORA-12012: error on auto execute of job 14246
ORA-00001: unique constraint (TRACE_FILES.AVAIL_TRACE_FILES_PK) violated
ORA-06512: at line 7
Sun Dec 19 02:50:02 2004

From what I can understand, all the jobs generated by dbms_pclxutil log off the database when done and thus fire the trigger capture_trace_files. But why are two different jobs trying to insert the same record into avail_trace_files?

Tom Kyte
February 22, 2005 - 8:53 am UTC

They get the same session id over and over though.

I would not suggest running this in "production". Your fix will be to put an exception handler in there to catch the dup_val_on_index error.
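A sketch of that fix applied to the logoff trigger from the original answer -- the only change is wrapping the insert in its own block:

```sql
create or replace trigger capture_trace_files
before logoff on database
begin
    for x in ( select * from session_trace_file_name )
    loop
        if ( dbms_lob.fileexists( bfilename('UDUMP_DIR', x.filename) ) = 1 )
        then
            begin
                insert into avail_trace_files (filename) values (x.filename);
            exception
                when dup_val_on_index then null;  -- file already recorded; ignore
            end;
        end if;
    end loop;
end;
/
```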

A reader, June 08, 2005 - 8:18 am UTC

Namrata, June 08, 2005 - 8:19 am UTC

9i Solution

Marcio Portes, October 04, 2005 - 9:00 am UTC

Tom, I am on the road and unfortunately I didn't bring your book with me. I know you wrote the same solution shown in your first answer, but using 9i. Could you share it with us? Thanks.

Tom Kyte
October 04, 2005 - 4:31 pm UTC

All of the code is available for download from that book's site.
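For readers without the book: the "easier way using pipelined functions in 9i" alluded to at the top can be sketched roughly like this. This is an assumption-laden sketch, not the book's actual code; it reuses the UDUMP_DIR directory created in the original answer:

```sql
create or replace type vc_array as table of varchar2(4000);
/
create or replace function trace_file_lines( p_filename in varchar2 )
return vc_array
pipelined
as
    l_file utl_file.file_type;
    l_line varchar2(4000);
begin
    l_file := utl_file.fopen( 'UDUMP_DIR', p_filename, 'r' );
    loop
        begin
            utl_file.get_line( l_file, l_line );
        exception
            when no_data_found then exit;   -- end of file
        end;
        pipe row ( l_line );                -- stream each trace line to the caller
    end loop;
    utl_file.fclose( l_file );
    return;
end;
/
-- usage: select * from table( trace_file_lines('&f') );
```

This avoids the global temporary table entirely; the trace file is read and returned row by row.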

udump path,

A reader, March 16, 2006 - 5:01 pm UTC

In the beginning of this thread you created your own UDUMP directory. Can I create the UDUMP directory on my PC instead on the server where Oracle runs?

A lot of times a developer may not have access to the box, and we may not be allowed to store our trace files anywhere on that box.

Tom Kyte
March 17, 2006 - 4:58 pm UTC

the file system must be writeable by the OS process running on the server - the database server needs visibility to that file system.

the above example (way at the top) shows one method to get the trace file using sqlplus.

the UDUMP directory I created above - it was NOT telling oracle where to put the files, I was pointing to the directory that oracle was already placing these files on the server - so we could read them.

tkprof of a different session,

A reader, March 22, 2006 - 2:39 pm UTC

I need to tkprof for a different session. That session may or may not have sql_trace set ON, but I think I can use dbms_system package to trace any session.

However, my issue is, say if an automated job runs at 4:00am, I want to trace that session.

How do I achieve that?
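For reference, the dbms_system call alluded to above looks roughly like this; the sid and serial# values are placeholders, taken from v$session:

```sql
-- turn sql tracing on in another, already connected, session
exec dbms_system.set_sql_trace_in_session( 1234, 56789, true );

-- ... and off again when done
exec dbms_system.set_sql_trace_in_session( 1234, 56789, false );
```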


Tom Kyte
March 22, 2006 - 5:12 pm UTC

short of sitting there and waiting for it?

You would have the automated job enable tracing when it starts typically.

If the job has a well known username in the database and nothing else is running by that user, you might be able to use a logon trigger

Additionally, if the program name is set by the client application in v$session, you could use a logon trigger looking for that program name - every time something logs in, query v$session for it's row and see if you want to execute the sql trace statement.
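A minimal sketch of such a logon trigger; the account name is a placeholder to adapt, and this assumes nothing else logs in as that user:

```sql
-- hedged sketch: enable 10046 trace for one well-known batch account at logon
create or replace trigger trace_batch_jobs
after logon on database
begin
    if user = 'BATCH_USER'   -- hypothetical job account
    then
        execute immediate
           'alter session set events ''10046 trace name context forever, level 12''';
    end if;
end;
/
```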

tkprof of a different session,

A reader, March 22, 2006 - 6:44 pm UTC

Well, enabling the trace within the application is not so easy in our shop; it needs a code change.

So you are saying I need a crontab job that runs every minute (for example) to check whether my USER and my PROGRAM, or any other information that can identify the application session, has made an entry in v$session.

Once it extracts the SID and SERIAL#, Oracle starts tracing it (I think the tracing automatically stops after the application terminates).

If there are quite a few trace files generated in that window, how do I figure out which one I am interested in?


Tom Kyte
March 22, 2006 - 7:16 pm UTC

I think I talked about logon triggers too??

logoff trigger used with trace

A, October 14, 2009 - 1:31 am UTC


At the very start of this post you suggested using LOGOFF trigger - with Oracle 817.

I am working with 10gR2.0.3 and I implemented LOGON (after) and LOGOFF (before) triggers to capture specific users' sessions for tracing.

I have tested the use of DBMS_APPLICATION_INFO to customise the package so that specific modules/actions would be traced for an instrumented plsql code.

The logoff trigger is there to catch the contents of a plsql table that stores frequencies of access to certain procedures I need to know about - how many per session, and within which particular module/action such subprograms execute. All of the above I tested and it WORKS!!!

The only snag that I encounter is that the number of times LOGON trigger fires (and writes to a TRACELOG table) is higher than the number of times the LOGOFF trigger fires to complete gathering all needed info.

As an example: say the LOGON trigger fires and leaves a record 1000 times; the LOGOFF sibling does so only 924 times.

The application is running ETL process for many hours and it is scheduled with Tivoli - and I cannot imagine sensible reasons for LOGOFF trigger to "miss" these exit points.

As I collect sid+serial#+SCN + other data into TRACELOG table, I am able to verify that the sessions started (as recorded by LOGON trg) are no longer present in v$session although records that LOGOFF trg should insert are not there.

Is there any special case where LOGOFF trg may miss termination of a session..?

Best regards
Tom Kyte
October 23, 2009 - 1:15 pm UTC

if you get disconnected and do not have a clean logoff - you will not see the logoff trigger fire.

kill a session
firewall rule that disconnects you
client disappears (dies)

trigger used with trace

A, October 21, 2009 - 10:31 pm UTC


I was hoping to get this LOGOFF trigger issue resolved - please see above.

It seemed to be the most logical solution - but it must be reliable.

Please help


other reasons for LOGOFF trigger missing

A, October 25, 2009 - 12:13 am UTC


I was running the complex ETL with another guy - we were the only 2 people connected to the system - late Sunday night.

I cannot imagine any sessions getting killed as you suggested could happen.
Tivoli waits for a particular module to finish and goes on to another as per the schedule.

What could be OTHER POSSIBLE explanations for LOGOFF trigger to miss certain sessions getting disconnected - OTHER than a kill ..?

And if we assume the possibility that a session may get killed - would there be a way to TRAP THAT? So even if the LOGOFF trigger does not fire (as you explained in one possible scenario), could an ALTER SYSTEM ... with the "kill" option itself be trapped???

Thank you
best regards
Tom Kyte
October 26, 2009 - 1:59 pm UTC

dead sessions, that would be the cause.

if you get killed, there is no way to catch that, you are dead - gone - zippo - made to disappear.

reasons for LOGOFF trigger not to fire

Andreas, October 26, 2009 - 1:45 pm UTC

Hello Tom,

You provided me with these possible scenarios:

kill a session
firewall rule that disconnects you
client disappears (dies)

I checked the environment and the ETL application:
(1) Nobody touched the system - hence nobody could kill any session + there are no procedures to do so either.
(2) an ORA-600 would show up in the alert log, would it not? We checked - and there has been NONE
(3) there are no clients connected - all processes are run by Tivoli, which in turn submits UNIX shell programs invoked from the same machine - no client, no NET issue
(4) No firewall - as per (3) above; it is a UAT platform and there are no firewalls - NONE.

Our issue is to collect reliably 2 things:
(a) the full trace file 10046 which ends normally and when it finishes the STAT rows provide important info - so LOGOFF trigger was meant to turn 10046 OFF
(b) collect the final results showing frequencies of execution of certain packaged programs, using a customized dbms_application_info that gathers such stats in a plsql array (showing which combo of package_name + procedure (module + action) executed during this run, as some of them are executed excessively). Again the LOGOFF trigger appeared to be a sensible point for collecting these stats.

So if one cannot rely on LOGOFF trigger, what would you suggest..?

Naturally we may use the custom version of app_info to dump such frequencies every time it reaches Mod(x,1000)=0
Could you think of a better solution if not LOGOFF trigger?
Tom Kyte
October 26, 2009 - 3:30 pm UTC

(3) shells can get signals that kill them, easily.

(a) you do not want to turn off 10046 trace, you just want to exit normally, if you turn off tracing before you logoff - you can and will LOSE STAT RECORDS. It would be self defeating (your goal would be subverted by your approach). The way to get the stat records written to a trace file (pre 11g) is to either 1) close the cursor or 2) exit your session.

... Naturally we may use the custom version of app_info to dump such frequencies
every time in reaches Mod(x,1000)=0 ...

if you are executing things 100's of thousands of times, this sample (without a logoff trigger) sounds closer than close enough for measuring this.

In 10g - we have ASH and AWR - they are all based on sampling data - every few seconds we see what is going on and save to disk important stuff every hour. It is all very sample based.
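For example, in 10g a quick, sample-based look at what a session has been doing needs no tracing at all; the sid below is a placeholder:

```sql
-- what has session 1234 been running and waiting on recently?
select sample_time, sql_id, event
  from v$active_session_history
 where session_id = 1234
 order by sample_time;
```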

Trace on/off - not to use LOGOFF trigger

A, October 26, 2009 - 11:46 pm UTC


Many thanks for this explanation. I have always believed that full trace details require proper scoping of WHAT needs to be traced. Accordingly my first approach was to use LOGON and LOGOFF triggers, which would identify a session to be traced as originating from the ETL suite and not some other user, and then collect all trace data with 10046 context forever, level 12, to be terminated with 10046 'context off' as the very last thing done before logoff. I DO ACCEPT your advice - but now I am wondering what the real use of 'context off' is, if we lose STAT records and should rely on the session's graceful EXIT? Or am I missing something...

However, since our initial instrumentation effort we have realized that some procedures that execute millions of times should be excluded - so I am coming up with a custom version of DBMS_APPLICATION_INFO. This will run against an array that contains all packaged procedures (thousands), of which only a selected number will be traced at any given ETL run; in addition, the trace may be suspended while some of these repeating procedures are invoked. This will be accomplished by resetting the module/action pairs via APP INFO.

Your response therefore generated additional angle that I need to consider - so specifically my questions now are:
(a) if, upon reentry to a frequently executing sub-program, the mod/act is reset (and tracing accordingly suspended) until the exit from the sub-program, upon which the mod/act pair is set back to the calling program that I want traced - is that good, i.e. will it provide us with a complete set of trace data as per the selective settings?
(b) what is the best approach to ensure that the STAT records do not get lost in the case where, for some reason, the session stays idle for hours beyond the ETL finish line - the ETL failed to disconnect the session, but tracing itself stopped as per mod/act now being reset to NULLs?
(c) conversely, what is the best approach to ensure that as much as possible is saved - STAT records included - if a session dies for some reason?

Thank you
Regards - Andreas

Tom Kyte
October 27, 2009 - 11:34 am UTC

you stand a GOOD CHANCE of losing stat records by turning sql trace off!!!!
you stand a GOOD CHANCE of losing stat records by turning sql trace off!!!!
you stand a GOOD CHANCE of losing stat records by turning sql trace off!!!!
you stand a GOOD CHANCE of losing stat records by turning sql trace off!!!!
you stand a GOOD CHANCE of losing stat records by turning sql trace off!!!!

have you investigated dbms_monitor? It is designed to do this selective tracing you ask - it can enable tracing by sqlnet connection string, by user, by individual physical session or even logical (application server) session ids.

If you use a tnsconnect string for example that connects using a service name (your ETL processes would use this connect string) you could have tracing enabled (or not) for them automagically - no code.
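A sketch of both forms; the service name, sid, and serial# values here are placeholders:

```sql
-- trace everything connecting via a given service name
exec dbms_monitor.serv_mod_act_trace_enable( service_name => 'ETL_SVC', waits => true, binds => true );

-- or trace one physical session by sid/serial#
exec dbms_monitor.session_trace_enable( session_id => 1234, serial_num => 56789, waits => true, binds => true );
```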

(a) you should get trace data for most all things unless the subroutine causes a cursor to be closed. plsql caches open cursors using session_cached_cursors/open_cursors. If you exceed the plsql cursor cache, it'll close a cursor on you for real (normally, plsql does NOT close them - the stat records are NOT written out until they are REALLY closed - and if tracing is not on when they are really closed, you'll lose that)


declare
    l_sql long := 'create or replace procedure p2 as begin ';
begin
    l_sql := l_sql || 'execute immediate ''alter session set sql_trace=false'';';
    for i in 1 .. 500 loop
        l_sql := l_sql || ' for x in (select * from dual d' || i || ') loop null; end loop;';
    end loop;
    l_sql := l_sql || 'execute immediate ''alter session set sql_trace=true''; end;';
    execute immediate l_sql;
end;
/
create or replace procedure p1( p_call_p2 in boolean default true )
as
begin
    for x in (select * from all_users) loop null; end loop;
    if ( p_call_p2 ) then p2; end if;
end;
/
connect /
alter session set sql_trace=true;
exec p1;
@tk "sys=no"

alter session set sql_trace=true;
exec p1(false);
@tk "sys=no"

my @tk script is:

column trace new_val TRACE

select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  from v$process a, v$session b, v$parameter c, v$instance d
 where a.addr = b.paddr
   and b.audsid = userenv('sessionid')
   and = 'user_dump_dest';

!tkprof &TRACE ./tk.prf &1
connect /
edit tk.prf

it disconnects (closes all cursors) and then tkprofs.

The first run has a tkprof of:



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0        149          0          46
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0        149          0          46

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 563     (recursive depth: 1)

Notice -- no STAT records, no Row Source Operation. That is because P2 opened so many cursors that P1's cursor was really closed while tracing was off.

however, the second run tkprof is:



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0        149          0          46
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0        149          0          46

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 563     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
     46  HASH JOIN  (cr=149 pr=0 pw=0 time=7575 us)
     46   HASH JOIN  (cr=78 pr=0 pw=0 time=3577 us)
     46    TABLE ACCESS FULL USER$ (cr=7 pr=0 pw=0 time=601 us)
     67    TABLE ACCESS FULL TS$ (cr=71 pr=0 pw=0 time=2707 us)
     67   TABLE ACCESS FULL TS$ (cr=71 pr=0 pw=0 time=750 us)


NOTE: this is true for 10gR2 and before only, in 11g - the tracing is different - the stat records get written out for each execution of the cursor - the stat records would be there in 11gR1 and above for the above example....

(b) have tracing on the entire time, just exit.

(c) see (b), same answer.

Flushing of STAT lines by turning off sqltrace prematurely

Another Reader, November 02, 2009 - 2:06 pm UTC

Hi Tom,

thank you for your great site.

I found by testing, both on Windows and on different flavours of Unix, and by talking to Oracle support, that

alter session set session_cached_cursors = 0;

flushes out STAT lines.

Anything else may lose STAT lines (and has, in my past experience) if you have session_cached_cursors > 0 *and* you do not use all your cache slots.
Tom Kyte
November 09, 2009 - 12:39 pm UTC

... flushes out STAT lines....

Not in all cases. Not in many cases. A well-written application outside of plsql will cache cursors itself, and it doesn't obey session_cached_cursors (and I would think that session_cached_cursors would only govern the plsql cursor cache in later releases - before that, plsql used open_cursors to size the cache it used).

I would not do this, we've been down this path

"(b) have tracing on the entire time, just exit. "

that is, will be, my answer.

application_info customized vs MONITOR

Andreas, November 04, 2009 - 8:08 am UTC

Dear Tom,

First off - thank you for making me aware of losing STAT records from the trace when a LOGOFF trigger is used.

As far as dbms_monitor -
- Yes I have researched and used dbms_monitor and worked with it.

And in the interim period, to get some packages traced, we deploy dbms_monitor, passing a unique key as an identifier so I can track back to the code easily.

My motivation for coding instrumentation into the existing packaged ETL procedures was to achieve better granularity.

Here is a link to a white paper that I had found:

and this approach had caught my interest.

When I use MONITOR it will be for the entire session which is kicked off from a UNIX sh scheduled by TIVOLI.

I have tested setting an identifier for the session in this script - dbms_session sets it as the first step, then the ETL procedure is executed and the identifier is cleared.
Meanwhile MONITOR had already been configured to catch a session with such an identifier and all works.


However, with this set-up I am unable to limit the scope of the trace by skipping the procedure that executes very frequently (the number of executions is in the millions).

Ideally I would want to run the trace as a test to measure frequencies of operations ONLY.

Then I enter the names of sub-programs into a control table - those I want to skip while tracing everything else within the packaged procedure. To do this I developed a customized version of APPLICATION_INFO that loads plsql tables with the procedure names I want to trace and the subs I want to SKIP. The first step (the process-frequency count) will tell me which procs to mark as SKIP, and then the instrumented code will function this way:

1. module/action are set and tracing is ON as they are picked up by application_info
2. a sub-program enters execution and goes into a loop 200,000,000 times; before it starts, the instrumented first line calls APIN (the modified application_info), saves the mod/act pair from the main program, clears it with different values, and reinstates the saved pair upon EXIT

This way I end up with much smaller trace and reduce its impact.

Your thoughts...?

Andreas wishing you all the best
Tom Kyte
November 11, 2009 - 9:21 am UTC

I wish you would look at AWR and ASH

It samples
It gets you more than you need
It is done (it works, it exists, it is not a hacked together thing)

That is my final thought here - you are reinventing something that already exists.
