Jim, August 06, 2001 - 9:11 am UTC
Excellent!
The stored procedure method is exactly how I allow
my developers to tkprof and obtain their trace results.
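For readers who have not seen that approach: a minimal sketch (names are hypothetical, not from the original thread) of a DBA-owned wrapper that lets developers turn tracing on without any server access might look like:

```sql
-- Hypothetical sketch: a definer-rights procedure owned by a privileged
-- account, so developers can enable SQL tracing without needing
-- ALTER SESSION privileges or access to the server itself.
create or replace procedure trace_on
as
begin
    execute immediate 'alter session set sql_trace=true';
end;
/
grant execute on trace_on to developer_role;
```

A matching procedure would then read the resulting trace file back for them, as the load_file examples later in this thread show.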
Trace files location
Dave Brys, April 08, 2002 - 2:40 am UTC
I like the idea , but I ran into some practical problems.
I used your procedure load_file that you gave as an example to load in an ascii file.
When executing load_file
ERROR at line 1:
ORA-20007: ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at "GDC.LOAD_FILE", line 74
ORA-06512: at line 4
I suspect this is because l_comments is a LONG variable in the procedure, but in the table it is a VARCHAR2(4000).
I then changed the comments column in the trace table to a LONG.
My function load_file is:
function load_file( p_dir      in varchar2,
                    p_filename in varchar2 )
return number
is
    l_input    utl_file.file_type;
    l_buffer   long;
    l_comments long;
    l_id       number;
    l_cnt      number default 0;
begin
    p('for loop');   -- p is my debug-print helper
    l_input := utl_file.fopen( p_dir, p_filename, 'r' );
    loop
        p('in loop');
        begin
            utl_file.get_line( l_input, l_buffer );
        exception
            when no_data_found then exit;
        end;
        if length(l_comments) + length(l_buffer) > 32000 then
            insert into trace ( id, comments )
            values ( l_id, rtrim(ltrim(l_comments,'~'),'~') );
            l_comments := '';
        end if;
        l_comments := l_comments || l_buffer || chr(10);
    end loop;
    p(l_comments);
    insert into trace ( comments ) values ( l_comments );
    utl_file.fclose( l_input );
    return l_cnt;
exception
    when utl_file.invalid_path then
        raise_application_error(-20001,
            'INVALID PATH: File location or filename was invalid.');
    when utl_file.invalid_mode then
        raise_application_error(-20002,
            'INVALID MODE: The open_mode parameter in FOPEN was invalid.');
    when utl_file.invalid_filehandle then
        raise_application_error(-20003,
            'INVALID OPERATION: The file could not be opened or operated on as requested.');
    when utl_file.read_error then
        raise_application_error(-20004,
            'READ_ERROR: An operating system error occurred during the read operation.');
    when utl_file.write_error then
        raise_application_error(-20005,
            'WRITE_ERROR: An operating system error occurred during the write operation.');
    when utl_file.internal_error then
        raise_application_error(-20006,
            'INTERNAL_ERROR: An unspecified error in PL/SQL.');
    when others then
        raise_application_error(-20007, sqlerrm);
end;
This works OK, and my trace table is filled with one row. But then you say to "spool" the contents of trace to a file; I used the SQL*Plus spool command for this.
If I then run tkprof on the "spooled" trace file, it gives me a Dr. Watson error. I can still run tkprof on the original trace file.
I can send you an example of the "spooled" trace file if you want me to.
Do you have any clue?
Best regards
Dave Brys
April 08, 2002 - 8:44 am UTC
You made the load program far too complex. You must be corrupting the trace file in some fashion. You just want to read a line, insert a line. Don't bother trying to glue them together into a big line 32k at a time.
Here is a complete example showing this at work:
ops$tkyte@ORA817DEV.US.ORACLE.COM> create global temporary table trace_tab
2 ( seqno int primary key,
3 text varchar2(4000)
4 )
5 on commit delete rows
6 /
Table created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace procedure load_file( p_dir in varchar2 ,
2 p_filename in varchar2 )
3 is
4 l_input utl_file.file_type;
5 l_buffer long;
6 l_comments long;
7 l_id number;
8 l_cnt number default 0;
9 begin
10 l_input := utl_file.fopen( p_dir, p_filename, 'r', 4000 );
11 for recno in 1 .. 1000000
12 loop
13 begin
14 utl_file.get_line( l_input, l_buffer );
15 exception
16 when no_data_found then exit;
17 end;
18 insert into trace_tab values ( recno, l_buffer );
19 end loop;
20 utl_file.fclose( l_input );
21 end;
22 /
Procedure created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> column path new_value path
ops$tkyte@ORA817DEV.US.ORACLE.COM> column file_name new_value file_name
ops$tkyte@ORA817DEV.US.ORACLE.COM> select c.value path,
2 instance || '_ora_' ||
3 ltrim(to_char(a.spid,'fm99999')) || '.trc' file_name
4 from v$process a, v$session b, v$parameter c, v$thread c
5 where a.addr = b.paddr
6 and b.audsid = userenv('sessionid')
7 and c.name = 'user_dump_dest'
8 /
PATH
-----------------------------------------------------------------------------------------------------------------------------------
FILE_NAME
------------------------------
/export/home/ora817/admin/ora817dev/udump
ora817dev_ora_27318.trc
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set sql_trace=true;
Session altered.
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from dual;
D
-
X
ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from scott.emp;
COUNT(*)
----------
14
ops$tkyte@ORA817DEV.US.ORACLE.COM> disconnect
Disconnected from Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
With the Partitioning option
JServer Release 8.1.7.2.0 - Production
ops$tkyte@ORA817DEV.US.ORACLE.COM> connect /
Connected.
ops$tkyte@ORA817DEV.US.ORACLE.COM> exec load_file( '&path', '&file_name' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV.US.ORACLE.COM> set heading off
ops$tkyte@ORA817DEV.US.ORACLE.COM> set feedback off
ops$tkyte@ORA817DEV.US.ORACLE.COM> set trimspool on
ops$tkyte@ORA817DEV.US.ORACLE.COM> set linesize 255
ops$tkyte@ORA817DEV.US.ORACLE.COM> set embedded on
ops$tkyte@ORA817DEV.US.ORACLE.COM> spool &file_name
<b>you should do a SET TERMOUT OFF here as well -- so you don't have to watch the tracefile go by...</b>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select text from trace_tab order by seqno;
Dump file /export/home/ora817/admin/ora817dev/udump/ora817dev_ora_27318.trc
Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
With the Partitioning option
JServer Release 8.1.7.2.0 - Production
ORACLE_HOME = /export/home/ora817
System name: SunOS
Node name: aria-dev
Release: 5.7
Version: Generic_106541-19
Machine: sun4u
Instance name: ora817dev
Redo thread mounted by this instance: 1
Oracle process number: 8
Unix process pid: 27318, image: oracle@aria-dev (TNS V1-V3)
*** SESSION ID:(18.400) 2002-04-08 08:41:09.366
APPNAME mod='01@ test.sql' mh=2821176787 act='' ah=4029777240
=====================
PARSING IN CURSOR #1 len=33 dep=0 uid=216 oct=42 lid=216 tim=3042719131 hv=3732290820 ad='81c0d8e4'
alter session set sql_trace=true
END OF STMT
EXEC #1:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=3042719131
=====================
PARSING IN CURSOR #1 len=19 dep=0 uid=216 oct=3 lid=216 tim=3042719131 hv=3499509676 ad='81409b80'
select * from dual
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=3042719131
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=3042719131
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=4,tim=3042719131
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=3042719131
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=195 op='TABLE ACCESS FULL DUAL '
=====================
PARSING IN CURSOR #1 len=31 dep=0 uid=216 oct=3 lid=216 tim=3042719131 hv=3580614502 ad='813f2f44'
select count(*) from scott.emp
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=3042719131
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=3042719131
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=4,tim=3042719131
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=3042719131
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=0 op='SORT AGGREGATE '
STAT #1 id=2 cnt=14 pid=1 pos=1 obj=28771 op='TABLE ACCESS FULL EMP '
XCTEND rlbk=0, rd_only=1
ops$tkyte@ORA817DEV.US.ORACLE.COM> spool off
<b>now, to show it works:</b>
ops$tkyte@ORA817DEV.US.ORACLE.COM> host tkprof &file_name x.txt
TKPROF: Release 8.1.7.2.0 - Production on Mon Apr 8 08:41:20 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
ops$tkyte@ORA817DEV.US.ORACLE.COM> host cat x.txt
TKPROF: Release 8.1.7.2.0 - Production on Mon Apr 8 08:41:20 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Trace file: ora817dev_ora_27318.trc
Sort options: default
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************
alter session set sql_trace=true
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.03 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.03 0 0 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216
********************************************************************************
select *
from
dual
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 1 4 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 1 4 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL DUAL
********************************************************************************
select count(*)
from
scott.emp
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 1 4 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 1 4 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
14 TABLE ACCESS FULL EMP
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 3 0.00 0.03 0 0 0 0
Fetch 4 0.00 0.00 0 2 8 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 0.00 0.03 0 2 8 2
Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
3 user SQL statements in session.
0 internal SQL statements in session.
3 SQL statements in session.
********************************************************************************
Trace file: ora817dev_ora_27318.trc
Trace file compatibility: 8.00.04
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
0 internal SQL statements in trace file.
3 SQL statements in trace file.
3 unique SQL statements in trace file.
44 lines in trace file.
<b>and that is it, tkprof without ever leaving sqlplus and without getting on the server</b>.
Reader
A reader, April 08, 2002 - 1:38 pm UTC
Tom,
I ran a join of a few dictionary views; the query took
~5 minutes. I turned on trace event 10046, level 8. The trace file
does not show any WAIT statistics at all. How can I find out why
the query takes so long?
From the trace file:
====================
PARSING IN CURSOR #3 len=629 dep=0 uid=4670 oct=3 lid=4670 tim=2014767 hv=4025224304 ad='36647998'
select fs.tablespace_name ,count(bytes) FS_T, max(bytes)/1024/1024 FS_MAX, min(initial_extent)/1024/1024 IE,
min(next_extent)/1024/1024 NE from dba_free_space fs,dba_tablespaces ts
where fs.tablespace_name = ts.tablespace_name and
fs.tablespace_name in
(
select tablespace_name from dba_tables where table_name in (select table_name from dba_tab_columns where
column_name = 'NRI')
union
select tablespace_name from dba_indexes where index_name in
(select index_name from dba_indexes
where table_name in
(select table_name from dba_tab_columns where column_name = 'NRI'))
) group by fs.tablespace_name
PARSE #3:c=34007,e=34431,p=9,cr=362,cu=1,mis=1,r=0,dep=0,og=4,tim=2014768
EXEC #3:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=2014768
FETCH #3:c=381,e=411,p=37,cr=178775,cu=76,mis=0,r=1,dep=0,og=4,tim=2015179
FETCH #3:c=6,e=5,p=0,cr=0,cu=0,mis=0,r=7,dep=0,og=4,tim=2015208
The elapsed Time = 2015179 - 2014768 = 411 seconds?
Thanks
April 08, 2002 - 4:07 pm UTC
I believe you must have messed up somewhere -- and did not set the trace properly.
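For reference, the usual way to enable wait-event tracing is shown below; the "level 8" part is what adds the WAIT lines, so if it was not actually set, the trace will contain no WAIT entries at all:

```sql
alter session set events '10046 trace name context forever, level 8';
-- ... run the slow query here ...
alter session set events '10046 trace name context off';
```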
RE: I turned trace 10046...
Mark A. Williams, April 08, 2002 - 4:20 pm UTC
Another discussion on AskTom (found using the amazing Search/Archives tab) that may be relevant....
</code>
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2202891332282 <code>
Especially the quote from MetaLink:
"We do not guarantee the performance against our dba views with joins. These views are designed to have the optimal performance when queried directly not for joins. In most cases we tell customers that if they don't like our views that are provided wiht [sic] the database, then they can go ahead and create their own."
Reader
A reader, April 09, 2002 - 7:34 am UTC
Tom,
Thanks very much.
I re-created the trace file and found WAIT: stats.
Mostly 'db file sequential read'
Thanks
PLS-00307 too many declarations of 'to_char' match this call
Robert, November 14, 2002 - 10:20 am UTC
Tom, when I try to put the SQL you provided into a stored function, the compiler complains about the TO_CHAR. I have seen this before -- what gives?
select c.value || '/' || instance || '_ora_' ||
ltrim(to_char(a.spid,'fm99999')) || '.trc'
INTO l_return
...<snip>
PLS-00307 too many declarations of 'to_char' match this call
November 14, 2002 - 7:28 pm UTC
ops$tkyte@ORA817DEV.US.ORACLE.COM> begin
2 for x in ( select c.value || '/' || instance || '_ora_' ||
3 ltrim(to_char(<b>to_number(a.spid)</b>,'fm99999')) || '.trc'
4 from v$process a, v$session b, v$parameter c, v$thread c
5 where a.addr = b.paddr
6 and b.audsid = userenv('sessionid')
7 and c.name = 'user_dump_dest' )
8 loop
9 null;
10 end loop;
11 end;
12 /
PL/SQL procedure successfully completed.
Set "_Trace_file_public=true" to give everyone read access
A reader, November 23, 2002 - 6:25 pm UTC
I used to set "_TRACE_FILE_PUBLIC=TRUE" in init.ora to make the trace files readable by all developers. It is then up to each developer to identify his or her session SID and locate the trace files, which are usually named "<ORACLE_SID>_ora_<process ID>".
The reason DBAs won't give access to trace files is that the files are created with default permissions, which only the "oracle" user and members of the "dba" group can read. Either the DBA can run a script (via a cron job) to make these files accessible to others, or the DBA can set the hidden parameter above, which makes the default file permissions world-readable.
November 23, 2002 - 6:37 pm UTC
beware the hidden unknown side effects of that _ init.ora parameter. (that and it won't do much for this windows client who doesn't have access to the unix box)
RE: public trace files...
Mark A. Williams, November 23, 2002 - 7:31 pm UTC
Well, you just may find setting "_trace_files_public=true" is a better way to make them public, since "_trace_file_public" isn't a valid parameter. Plus you still need to fiddle with UMASK on UNIX or directory permissions on NT or WinMM... However, why would you want to open up a security hole like that? About five seconds of looking at trace files with bind values, and I could have some interesting information that slipped right through the old security hole, right? Here's an interesting snippet from Oracle Support:
"This is a security feature that can be bypassed by setting the hidden parameter "_trace_files_public = true" in the "init<sid>.ora" file. This is not a secure thing to do because many trace files include hex dumps of potentially sensitive data."
As far as "Reason DBAs won't give access to trace files"... The reason I don't do it is listed right above: SECURITY. I'm pretty sure the security folks at my place of work wouldn't look too kindly on bypassing security features... (I work in an FDA-regulated environment...) How can you ensure that only authorized people are looking at the trace files they should be? Do you think I would set this parameter on our credit card authorization system? No way.
Curious what others do or how they feel about it. (Don't want Tom to get flooded by follow-ups though...)
Anyway, another $0.02 into the bit bucket.
- Mark
Client How
A reader, December 26, 2002 - 1:03 pm UTC
Hi, Tom,
I'm using SQL*Plus on a client machine. How can
I execute:
host tkprof &file_name x.txt
and
host cat x.txt
since I'm not on the server? Please clarify.
Thanks
December 26, 2002 - 1:37 pm UTC
well, you cannot - you need access to the tracefiles.
Now, if you use the above mentioned trick -- you could because you can "select * from your tracefile" and spool the results locally to run tkprof against.
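Putting that together, the client-only workflow would look something like the following sketch (the local paths are hypothetical; load_file and trace_tab are from the example earlier in this thread):

```sql
-- On the client, after the server-side load_file / trace_tab setup shown earlier:
exec load_file( '&path', '&file_name' );   -- the SERVER reads its own trace file
set heading off feedback off trimspool on
set linesize 255
spool c:\temp\mytrace.trc                  -- the spool file lands on the CLIENT
select text from trace_tab order by seqno;
spool off
host tkprof c:\temp\mytrace.trc c:\temp\report.txt
```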
forget that message
A reader, December 26, 2002 - 1:07 pm UTC
Hi, Tom,
I got it, so please ignore the above question.
I really wish we could delete a question we asked ourselves once we no longer want it answered -- before you answer it, of course.
Thanks again
How can we know soon after a new trace file is generated?
A Reader, January 13, 2003 - 11:32 am UTC
Hi Tom,
Is there a way I can be informed immediately whenever a new trace file is generated in the background_dump_dest or user_dump_dest directory? For example, when a new trace file appears, I would get an email telling me about it. Right now I check these two directories once in a while, and sometimes I find it's already too late to handle the errors. Is there a signalling mechanism such as "whenever new trace" (like SQL*Plus's WHENEVER SQLERROR or WHENEVER OSERROR) that I can use to ask Oracle (8i) to send me an email using UTL_SMTP? Or is there a better way to be informed (via email or otherwise) about new trace files promptly?
Thank you very much for your help in advance.
Perhaps It's Time for Old Sparky
Terry, January 14, 2003 - 7:42 am UTC
Our friend Cary Millsap at Hotsos.com has provided a very nifty tool to collect trace files and view them on a client. The tool is called Sparky. It uses Perl (don't everyone throw up just yet) on the server side, and while I wouldn't be a big fan of this on production systems, I have used it successfully in dev and test environments. It is free and relatively easy to install.
How do we find the name for a JOB process
Basil, April 17, 2003 - 10:53 am UTC
My application has a switch to allow the batch component to turn tracing on. I want to report the file name in our application's log. However, the batch component runs as an Oracle job. The file name doesn't match for these processes. Do you know offhand how the job-based filename is generated? It appears to include the job process number.
April 17, 2003 - 11:12 am UTC
it'll be in the background dump destination -- in the snpNN files -- intermingled with whatever other traces that snp process ran.
What to query?
Basil, April 18, 2003 - 12:19 am UTC
More specifically (referring to my question immediately preceding), what would I query to determine the filename? Early in the thread, you give the following code, which, from experience, provides a filename for non-job trace files:
select c.value || '/' || instance || '_ora_' ||
ltrim(to_char(a.spid,'fm99999')) || '.trc'
from v$process a, v$session b, v$parameter c, v$thread c
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
How would I generate the same sort of thing for a procedure running as a job? How could the procedure KNOW if it's running as a job?
April 18, 2003 - 11:45 am UTC
it'll be hard -- the a.spid part is correct -- however, the job queues handlers will inject their name into the mix (into the filename - j000, j001, ... in 9i and snp0, snp1, snp2, ... in 8i).
The best you'll get reliably will be the spid -- which will be in the filename -- so ls -l $ORACLE_HOME/admin/$ORACLE_SID/bdump/*_{a.spid}.trc will find the file for you.
You can discover if you are in a job by looking at the program that is running you -- the backgrounds identify themselves:
ops$tkyte@ORA920> drop table log;
Table dropped.
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table log
2 as
3 select b.SID, b.SERIAL#, b.USERNAME, b.COMMAND, b.OWNERID,
4 b.STATUS, b.SERVER, b.SCHEMANAME, b.PROCESS, b.MACHINE,
5 b.TERMINAL, b.PROGRAM, b.TYPE, b.MODULE, b.ACTION,
6 b.CLIENT_INFO, a.PROGRAM aprogram, a.BACKGROUND abackground
7 from v$process a, v$session b, v$parameter c
8 where 1=0
9 /
Table created.
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace procedure jobproc
2 as
3 begin
4
5 execute immediate 'alter session set sql_trace=true';
6
7 insert into log
8 select b.SID, b.SERIAL#, b.USERNAME, b.COMMAND, b.OWNERID,
9 b.STATUS, b.SERVER, b.SCHEMANAME, b.PROCESS, b.MACHINE,
10 b.TERMINAL, b.PROGRAM, b.TYPE, b.MODULE, b.ACTION,
11 b.CLIENT_INFO, a.PROGRAM aprogram, a.BACKGROUND abackground
12 from v$process a, v$session b, v$parameter c
13 where a.addr = b.paddr
14 and b.sid = (select sid from v$mystat where rownum=1)
15 and c.name = 'background_dump_dest' ;
16
17 execute immediate 'alter session set sql_trace=false';
18
19 end;
20 /
Procedure created.
ops$tkyte@ORA920> pause
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from log;
no rows selected
ops$tkyte@ORA920> variable n number
ops$tkyte@ORA920> exec dbms_job.submit( :n, 'jobproc;' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> commit;
Commit complete.
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec dbms_lock.sleep(5)
PL/SQL procedure successfully completed.
ops$tkyte@ORA920>
ops$tkyte@ORA920> set echo off
SID : 15
SERIAL# : 50
USERNAME : OPS$TKYTE
COMMAND : 2
OWNERID : 2147483644
STATUS : ACTIVE
SERVER : DEDICATED
SCHEMANAME : OPS$TKYTE
PROCESS :
MACHINE :
TERMINAL :
PROGRAM :
TYPE : USER
MODULE :
ACTION :
CLIENT_INFO :<b>
APROGRAM : oracle@tkyte-pc-isdn.us.oracle.com (J000)</b>
ABACKGROUND :
-----------------
PL/SQL procedure successfully completed.
that shows I was using J000 (job queue process 0) in 9i. in fact, using that knowledge -- we can see:
ops$tkyte@ORA920> drop table log;
Table dropped.
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table log
2 as
3 select c.value || '/' || instance || '_' ||
4 lower(rtrim(substr( a.program, instr( a.program, '(' )+1 ), ')' ))
5 || '_' || ltrim(to_char(a.spid,'fm99999')) || '.trc' msg
6 from v$process a, v$session b, v$parameter c, v$thread
7 where 1=0
8 /
Table created.
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace procedure jobproc
2 as
3 begin
4
5 execute immediate 'alter session set sql_trace=true';
6
7 insert into log
8 select c.value || '/' || instance || '_' ||
9 lower(rtrim(substr( a.program, instr( a.program, '(' )+1 ), ')' ))
10 || '_' || ltrim(to_char(a.spid,'fm99999')) || '.trc'
11 from v$process a, v$session b, v$parameter c, v$thread
12 where a.addr = b.paddr
13 and b.sid = (select sid from v$mystat where rownum=1)
14 and c.name = 'background_dump_dest' ;
15
16 execute immediate 'alter session set sql_trace=false';
17
18 end;
19 /
Procedure created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from log;
no rows selected
ops$tkyte@ORA920> !ls -l $ORACLE_HOME/admin/$ORACLE_SID/bdump
total 4588
-rw-r--r-- 1 ora920 ora920 4682981 Apr 18 11:36 alert_ora920.log
ops$tkyte@ORA920>
ops$tkyte@ORA920> variable n number
ops$tkyte@ORA920> exec dbms_job.submit( :n, 'jobproc;' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> commit;
Commit complete.
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec dbms_lock.sleep(15)
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> select * from log;
MSG
-----------------------------------------------------------------------------------------------------------------------------------
/usr/oracle/ora920/OraHome1/admin/ora920/bdump/ora920_j000_2488.trc
ops$tkyte@ORA920> !ls -l $ORACLE_HOME/admin/$ORACLE_SID/bdump
total 4596
-rw-r--r-- 1 ora920 ora920 4683070 Apr 18 11:37 alert_ora920.log
-rw-rw---- 1 ora920 ora920 5773 Apr 18 11:37 ora920_j000_2488.trc
you'll have to customize that per version and OS of course...
That should do it. Is SYS_CONTEXT of any help?
Basil, April 18, 2003 - 12:39 pm UTC
Thanks for the previous example. As I was working through implementing this myself, I considered putting this job info into a context, for quick retrieval in a proc that would want to know if it's running as a job or not. As I was looking at the SYS_CONTEXT function, I noticed there are BG_JOB_ID and FG_JOB_ID values in the USERENV namespace, which I hadn't known about before.
For FG_JOB_ID, I didn't get anything that looked very helpful:
SQL> select sys_context('USERENV','FG_JOB_ID') from dual;
SYS_CONTEXT('USERENV','FG_JOB_ID')
-------------------------------------
0
SQL> create table xlog (val varchar2(500));
Table created.
SQL>
SQL> create or replace procedure jobproc
2 as
3 begin
4
5 insert into xlog
6 select sys_context('USERENV','FG_JOB_ID') from dual;
7
8
9 end;
10 /
Procedure created.
dc@SMT92> variable n number
dc@SMT92> exec dbms_job.submit( :n, 'jobproc;' );
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> select * from xlog;
VAL
-----------------
1921
I couldn't get BG_JOB_ID to be anything but NULL, both interactively and as a job. What exactly does this do?
April 18, 2003 - 3:46 pm UTC
bg_job_id does appear to be a "no op"...
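Based on the behavior observed above (a sketch, not verified across versions), FG_JOB_ID alone looks sufficient for the "am I running as a job?" test: it returned the job number from the job queue and 0 interactively:

```sql
-- Sketch: detect job-queue execution inside a procedure, based on the
-- FG_JOB_ID behavior shown above (0 appears to mean "interactive session").
if to_number( sys_context('USERENV','FG_JOB_ID') ) > 0 then
    -- running via dbms_job: the j000-style trace file name would apply
    null;
else
    -- interactive: the usual <sid>_ora_<spid>.trc name would apply
    null;
end if;
```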
tkprof..
vj, April 22, 2003 - 3:29 am UTC
tom...
assume I have a complex query like the one below, and I got the trace file... it is taking a lot of time.
SELECT a.distributor,a.p_investor_id,a.investor_id,
b.name_fapp,b.name_sapp,b.name_tapp,b.id_new_no_fapp,b.id_old_no_fapp ,NVL(a.folio_lock,'N') folio_lock,
SUM(a.units-NVL(d.units_locked,0)) units
FROM
ISS_INVESTOR_HOLDINGS_BAL A,
ISS_INVESTOR B ,
ISS_PLAN C,
/*(SELECT FOLIO_ID,SUM(UNITS) UNITS
FROM ISS_LOCK_TRAN
WHERE UNTAG_REF IS NULL
GROUP BY FOLIO_ID)*/
ISS_VWLOCKEDRECS D
WHERE
NVL(A.ACCOUNT_STATUS,'O')='O' AND
a.investor_id='0000097' AND
--A TO B CONDITION
A.INVESTOR_ID = B.INVESTOR_ID AND
A.P_INVESTOR_ID = B.P_INVESTOR_ID AND
B.RECTYPE = 'L' AND
A.DISTRIBUTOR=B.DISTRIBUTOR AND
--A TO C CONDITION
C.DISTRIBUTOR=A.DISTRIBUTOR AND
C.AMC=A.AMC AND
C.SCHEME=A.SCHEME AND
C.SCHEME_TYPE ='LIP' AND
C.PLAN=A.PLAN AND
C.RECTYPE='L' AND
--A TO D CONDITION
A.FOLIO_ID=D.FOLIO_ID
GROUP BY
A.DISTRIBUTOR,A.P_INVESTOR_ID,A.INVESTOR_ID,
B.NAME_FAPP,B.NAME_SAPP,B.NAME_TAPP,B.ID_NEW_NO_FAPP,B.ID_OLD_NO_FAPP ,NVL(A.FOLIO_LOCK,'N')
now from the trace file the output i am looking at is
Rows Row Source Operation
------- --------------------------------------------------
1 SORT GROUP BY
1 NESTED LOOPS
2 NESTED LOOPS
2 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID ISS_INVESTOR
2 INDEX RANGE SCAN (object id 82156)
2 TABLE ACCESS FULL ISS_BAL
2 TABLE ACCESS FULL ISS_PLAN
1 VIEW ISS_VWLOCKEDRECS
31113 FILTER
31114 SORT GROUP BY
152709 TABLE ACCESS BY INDEX ROWID ISS_TRAN
152710 INDEX RANGE SCAN (object id 83065)
now... can you suggest what I need to look at to make this query efficient?
I am looking at this and trying to analyze it:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.72 0 0 0 0
Execute 2 0.01 4.10 0 0 2 0
Fetch 2 5.44 371.15 45817 147360 146 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 5.46 375.97 45817 147360 148 1
rgds
April 22, 2003 - 7:46 am UTC
looks like the view isn't appropriate for this query. looks like the view is being entirely materialized AND THEN that temp result merged into the rest of the query.
So, what does the view look like?
Vj, April 23, 2003 - 1:08 am UTC
the view looks like this tom...
CREATE OR REPLACE VIEW ISS_VWLOCKEDRECS ( FOLIO_ID,
UNITS_LOCKED ) AS SELECT /*+ INDEX(ISS_INVESTOR_HOLDINGS_TRAN TMPSRINI) */ folio_id,SUM(units_locked) units_locked
FROM ISS_INVESTOR_HOLDINGS_TRAN
WHERE units_balance>0
GROUP BY folio_id
HAVING NVL(SUM(units_locked),0)=0
rgds
April 23, 2003 - 7:11 am UTC
well, two things
-- what's up with that hint? Lose it. It is actually a major cause of your problem here.
-- the aggregate will prevent view merging
well -- three things.
unless I'm reading that wrong -- units_locked is always ZERO or NULL in that view.
So,
SUM(a.units-NVL(d.units_locked,0)) units
is the same as
nvl(sum(a.units),0)
and you don't need to join, you just need to "in"
..
A.FOLIO_ID in ( select FOLIO_ID from iss_vwlockedrecs )
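Assuming I've read the view correctly, the rewrite being described would look roughly like this (an untested sketch; the elided predicates are the same A-to-B and A-to-C conditions from the original query):

```sql
-- Sketch of the suggested rewrite: drop the join to the view (whose
-- units_locked is always zero or null) and use IN instead.
SELECT a.distributor, a.p_investor_id, a.investor_id,
       b.name_fapp, b.name_sapp, b.name_tapp,
       b.id_new_no_fapp, b.id_old_no_fapp,
       NVL(a.folio_lock,'N') folio_lock,
       NVL(SUM(a.units),0) units
  FROM iss_investor_holdings_bal a, iss_investor b, iss_plan c
 WHERE ... -- same A-to-B and A-to-C conditions as before
   AND a.folio_id IN ( SELECT folio_id FROM iss_vwlockedrecs )
 GROUP BY a.distributor, a.p_investor_id, a.investor_id,
       b.name_fapp, b.name_sapp, b.name_tapp,
       b.id_new_no_fapp, b.id_old_no_fapp, NVL(a.folio_lock,'N')
```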
for tracing what background processes?
A reader, April 26, 2003 - 6:05 pm UTC
I have a trace file in bdump, but I cannot figure out which (background) process it is for. Which process does S000 stand for?
Thanks
/u01/oradb32a/oracle/product/admin/tol32db1/bdump/xxxdb1_s000_248.trc
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
ORACLE_HOME = /u01/oradb32a/oracle/product
System name: SunOS
Node name: xxx
Release: 5.8
Version: Generic_108528-16
Machine: sun4u
Instance name: xxxdb1
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 248, image: xxxx (S000)
*** SESSION ID:(58.10616) 2003-02-07 16:30:39.234
Invalid Login Using Trusted Authentication
*** 2003-02-07 16:30:54.255
*** SESSION ID:(105.12213) 2003-02-07 16:30:54.254
Invalid Login Using Trusted Authentication
*** 2003-02-07 16:31:10.270
*** SESSION ID:(112.17714) 2003-02-07 16:31:10.270
Invalid Login Using Trusted Authentication
April 26, 2003 - 8:04 pm UTC
s000 is a shared server
A reader, April 26, 2003 - 9:58 pm UTC
I have also seen s001 and s002 trace files in bdump. Does that mean the DB has MTS set up?
Thanks
April 27, 2003 - 8:20 am UTC
yes, s000 was sufficient to detect that fact...
A reader, April 27, 2003 - 9:01 am UTC
Tom,
Thank you so much for the help on the trace files...
How can I confirm that MTS has been set up? I mean, from the SYS views, init.ora...?
Thanks,
Mike
April 27, 2003 - 4:28 pm UTC
show parameter MTS
in sqlplus.
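As a quick sketch of what that check looks like (the second query is my addition, assuming 9i where V$SHARED_SERVER is available):

```sql
SQL> show parameter mts_servers
-- a non-zero value means shared (MTS) servers are started

SQL> select name, status from v$shared_server;
-- lists the shared server processes: S000, S001, ...
```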
Trace File... I can't see it!
Richard, May 28, 2003 - 3:53 am UTC
Hi,
I have used the
select c.value || '/ORA' || to_char(a.spid, 'fm00000') || '.trc'
from v$process a, v$session b, v$parameter c
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
script from Beginning Oracle Programming, and it gives me the following:
f:\oracle\admin\bip\udump/ORA00165.trc
However, when I examine the contents of f:\oracle\admin\bip\udump I cannot see ORA00165.trc.
What am I doing wrong?
May 28, 2003 - 7:03 am UTC
So, what files do you see there? Maybe from that you can figure out what needs to be tweaked to get this right. Do you see a file with the number 165 in it -- if so, what is its format?
if there is nothing in udump, maybe you are using shared server and everything is in bdump (meaning using sql_trace=true isn't really going to be something you are doing, you need a dedicated connection)
Still Looking!
Richard, May 28, 2003 - 12:01 pm UTC
There is no file with "165" in its name in either BDUMP or UDUMP.
We did recently have an ORA-00600 situation (we use NT and Oracle Support tell me that this error is only associated with NT), and I do wonder whether all is now well, even though the DB is running and everyone can access it OK.
May 28, 2003 - 7:22 pm UTC
So, did you see a *new* trace file -- more importantly, did you see YOUR trace file?
As I recall, Windows trace file numbers could wrap around easily... find your trace file (which is being generated).
sort the directory by date.
Richard, May 29, 2003 - 11:50 am UTC
The latest trace file that can be seen in the NT directory listing is ORA00496.trc, with a Creation Date of 25 March 2003, 16:08:50 and a Modification Date of 5 March 2003, 16:08:51. The next latest trace file is ORA00615.trc, but that has a Modification Date of 21 November 2002, 12:13:14.
The trace file name generated by the script today (29-MAY-2003) is called ORA00345.trc, and I can't see that either!
Thanks for your help with this increasingly tedious thread.
May 29, 2003 - 1:36 pm UTC
a creation date of 25 mar, but modified 5 mar, hmmm...
anyway -- i think you are not tracing then. Run this for me:
show parameter dump
select server from v$session where sid = (select sid from v$mystat where rownum=1);
in sqlplus
Richard, May 30, 2003 - 6:21 am UTC
Mod. date should have been 25, not 5!
=========================
Requested output follows:
=========================
SQL> show parameter dump
NAME TYPE VALUE
-------------------------- ------- --------------------
background_dump_dest string f:\oracle\admin\bip\
max_dump_file_size integer 10240
user_dump_dest string f:\oracle\admin\bip\
SQL>
and
SQL> select server from v$session where sid = (select sid
from v$mystat where rownum=1);
SERVER
---------
DEDICATED
SQL>
May 30, 2003 - 8:19 am UTC
the files are in f:\oracle\admin\bip, not f:\oracle\admin\bip\udump according to that.
Richard, May 30, 2003 - 10:09 am UTC
No trace files (only folders) in f:\oracle\admin\bip!
May 30, 2003 - 10:22 am UTC
then either
o you are doing something really wrong (eg: wrong server, wrong f:\ drive something)
o you might have max dump file size at 0
o oracle doesn't have the permissions to write to that directory.
this stuff works, really it does. It is used millions of times a day, every day.
Thanks for helping me.
Richard, May 30, 2003 - 10:36 am UTC
Thanks for all your help. I suspect that you are right when you suggest:
o you are doing something really wrong (eg: wrong server, wrong f:\ drive
something)
I'll persevere!
trace files
Venkat, July 23, 2003 - 8:17 am UTC
Tom,
We get the following error while trying the method suggested. What is the reason?
SQL> exec load_file( '&path', '&file_name' );
BEGIN load_file( '/orbicore/oradata/orbicore/admin/udump', 'orbicore_ora_3181.tr
c' ); END;
*
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 98
ORA-06512: at "SYS.UTL_FILE", line 192
ORA-06512: at "ORBIOASIS.LOAD_FILE", line 11
ORA-06512: at line 1
Thanks,
Venkat
July 23, 2003 - 8:37 am UTC
you don't have utl_file set up properly.
just try this -- create a small block that does nothing other than call utl_file.fopen.
until that works -- this won't work.
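A minimal sketch of such a smoke test, using the path and file name from the error message above (in 9i and later you can pass a DIRECTORY object name instead of relying on utl_file_dir):

```sql
-- If this block fails, load_file will fail too: utl_file is not set up.
declare
    l_file utl_file.file_type;
begin
    l_file := utl_file.fopen( '/orbicore/oradata/orbicore/admin/udump',
                              'orbicore_ora_3181.trc', 'r' );
    utl_file.fclose( l_file );
    dbms_output.put_line( 'utl_file is set up correctly' );
end;
/
```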
UTL_FILE
A reader, September 09, 2003 - 3:19 pm UTC
Hi Tom,
I faced the same error.
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 98
ORA-06512: at "SYS.UTL_FILE", line 192
ORA-06512: at "SYS.LOAD_FILE", line 9
ORA-06512: at line 1
You said I need to set up UTL_FILE properly. I recompiled it and it was OK. I checked the grants; those were fine too. What else do I need to do to fix the error?
Thanks for your time.
how can you get user_dump_dest directory value from database?
A reader, January 27, 2004 - 2:06 pm UTC
SQL> select name, value from v$parameter
2 where name = 'user_dump_dest';
user_dump_dest
?/rdbms/log
I understand the "?" perhaps stands for Oracle home. How
can I get the absolute path of this parameter
from sql?
Thanx!
January 28, 2004 - 7:58 am UTC
You cannot -- it is known in the server processes itself. You have to be on "the server" and looking at the environment to get that information.
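That was true in 9i; on 11g and later the server does expose the fully resolved paths through V$DIAG_INFO, so the '?' placeholder problem goes away:

```sql
-- 11g+ only: V$DIAG_INFO reports resolved diagnostic paths,
-- including the exact name of this session's trace file.
select name, value
  from v$diag_info
 where name in ( 'Diag Trace', 'Default Trace File' );
```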
Thanx!
A reader, January 28, 2004 - 1:15 pm UTC
alter session set sql_trace = false; doesn't release the trace file..
Justin, February 06, 2004 - 11:44 am UTC
Tom,
Could you give a list of steps you perform to issue several benchmarks of some code using sql_trace and tkprof in the same session?
For example: I am running a packaged function -
var x clob
BEGIN
:x := test_pkg.VMGETVENDORLIST;
END;
/
where VMGETVENDORLIST is returning XML from the DB.
When I issue sql_trace=true and run that command, I get the trace file I want, tkprof it, and check out the report. *BUT*, if I issue sql_trace=false and try to delete the trace file for a second run (so that the run results don't "stack up" in the trace file), Windows doesn't let me delete the file because it sees it as still being open. I have even allowed some time to pass thinking that it would be released, but it is not. It isn't until I terminate my connection that the trace file "goes away" on its own, and I reconnect and do the steps over to "start from scratch" so my trace file is clean.
So... to restate the question, do you delete your trace file for a second run of a benchmark, or do you let the results "stack up" in the trace file, or...?
Thanks much
February 07, 2004 - 1:26 pm UTC
you have to exit the session and start a new one. we keep the trace file open for that session.
use separate sessions. I always exit sqlplus to get the trace file closed (else the STAT records with really important data might not be there!)
so, i
set sql trace on
do something
exit sqlplus, the application, whatever
analyze the results.
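A sketch of one such benchmark run as a script, using the package call from the question (the tracefile_identifier tag is optional but makes the file easy to find; the file name in the tkprof step is an example, not a real name):

```sql
-- One run per session; exiting sqlplus closes the trace file
-- and flushes the important STAT records.
alter session set timed_statistics = true;
alter session set tracefile_identifier = 'run1';
alter session set sql_trace = true;

variable x clob
begin
    :x := test_pkg.VMGETVENDORLIST;
end;
/

exit
```

Then, from the OS prompt, something like `tkprof ora01234_run1.trc run1.prf` against the newly closed file, reconnect, and repeat with a new identifier for the next run.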
v$session.username
A reader, February 07, 2004 - 11:35 am UTC
In what situations will v$session.username appear in the trace file?
February 07, 2004 - 3:15 pm UTC
i don't have an exhaustive list.
v$session.username
A reader, February 07, 2004 - 9:19 pm UTC
Can you give me just one situation
February 08, 2004 - 7:25 am UTC
a dump for a deadlock has usernames in it.
...
----------------------------------------
SO: 0x532b8850, type: 4, owner: 0x5328ff88, flag: INIT/-/-/0x00
(session) trans: 0x53afecd0, creator: 0x5328ff88, flag: (10100041) USR/- BSY/-/-/-/-/-
DID: 0001-000B-00000050, short-term DID: 0000-0000-00000000
txn branch: (nil)
oct: 2, prv: 0, sql: 0x57da9188, psql: 0x57da9188, user: 76/OPS$TKYTE
O/S info: use
.....
Thanks
A reader, February 08, 2004 - 6:58 pm UTC
Is user_dump_dest session modifiable?
Arun Gupta, September 02, 2004 - 1:12 pm UTC
Tom,
I am reading the sept/oct issue of Oracle magazine. In the article "Tracing SQL in Oracle Database 10g", the author has given command:
alter session set user_dump_dest=<path>
I looked up in the 10g documentation and it says that user_dump_dest is "alter system" modifiable. Which is correct? I cannot test since I do not have 10g database.
Thanks
September 02, 2004 - 1:47 pm UTC
ops$tkyte@ORA10G> alter session set user_dump_dest = '/tmp';
alter session set user_dump_dest = '/tmp'
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this
option
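So the documentation is correct: the parameter is system modifiable, not session modifiable. The working form would be:

```sql
-- user_dump_dest can only be changed instance-wide (10g and earlier
-- releases that still use this parameter)
alter system set user_dump_dest = '/tmp';
```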
Trace file
Laxman Kondal, December 20, 2004 - 10:57 am UTC
Hi Tom
I found a few trace files and am not sure how they were generated. A copy/paste is:
/data/admin/ops1/udump/ops1_ora_16117.trc
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
ORACLE_HOME = /opt/oracle/product/9.2.0.1.0
System name: Linux
Node name: mrm-db
Release: 2.4.9-e.12smp
Version: #1 SMP Tue Feb 11 02:24:10 EST 2003
Machine: i686
Instance name: ops1
Redo thread mounted by this instance: 1
Oracle process number: 17
Unix process pid: 16117, image: oracle@mrm-db (TNS V1-V3)
*** SESSION ID:(15.17029) 2004-12-10 12:10:32.576
----- Probe TARGET session ----
Probe version 2.4, debugID "000F42850001", timeout 180
Probe:set_diagnostic_level: setting to 1
Probe:write: "<init>2"
Probe:read: "<continue>0 2"
Probe:write: "1000"
Probe:write: "<init>2"
.
.
.
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Ptobe:write:
---
and another trace file in other instance is:
/data/admin/ops2/udump/ops2_ora_10282.trc
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
ORACLE_HOME = /opt/oracle/product/9.2.0.1.0
System name: Linux
Node name: mrm-db
Release: 2.4.9-e.12smp
Version: #1 SMP Tue Feb 11 02:24:10 EST 2003
Machine: i686
Instance name: ops2
Redo thread mounted by this instance: 1
Oracle process number: 24
Unix process pid: 10282, image: oracle@mrm-db (TNS V1-V3)
*** SESSION ID:(26.2) 2004-12-16 11:24:58.622
----- Probe DEBUG/CONTROL session ----
Probe version 2.4, debugID "001807060001", timeout 180
Probe:set_diagnostic_level: setting to 1
Probe:read: "<init>2"
Probe:write: "<break line>11 77 1 DATASET_NEW OPOS "
.
.
.
Probe:write: "<continue>0 514"
Probe:read: "[OK]1000"
Probe:read: "<knl exit>25"
Probe:R:enqueue: queued 20 1
Probe:R:enqueue: queued 20 2
Probe:read: "<init>2"
Probe:R:send_pending_requests: 20 1
Probe:write: "<delete bpt>20 1"
Probe:read: "[OK]1000 0"
Probe:R:send_pending_requests: 20 2
Probe:write: "<delete bpt>20 2"
Probe:read: "[OK]1000 0"
Probe:write: "<continue>0 0"
Probe:read: "[OK]1000"
Probe:read: "<knl exit>25"
*** 2004-12-16 11:26:38.364
Probe: removing pipes
--------
One is a Probe TARGET session and the other is a Probe DEBUG/CONTROL session.
What is the source of these files and what do they mean?
The sql_trace parameter is FALSE in both instances:
opo2@OPS2> SHOW PARAMETER TRACE
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_trace integer 0
oracle_trace_collection_name string
oracle_trace_collection_path string ?/otrace/admin/cdf
oracle_trace_collection_size integer 5242880
oracle_trace_enable boolean FALSE
oracle_trace_facility_name string oracled
oracle_trace_facility_path string ?/otrace/admin/fdf
sql_trace boolean FALSE
trace_enabled boolean TRUE
tracefile_identifier string
opo2@OPS2>
Usually these files are 4 KB to 45 KB in size and I cannot figure out which parameter is causing them to be created.
We do a lot of debugging with TOAD and PL/SQL Developer.
Thanks for your help.
December 20, 2004 - 1:46 pm UTC
do you run java stored procedures?
Trace file
Laxman Kondal, December 21, 2004 - 4:03 pm UTC
Hi Tom
We don't have any Java stored procedures; all procedures are PL/SQL, but the TAPIs were created by our senior DBA with Designer.
Thanks.
December 21, 2004 - 7:11 pm UTC
does the data in this stuff:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Ptobe:write:
look like "your data" -- "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)" for example.
this doesn't look like anything "oracle"
can you see if any of your developed code relies on dbms_system?
select * from user_dependencies where referenced_name = 'DBMS_SYSTEM';
I believe your developers might be using an undocumented api to create trace files.
Trace file
Laxman Kondal, December 22, 2004 - 8:09 am UTC
Hi Tom
This data is ours, and the trace file was generated while loading it. None of the developers are using dbms_system in any code.
opo2@OPS2> select * from user_dependencies where referenced_name = 'DBMS_SYSTEM';
no rows selected
opo2@OPS2>
I am not sure if this has anything to do with our method of creating the schema. I have no clue what the purpose of delete cascade in a foreign key is, and I can't find any documentation to understand this.
alter table MAINT_REQ
add constraint MNR_DTS_FK foreign key (DTS_ID)
references DSETS (DTS_ID) on delete cascade;
alter table MAINT_REQ
add constraint MNR_SYS_FK foreign key (SYS_ID)
references SYSTEMS (SYS_ID) on delete cascade;
You once mentioned that delete cascade works downward, but this doesn't seem to be working that way, so why was it created, and why is no error ever seen? Some FKs are like 'on delete cascade enabled'.
Can you please help me understand whether we failed to create something or messed up somewhere?
Is there any way to find out if an undocumented API is being used to create the trace files?
Thanks
December 22, 2004 - 9:57 am UTC
that alter on MAINT_REQ for example is going to delete rows in maint_req when rows in DSETS are deleted.
can you ask your programmers where these messages are coming from?
Trace file
Laxman Kondal, December 22, 2004 - 10:20 am UTC
Hi Tom
Thanks for reply.
None of our programmers have any clue what a trace file is. They are all Java people, and we do their procedures in SQL/PLSQL.
Regarding delete cascade in an FK: do you mean that if we do not have delete cascade on the PK, we can do it on the FK and still have the same functionality?
December 22, 2004 - 11:08 am UTC
show them this:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Ptobe:write:
and ask them "where do you generate those strings please"
i don't know what that last paragraph means.
delete cascade is a FK option, not a PK option.
if you do not have delete cascade AND you try to delete a parent with children records, it (the delete) will fail.
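A minimal sketch of the difference, using two hypothetical child tables (c1 without the cascade, c2 with it):

```sql
create table p  ( id number primary key );
create table c1 ( id number references p );                    -- no cascade
create table c2 ( id number references p on delete cascade );  -- cascade

insert into p  values ( 1 );
insert into c1 values ( 1 );

delete from p;   -- fails with ORA-02292: c1 has a child record

delete from c1;
insert into c2 values ( 1 );
delete from p;   -- succeeds: the matching row in c2 is deleted too
```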
Looks like the trace file is from a debugger
Gary, December 22, 2004 - 4:18 pm UTC
DBMS_DEBUG has a couple of procedures (initialize and attach_session) with a 'DIAGNOSTICS' parameter which "Indicates whether to dump diagnostic output to the tracefile."
The bits about PROBE and DIAGNOSTIC LEVEL in your file suggest that this may be related.
You could try logging the logons and match those with the trace file to try to work out who is generating these, and then which debugger they are using. Maybe the debugger is some beta version where they've left the diagnostics turned on.
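For reference, a sketch of how that diagnostics flag gets turned on through DBMS_DEBUG (the flag value 1 here is an assumption; the documentation only says non-zero dumps diagnostics to the trace file, which would explain the Probe:read/Probe:write lines):

```sql
-- Run in the session to be debugged ("target" session).
declare
    l_id varchar2(30);
begin
    l_id := dbms_debug.initialize( debug_session_id => null,
                                   diagnostics      => 1 );  -- non-zero => dump to trace
    dbms_output.put_line( 'debug session id: ' || l_id );
end;
/
```

A debugger (such as the ones in TOAD or PL/SQL Developer) calling initialize or attach_session with that flag set would produce exactly these trace files.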
Trace file
Laxman Kondal, December 27, 2004 - 4:14 pm UTC
Hi Tom
Merry Christmas and Happy New Year
Everything in this trace file starting with 'Probe:write', like
'Probe:write: "1000 Det Cord Assemblies (SMDC)(MD44/M614/MU60)',
is data loaded from an external table.
And everything that starts with 'Probe:read:' and 'Probe:S:get_scalar:' is not our data from any table.
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Probe:write: "1000 Det Cord Assemblies (SMDC) (MD44/M614/MU60)"
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
Ptobe:write:
What are these:
Probe:read: "<get scalar>38 subsys_name 0 "
Probe:S:get_scalar: subsys_name frame:0 format:
And are all of these related to the external table?
What are 'Probe:S:get_scalar:' and 'Probe:read: "<get scalar>38'?
There are 'l_subsys_name' and 'v_subsys_name' variables in the procedure, but no 'subsys_name'.
Thanks
December 27, 2004 - 4:49 pm UTC
sorry -- I've never seen it, I've never heard of it, a search on the problem database returns nothing (meaning others haven't called in saying "I've got this tracefile with this stuff in it")
does it happen for a new external table?
all external tables?
what events/non-standard init.ora parameters do you have set.
what is your version and OS and other relevant "system" details you might think of
Trace file
Laxman Kondal, December 28, 2004 - 9:25 am UTC
Hi Tom
We have some data coming from Access/spreadsheets into a flat file, and this flat file is loaded into an external table. From the external table, data is loaded into regular tables through a procedure, and the trace file is generated while loading data into the regular tables.
No non-standard init.ora parameters; it's Oracle Release 9.2.0.4.0 and the operating system is Red Hat Linux 9.
We have 96 heap tables, 36 views, 9 MVs, and 3 external tables. About 85 of the tables have 9+ triggers each, generated by Designer, and these triggers call about 250+ packages generated by Designer. I am not good at Designer and don't know the purpose behind these triggers and packages.
Since no one has seen this before, and since the one thing I am not good at (and never wanted to be) is Designer, and our schema was created by Designer, this makes me go into more detail about our schema.
Thanks.
Maybe
Invisible, April 11, 2005 - 8:40 am UTC
On my system (Oracle 8i), when I do a 'backup controlfile to trace', I get a file called 'ORA#####.TRC'. Any idea where the number part comes from?
(I've got this in a backup script. Would be nice to issue an OS command to rename the file to something more sensible - like the date of the backup?)
April 11, 2005 - 9:13 am UTC
It is the normal "trace file name", the number is process id generally.
You can 'tag' the file
ops$tkyte@ORA9IR2> alter session set tracefile_identifier = 'look_for_me';
Session altered.
ops$tkyte@ORA9IR2> alter database backup controlfile to trace;
Database altered.
ops$tkyte@ORA9IR2> !ls -l $ORACLE_HOME/admin/$ORACLE_SID/udump/*look_for_me*
-rw-rw---- 1 ora9ir2 ora9ir2 5941 Apr 11 08:10 /home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_6308_look_for_me.trc
Or, even easier:
ops$tkyte@ORA9IR2> alter database backup controlfile to trace as '/tmp/hey.trc';
Database altered.
ops$tkyte@ORA9IR2> !ls -l /tmp/hey.trc
-rw-rw-r-- 1 ora9ir2 ora9ir2 4573 Apr 11 08:12 /tmp/hey.trc
Thanks!
Invisible, April 11, 2005 - 10:29 am UTC
That last one looks like the exact thing I need. :-D
Thanks for that...
April 11, 2005 - 10:41 am UTC
tkyte@ORA8IW> alter database backup controlfile to trace as 'c:\hey.trc';
alter database backup controlfile to trace as 'c:\hey.trc'
*
ERROR at line 1:
ORA-00933: SQL command not properly ended
sorry, just noticed the 8i'ism there. The tracefile identifier will work, but not "as...." in that release.
Thanks again! :-)
Invisible, April 11, 2005 - 10:48 am UTC
LOL. Thanks TK. You probably just preempted my next post in a few hours' time. ;-)
OK, well, it would have been nice to put the trace file in the same directory as all the other backed up files, but even if I just "tag" the filename, I can do a "copy path\*_TAG newpath" or something like that...
Heh - good job I checked back here! ;-)
PS. We're looking at upgrading the application we use Oracle for. And wouldn't you know it - it wants v9 or higher. Mmm, maybe I'll be able to get them to buy me something newer now? ;-)
reader
A reader, December 05, 2005 - 10:41 am UTC
Your book refers to setting trace_identity (in init.ora)
for the trace files. Is there anything similar to this for
.aud file identification
December 06, 2005 - 5:07 am UTC
I think you mean "tracefile_identifier".
audit files are named after the server process id
sys@ORA9IR2> !ls
ora_19370.aud
sys@ORA9IR2> @getspid
DEDICATED_SE CLIENTPID
------------ ------------
19370 19369
sys@ORA9IR2> l
1 select a.spid dedicated_server,
2 b.process clientpid
3 from v$process a, v$session b
4 where a.addr = b.paddr
5* and b.sid = (select sid from v$mystat where rownum=1)
you do not have any control over their name as far as I know of.
how to remove the locks
sudheer reddy, June 17, 2006 - 6:52 am UTC
Sir,
I am an SAP consultant. My DBA has left the job (while leaving, he told me about your site and the Oracle reference books). Now my boss says to look after the database till someone is appointed in his place. I am able to manage a bit by going through your site.
My problem is: how do I remove the locks?
thanks and regards
June 17, 2006 - 7:15 am UTC
"commit" in the application - that is how locks are removed? not sure what else to say.
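If the question is really "who is holding locks that others are waiting on", a sketch of a blocker/blocked query against V$LOCK (blocked sessions release the moment the blocker commits or rolls back):

```sql
-- Pairs up lock holders (block = 1) with waiters (request > 0)
select s1.sid blocker, s2.sid blocked
  from v$lock l1, v$lock l2, v$session s1, v$session s2
 where l1.block   = 1
   and l2.request > 0
   and l1.id1 = l2.id1
   and l1.id2 = l2.id2
   and s1.sid = l1.sid
   and s2.sid = l2.sid;
```

As a last resort, if the blocking session belongs to a dead client and will never commit, a DBA can terminate it with `alter system kill session '<sid>,<serial#>';`.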
trace file standby
mal, June 27, 2006 - 12:32 pm UTC
Hi Tom,
I have Data Guard configured with maximum performance mode. For every log switch I am getting a trace file with the following entries
under background_dump_dest
Destination LOG_ARCHIVE_DEST_3 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_3 is in MAXIMUM PERFORMANCE mode
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
Destination LOG_ARCHIVE_DEST_3 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_3 is in MAXIMUM PERFORMANCE mode
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
Destination LOG_ARCHIVE_DEST_3 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_3 is in MAXIMUM PERFORMANCE mode
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
I don't want these trace files. How do I avoid them?
June 27, 2006 - 2:44 pm UTC
please contact support for something like this.
tkprof in production environment
Suhail Ahmad, November 23, 2006 - 11:36 am UTC
Tom,
I was reading Pete Finnigan's book entitled "Oracle Security Step-by-Step" - a survival guide for Oracle security. In the book, Pete mentioned that the tkprof utility should be removed from the production environment.
Any comment? What could go wrong if a hacker gets access to the trace file?
Thanks
November 24, 2006 - 6:30 pm UTC
tkprof removed? why? it is a program that reads an input and writes an output.
All I need is the trace file, I can use my own tkprof. I'd ask Pete why "tkprof" itself should be removed, it is just a little reporting program. I cannot think of any reason why it would be removed.
Now, the privilege to enable trace, that might be something to consider, but tkprof itself? I cannot imagine a reason (but always willing to learn)
A reader, December 04, 2006 - 2:56 pm UTC
Is there any way I can trace the query results sent to the client application?
I want to see the following lines in a trace, if possible.
Thank you,
V
select * from dual;
D
-
X
December 05, 2006 - 9:38 pm UTC
not really, not on the server - the client network trace could, but not the server.
Associating slave traces with the master trace
Philippe, February 11, 2007 - 5:26 am UTC
Hi tom,
When I use tkprof on parallel query I get 5 traces: the master trace in udump and the slave traces in bdump.
I've made backups of several traces for distinct SQL statements, and now I don't know how to associate the slaves with the master (I've lost the timestamps on the files).
How can I associate the slave traces with the master?
Thank you.
load temp table vs frename
A reader, June 07, 2017 - 12:10 pm UTC
That's a nice workaround - clever.
However, what about newer methods like utl_file (instead of utl_dump_dest)? There's an internal performance team here that would naturally need access to traces of traced sessions (not all trace files).
By creating a staging directory, representing it logically in the database, and using utl_file.frename, I can move large generated trace files into the performance team's directory space and allow them to generate tkprof reports as well as inspect the raw traces.
By using a PL/SQL procedure, I could put in standards and checks that constrain their security - so, for example, don't allow <background_process> trace files to be moved.
In this way, there's no loading process, no temp segments for (potentially) large trace files - and easier segregation away from the diag directory.
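A sketch of that frename idea; the directory object names and the background-process name check are hypothetical, purely for illustration:

```sql
-- DIAG_TRACE_DIR / PERF_TEAM_DIR are assumed DIRECTORY objects.
create or replace procedure move_trace( p_trace in varchar2 )
as
begin
    -- hypothetical guard: refuse background-process traces by name
    if p_trace like '%\_bg\_%' escape '\' then
        raise_application_error( -20001,
            'background process traces may not be moved' );
    end if;
    utl_file.frename( src_location  => 'DIAG_TRACE_DIR',
                      src_filename  => p_trace,
                      dest_location => 'PERF_TEAM_DIR',
                      dest_filename => p_trace,
                      overwrite     => false );
end;
/
```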
Your thoughts please?
June 08, 2017 - 1:38 am UTC
Or just put some appropriate views on top of
V$DIAG_TRACE_FILE
V$DIAG_TRACE_FILE_CONTENTS
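A sketch of such a view; the naming filter and grantee are hypothetical, and the column names (TRACE_FILENAME, PAYLOAD) are per the 12c documentation:

```sql
-- Expose only selected trace files' contents to the performance team
create or replace view perf_team_traces as
select trace_filename, payload
  from v$diag_trace_file_contents
 where trace_filename like '%PERF%.trc';   -- hypothetical naming convention

grant select on perf_team_traces to perf_team;  -- hypothetical role
```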
A reader, June 08, 2017 - 9:20 am UTC
Hi,
There doesn't seem to be a V$DIAG_TRACE_FILE_CONTENTS view in the 11.2.0.4 dictionary - unless it's built via some explicit rdbms script?
Thanks
June 09, 2017 - 2:17 am UTC
All the more reason to upgrade :-)
V$DIAG_TRACE_FILE_CONTENTS empty but V$DIAG_TRACE_FILE has trace files
Narendra, March 05, 2023 - 12:11 am UTC
Hello Connor/Chris,
Oracle 19.16 Exadata Cloud @ Customer
Would you know what setup would cause me to not see any data while querying V$DIAG_TRACE_FILE_CONTENTS from a PDB even after being able to view trace file references in V$DIAG_TRACE_FILE view?
I tried generating few 10046 and 10053 traces in a PDB and I could see records for the trace files in V$DIAG_TRACE_FILE view but no data at all in V$DIAG_TRACE_FILE_CONTENTS view?
Thanks in advance
March 06, 2023 - 2:54 pm UTC
I do not, sorry. Please take this up with support.