DBMS_PROFILER shows UTL_FILE is performance issue


Question and Answer

Tom Kyte

Thanks for the question, Andrew.

Asked: November 09, 2009 - 4:01 pm UTC

Last updated: November 10, 2009 - 7:48 am UTC

Version: 11.1.0.7

Viewed 1000+ times

You Asked

Tom,
We have a PL/SQL program that was taking an excessive amount of time. We ran it in conjunction with dbms_profiler to see if anything could be tuned.
The profsum report shows that UTL_FILE is taking most of the time (~93%).
However, no details are shown for the supplied package even though O7_DICTIONARY_ACCESSIBILITY=true.


UNIT_OWNER                       UNIT_NAME                 SECONDS   PERCENTAG
-------------------------------- ------------------------- --------- ---------
SYS                              UTL_FILE                     946.42     92.79

Unit #4: SYS.UTL_FILE - Total time:    946.42 seconds



SQL> show parameter dictionary

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
O7_DICTIONARY_ACCESSIBILITY          boolean     TRUE



Is there any way to determine why UTL_FILE is the problem here?
The other code that actually calls utl_file does not show up as problematic in the report, but I am not sure whether that is correct or whether the excess time is simply aggregated into UTL_FILE's time.
The PL/SQL code is heavily instrumented using UTL_FILE via a wrapper package, but I would hope the instrumentation is not the sole cause of the performance issue.

Any help with DBMS_PROFILER would be appreciated. Or should I abandon profiler and move to using DBMS_HPROF?

Thanks
Andrew

and Tom said...

... DBMS_PROFILER shows UTL_FILE is performance issue...

no, it shows you are spending time there - but that may well be unavoidable. Please remember when looking at performance related metrics:

THERE WILL ALWAYS BE SOMETHING AT THE TOP

always - in statspack/awr - there will ALWAYS be a "top five timed events" section - ALWAYS.


... Is there any way to determine why UTL_FILE is the problem here? ...
you spend most of your time calling it - that is why. You'd need to look at your code to ask "am I doing this most efficiently"

for example:

-- l_file is an open utl_file.file_type handle
for x in ( select * from t ) 
loop
   utl_file.put( l_file, x.c1 );
   utl_file.put( l_file, ' ' );
   utl_file.put_line( l_file, x.c2 );
end loop;

that would be horribly inefficient, this:

for x in ( select * from t ) 
loop
   utl_file.put_line( l_file, x.c1 || ' ' || x.c2 );
end loop;


would be better and:

-- buffer many rows and write them in one call; the file must be opened
-- with max_linesize large enough for the buffer (32767 is the maximum)
l_buffer := null;
for x in ( select * from t ) 
loop
   l_buffer := l_buffer || chr(10) || x.c1 || ' ' || x.c2;
   if ( length(l_buffer) >= 32760 - (max_len_c1 + max_len_c2 + 5) )
   then
       utl_file.put_line( l_file, l_buffer );
       l_buffer := null;
   end if;
end loop;
if ( l_buffer is not null )
then
   utl_file.put_line( l_file, l_buffer );
end if;


would probably be "the best"
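For completeness, a minimal sketch of the setup the examples above assume - opening and closing the file handle. The directory object name and file name here are hypothetical; what matters is passing max_linesize to fopen so the buffered writes fit:

```sql
declare
   l_file  utl_file.file_type;
begin
   -- 'MY_DIR' is a hypothetical directory object; 32767 is the
   -- maximum line size and must cover the largest buffered write
   l_file := utl_file.fopen( 'MY_DIR', 'output.txt', 'w', 32767 );
   utl_file.put_line( l_file, 'hello' );
   utl_file.fclose( l_file );
end;
/
```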



but basically, in your case, since UTL_FILE is wrapped and you have no insight into the code, you have what you have as far as metrics there. What if you knew that you spend 50% of your time on line 1323 in UTL_FILE - would that be of any use to you? (rhetorical question....) You are in fact getting the details for that supplied package - at least as many as it makes sense to give you; more detail would provide you nothing.

... The plsql code is heavily instrumented utilizing UTL_FILE via a wrapper package but I would hope the instrumentation is not the sole cause of the performance issue. ...

that would depend on how you instrumented. Most of the time, you would have code that looks like:


if ( debug_flag )
then
   null;  -- the utl_file work happens only when debugging is enabled
end if;


but it sounds like you might be profiling with instrumentation turned on - which you shouldn't be doing - you should be running the code as you would in production.

So, switch off (DO NOT REMOVE - SWITCH OFF and if you cannot switch off - fix that, make it switchable) your instrumentation and run it as you would in production.


If yours is not switchable - there are many that are:

http://tylermuth.wordpress.com/2009/11/03/logger-a-plsql-logging-and-debugging-utility/

for example.


(if you have access to dbms_hprof, you'll be happier with that - yes, it'll give you a hierarchy - you'll see what units of yours spent the most time calling utl_file stuff - get sqldeveloper and use it)
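As a rough sketch of how dbms_hprof is driven (the directory object, trace file name, and profiled procedure here are hypothetical; the ANALYZE step loads the trace into the DBMSHP_* tables created by dbmshptab.sql, which tools like SQL Developer then report on):

```sql
declare
   l_runid  number;
begin
   -- 'HPROF_DIR' is a hypothetical directory object
   dbms_hprof.start_profiling( location => 'HPROF_DIR',
                               filename => 'myrun.trc' );
   my_program;  -- the code being profiled (hypothetical)
   dbms_hprof.stop_profiling;

   -- load the raw trace into the DBMSHP_* analysis tables
   l_runid := dbms_hprof.analyze( location => 'HPROF_DIR',
                                  filename => 'myrun.trc' );
   dbms_output.put_line( 'run id = ' || l_runid );
end;
/
```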





Comments

Andrew Markiewicz, November 10, 2009 - 9:37 am UTC

Thanks Tom.
That was the issue. After we shut off all the instrumentation, the program ran in an acceptable time. The programmer forgot that his sqlplus logon script automatically turned the instrumentation on.

We just got onto 11g, so I will take a look at dbms_hprof with SQLDeveloper.

Unfortunately I think our instrumentation technique is more along the lines of the first, "horribly inefficient" variety, but we do have a debugging package as a wrapper for utl_file.
Would it be beneficial to build a buffer within that debugging package that flushes to utl_file after it reaches a certain size or time threshold? Or is that not worth the effort, since instrumentation of that degree would only be on during troubleshooting?

Andrew
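The buffered wrapper Andrew describes could be sketched roughly like this (package, procedure, and variable names are all hypothetical; the file handle is assumed to be opened elsewhere in the package with max_linesize => 32767, and callers invoke flush before closing):

```sql
create or replace package body debug_log as
   g_file   utl_file.file_type;  -- opened elsewhere with max_linesize => 32767
   g_buffer varchar2(32767);

   procedure msg( p_text in varchar2 ) is
   begin
      -- flush before the buffer would exceed the 32767-byte line limit
      if length(g_buffer) + length(p_text) + 1 > 32767 then
         utl_file.put_line( g_file, g_buffer );
         g_buffer := null;
      end if;
      if g_buffer is not null then
         g_buffer := g_buffer || chr(10);
      end if;
      g_buffer := g_buffer || p_text;
   end msg;

   procedure flush is
   begin
      if g_buffer is not null then
         utl_file.put_line( g_file, g_buffer );
         g_buffer := null;
      end if;
   end flush;
end debug_log;
/
```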
