logging errors to os file in pl/sql without utl_file
Question and Answer

Tom Kyte

Thanks for the question, Manoj.

Asked: February 13, 2009 - 2:12 pm UTC

Last updated: April 28, 2011 - 7:58 am UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

1. The DBA has told us to discontinue UTL_FILE usage in PL/SQL because of the failover issue below: they want to fail over only the database to the second node but keep the file system on the first node (or an unplanned DB failover to the second node could occur), so UTL_FILE programs will fail. We have been asked to use some other alternative to UTL_FILE.

2. What is the simplest alternative in 9i to handle exceptions and log the exception data to error log files on the unix file system? Assume Sun Solaris is the OS and a korn shell script is used to call the procedures.

3. There are insert statements inside cursor loops for which any error has to be ignored so that only the successful inserts are committed, plus some DML outside the cursor loops as well.
How can I log these exceptions to the OS without the UTL_FILE package?
Please suggest the simplest technique.






and Tom said...

1) tell the dba - hey, just give us a directory to write to that is in the same place as the log files (alert log, trace files, etc). Those are files just like the ones we would be writing - and you need the alert/trace files to be available from the failed node - so you MUST have someplace for this.


Think about this please - whether you use UTL_FILE to write a file or you use "SPECIAL_MAGIC_PKG" to write the file - you would have the same exact PROBLEM. The problem is that you need a file system available from both nodes in the cluster; unless you have that, the same thing will happen with the special magic pkg as with UTL_FILE - anything that "writes files" would have this issue.


2) don't use files, use tables - why use a file? I would not consider a file for this, this screams "log me to a table so that I'm available to everyone - not just people that have OS file system access"

3) As I said - utl_file or not, it would not MATTER - anything that writes to a file would suffer the same issues (that should be pretty evident, this is not an issue with UTL_FILE, it is an issue with your file system setup)


But that said, you shouldn't be logging to a file - log to a table. This is the ONE valid use of an autonomous transaction - to log and commit errors into an error logging table.
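For illustration, a minimal sketch of that pattern (table, procedure and loop here are illustrative names made up for this example, not a tested production utility):

create table error_log
( logged_at   timestamp     default systimestamp,
  logged_by   varchar2(30)  default user,
  source_proc varchar2(100),
  err_code    number,
  err_msg     varchar2(4000)
);

create or replace procedure log_error
( p_source in varchar2, p_code in number, p_msg in varchar2 )
as
    pragma autonomous_transaction;   -- commits here do not touch the caller's transaction
begin
    insert into error_log ( source_proc, err_code, err_msg )
    values ( p_source, p_code, p_msg );
    commit;
end;
/

-- in the cursor loop from question 3: log the error, keep going
begin
    for rec in ( select id, val from source_table )
    loop
        begin
            insert into target_table ( id, val ) values ( rec.id, rec.val );
        exception
            when others then
                log_error( 'MY_PROC', sqlcode, sqlerrm );
        end;
    end loop;
    commit;   -- only the successful inserts are committed here
end;
/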


Comments

log errors

A reader, February 16, 2009 - 5:16 pm UTC


Bill, February 18, 2009 - 4:57 am UTC

How about writing the trace information to a flat file and 'front ending' the file with an external table?
Tom Kyte
February 18, 2009 - 3:28 pm UTC

still it would be a file on a file system and the root 'problem' is that they (apparently) don't have a file system that is available to all of the nodes in the cluster - so only one of the nodes would be able to read the file...

that is what they are trying to overcome.
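For reference, Bill's "front ending" idea is essentially the familiar alert-log external table trick - roughly the sketch below (directory path, file and table names are illustrative), with the same caveat Tom raises: the instance doing the reading still has to be able to see that file system.

create or replace directory log_dir as '/u01/app/oracle/admin/orcl/udump';

create table app_trace_ext
( text_line varchar2(4000) )
organization external
( type oracle_loader
  default directory log_dir
  access parameters
  ( records delimited by newline
    fields
    reject rows with all null fields
  )
  location ( 'app_trace.log' )    -- the flat file the trace routine wrote
)
reject limit unlimited;

-- then the trace data is queryable like any other table:
-- select text_line from app_trace_ext;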

regarding utl_file

Carbas, February 19, 2009 - 12:54 pm UTC

hi, we have a similar problem writing the result of a SQL query to a || delimited flat file.

we used to have many SQL files and now we're converting them into packages (sometimes with dynamic SQL)

I'm not sure if it's the best way to write flat files - I'm pretty sure it's better than hundreds of sql files in the OS... but is there a "best way" of doing this?

Thanks

Tom Kyte
February 19, 2009 - 1:54 pm UTC

if you are asking "what is the best way to dump data out into a flat file" my standard response is "to not do that - don't dump it out, find out why you think you need to and change something so that you don't"


If you want the slowest way - plsql + utl_file
If you want something faster - use SQLPlus
If you want the fastest, write a 3gl (pro*c, c, java, whatever) program

http://asktom.oracle.com/tkyte/flat/index.html

regarding utl_file

Carbas, February 20, 2009 - 2:59 pm UTC

Hi tom, well I'm running this shell script
..
..
filename=carbas_test.txt
touch /landing/temp/$filename
sqlplus -s $SF_ID >/landing/temp/$filename <<!
set newpage none
set space 0
set linesize 32767
set pagesize 50000
set echo off
set heading off
set colsep ''
set tab off
set feedback off
SELECT col1||col2||col3...etc
FROM table
WHERE ROWNUM < 1000000;
exit
!

and it's 6 times slower than a procedure with UTL_FILE... so... what am I doing wrong, it should be faster right?

Tom Kyte
February 21, 2009 - 9:11 pm UTC

are you running it

a) on your pc over a slow link with tons of data and getting a file created on YOUR machine

or

b) on your server, without using the network.


I'll bet (a) and in that case, just do (b)

debug.f

Charlie MuJiang, February 20, 2009 - 4:39 pm UTC

Regardless of RAC or standalone database, what will happen to debug.f in the future?
It calls utl_file to write the instrumentation (application trace) information.
Tom Kyte
February 21, 2009 - 9:11 pm UTC

so?

RAC or standalone.... so what? it uses utl_file - so?



extract data to flat file

Charlie MuJiang, February 20, 2009 - 4:43 pm UTC

To Carbas,

You may try bulk fetching - increase the arraysize in SQL*Plus.


SQL> show arraysize

SQL> set arraysize 1000

debug.f

Charlie MuJiang, February 23, 2009 - 5:36 pm UTC

You suggested writing the instrumentation trace data to a table in this post, not to a file, but debug.f writes to a file.
Will you publish a new version of debug.f that writes to a heap table?

I've started loving debug.f anyway - I use it everywhere.
Tom Kyte
February 23, 2009 - 5:50 pm UTC

for this user, because they did not have a universally accessible file system. Do you have that specific problem?

and since you use debug.f - you see how it is implemented. How hard would it be to make it write to a table instead of a file (it takes about five minutes to switch it over, we've done that). Or a database pipe to see 'real time output' in another window (about five minutes). Or <any output method you want> - about 5 minutes.

Because the entire IO bit is so modularized - you can replace it with whatever means you want in a very short period of time.
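To make that concrete, a hypothetical five-minute swap of the output routine (this is not debug.f's actual code, just a sketch of writing the line to a table instead of calling utl_file; all names are made up):

create table debug_tab
( ts         timestamp default systimestamp,
  session_id number,
  message    varchar2(4000)
);

create or replace procedure write_debug_line( p_message in varchar2 )
as
    pragma autonomous_transaction;   -- the debug line survives the caller's rollback
begin
    insert into debug_tab ( session_id, message )
    values ( sys_context( 'userenv', 'sessionid' ), p_message );
    commit;
end;
/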

debug.f

Charlie MuJiang, February 23, 2009 - 6:12 pm UTC

Great! Thanks for the tips.

I'm kind of lazy - I just want to grab well tested production code if you have it.
OK, I'll try to build some overloaded procedures/functions inside the debug.f package, or add one more parameter to specify the logging destination: file/table/queue/pipe... etc.

Writing to a table versus a file, which is faster

Vinu, April 20, 2009 - 5:04 am UTC

Are you saying that writing to a table from PL/SQL is faster compared to writing to a file using UTL_FILE? For long running batch programs I have noticed that the table inserts tend to slow down over a period of time.

I am in the process of developing an archiving utility in PL/SQL to move static data from active tables to history tables on a nightly basis, and was researching options for logging from PL/SQL. (I came across some open source utilities also.) Could you please elaborate on the various logging options available from PL/SQL?

-Vinu
Tom Kyte
April 21, 2009 - 1:12 pm UTC

... For long running batch programs I have
noticed that the table inserts tend to slow down over a period of time.
....

that is a really "not smart" comment.

If that is happening, you are waiting for something. Perhaps you have filled the cache and dbwr cannot keep it clean enough (you are waiting on buffer busy waits, free buffer waits). Fix that (tune dbwr, tune the cache, use direct path IO, etc etc etc).

Perhaps you are experiencing "checkpoint not complete, cannot allocate new log", fix that.

Perhaps you are experiencing "X", diagnose "X", fix "X".
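One way to put a name on that "X" is to look at what the batch session is actually waiting on - a sketch (substitute the batch session's SID for :sid):

select event, total_waits, time_waited
  from v$session_event
 where sid = :sid
 order by time_waited desc;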



... I am in the process of developing a archiving utility in PL/SQL to move static
data from active tables to history tables on a nightly basis and was
researching for options to logging from the PL/SQL. ...

if you have a lot of procedural code in this implementation - you have DONE IT WRONG. This should all be direct path inserts using insert /*+ APPEND */ as select followed by deletes to purge. If you do this slow by slow - you have failed.

For this, I would probably just have a custom set of very very very small tables whereby I would insert a few records after doing the bulk insert+delete operation to simply say "we moved X rows from T to T_HISTORY". I would record that in a table with timestamps and other relevant information. I would keep that information *forever*. I would want that information in the database (not in a file) so I could query it, trend from it, use it.

Inserts do not "slow down over time", something is happening - you've undersized SOMETHING in your system for the work you do, find it, fix it.
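A sketch of what that set-based nightly move might look like (the table names, the 12-month retention predicate and the audit table are all illustrative assumptions, not from the thread):

declare
    l_rows number;
begin
    insert /*+ append */ into t_history         -- direct path, set based
    select * from t
     where created_date < add_months( sysdate, -12 );

    l_rows := sql%rowcount;                     -- "we moved X rows from T to T_HISTORY"

    delete from t
     where created_date < add_months( sysdate, -12 );

    insert into archive_audit ( run_date, table_name, rows_moved )
    values ( systimestamp, 'T', l_rows );

    commit;
end;
/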

Link is broken

Eddie, April 28, 2011 - 7:40 am UTC

Hello Tom,

http://asktom.oracle.com/tkyte/flat/index.html

given link is broken. Can you please upload it again?
