Application logging and error handling

Question and Answer

Tom Kyte

Thanks for the question, Scott.

Asked: June 21, 2003 - 3:51 pm UTC

Last updated: May 14, 2008 - 3:50 pm UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

Hi Tom

Firstly I'd like to thank you for providing such a helpful and informative site for all of us.

Next I'd like to get your "two cents worth" on a debate that is currently going on at the site where I'm working. The subject of the debate covers the spectrum of application logging, error handling, and a development framework.

One school of thought (the developers) likes the idea of having a whole lot of generic API functions and procedures to handle application event logging, error logging, and so on. The merit of this approach, they say, is that it enables them to create standard code templates and methodologies which make developing new parts of applications quicker and easier, because many of the "housekeeping" functions already exist. Therefore the only "new" code required is whatever is needed to implement the new features or requirements in the application.

The other school of thought on this (mainly the DBAs) is that this approach is a terrible idea. They argue that a piece of code (such as the code for a feed from a source system into a data warehouse) should be largely standalone, so that all the actions the code is taking are clearly visible and understandable to the person supporting it. The idea is to stop the support person having to jump around through a bunch of other procedures and functions to work out what is going on.

To give some context to the debate: the application developers work in-house solely on this system, while the DBA function has been outsourced to a large, faceless outsourcer. This means that DBA support is provided by a generic pool of DBAs - who aren't always all that skilled, conscientious, or interested - and who aren't necessarily going to be familiar with the system or the code running within it.

So I was wondering out of interest what your thoughts were on this debate. Do you favour code that stands alone (which may mean repetition of things like logging functions and so on) and is easily understood by someone who is looking at it for the first time? Or do you favour a modular, somewhat more generic approach that allows for more rapid development, but a potentially steeper learning curve for support people?

From reading this site I get the impression you've seen the full spectrum of development methodologies, theories, frameworks, and so on. Based on your experiences, what general recommendations would you make with regard to application logging (e.g. the logging of application events), debug information, and error handling? Are generic templates and standard API calls a good idea, or do you favour a bespoke approach that would mean solving each problem on its merits, even if this meant having to reinvent the wheel on occasion?

Thanks in advance for any advice or thoughts you might have.
Cheers
Scott

and Tom said...

why are DBAs digging into code?

I am totally into modular code.

I am totally into reuse of code.

Tell me -- do you avoid the use of DBMS_SQL because it is a utility package?
Or UTL_FILE?
Or DBMS_LOB?
Or <any dbms_*, utl_* package here>?

They are the same concept.

Especially for something as straightforward as error logging and such -- it would be so much better to be modularized. Suppose 6 months from now, instead of logging to a FILE (assuming they are now -- doesn't matter) they need to log to a table? You want to dig through the code?

Suppose they are logging to a file using a naming convention and directory path today. Next week, it must change to a different convention/location. You want to dig through that code?


I push things like this -- I've got a cool debug package they can start with if you like. I centralized all of that stuff myself and use it over and over.
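
To make that concrete, here is a minimal sketch of what such a centralized logging call could look like (the names app_log and log_util are invented for illustration; the debug package mentioned above is a much more complete version of the same idea). Every caller just calls log_util.msg -- if the destination ever has to change from a table to a file or anything else, only the package body changes:

create table app_log
( ts        timestamp      default systimestamp,
  username  varchar2(30)   default user,
  msg       varchar2(4000)
);

create or replace package log_util
as
    procedure msg( p_msg in varchar2 );
end;
/

create or replace package body log_util
as
    procedure msg( p_msg in varchar2 )
    is
        -- log in its own transaction so a rollback in the caller
        -- does not erase the log entry
        pragma autonomous_transaction;
    begin
        insert into app_log( msg ) values ( substr( p_msg, 1, 4000 ) );
        commit;
    end;
end;
/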


Comments

Application logging and error handling

Scott, June 21, 2003 - 7:08 pm UTC

I agree with pretty much all you are saying. I've been on the modular/templated/code reuse side of the debate.

The debug package you refer to sounds like a good place to start, so if you wouldn't mind posting it, I'm sure I and all the readers of your site wouldn't mind borrowing the ideas from it.

Why are the DBAs digging into the code, you ask? Long story... it all starts with an incredibly badly negotiated SLA for an outsourcing arrangement, and let's just say that nobody but the outsourcer lives happily ever after.

Thanks for your thoughts
Scott

Tom Kyte
June 21, 2003 - 9:05 pm UTC

get this presentation

http://asktom.oracle.com/pls/ask/z?p_url=http%3A%2F%2Fasktom.oracle.com%2F%7Etkyte/NOUG.tgz&p_cat=NOUG_SLIDES&p_company=10

the debug.f stuff is in there. (use dbms_application_info too!)
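
For readers not familiar with dbms_application_info, here is a tiny invented example of the kind of instrumentation being suggested -- it costs next to nothing and makes the current step of a long-running job visible in v$session while it runs:

create or replace procedure load_feed
as
begin
    -- tag the session so module/action show up in v$session and v$sql
    dbms_application_info.set_module( module_name => 'load_feed',
                                      action_name => 'starting' );

    dbms_application_info.set_action( 'loading staging table' );
    -- ... load ...

    dbms_application_info.set_action( 'merging into warehouse' );
    -- ... merge ...

    -- clear the tags on the way out
    dbms_application_info.set_module( null, null );
end;
/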

Error logging

Darren, June 22, 2003 - 10:43 am UTC

Hi,

I use an object (type) throughout my code to do error logging.

eg:

begin

myLogger.RecordPoint(8000, 'any optional notes');

insert into blah
(cols)
values (values..);

...and so on..

and (only) in the top level caller:

exception
when others
then
myLogger.writeLog;
raise; -- or redirect to an error screen (if web based);
end top_level;

The recordPoint method just keeps the ID in memory, and if I have a full debug flag on it does format_call_stack() too.
The writeLog method writes out (autonomously) to an error table I have defined, with that remembered number and other relevant data.
I have a view sitting on that table, plus a parent table that matches the errorPoint number to package/procedure information, so our app support guys can see where the fault occurred, the sqlerrm, and so on.

I find it works really well in practice, and I can change the way the error logger works without affecting the main procedural code.
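
For anyone who wants to try the same pattern, a stripped-down sketch of such a type might look like this (error_logger, error_log and the attribute names are invented; unlike Darren's version, writeLog here takes the error text as a parameter, and his real implementation also has a parent table mapping each point ID to package/procedure details):

create table error_log
( logged_at  timestamp      default systimestamp,
  point_id   number,
  notes      varchar2(4000),
  err_msg    varchar2(4000)
);

create or replace type error_logger as object
(
    last_point  number,
    last_notes  varchar2(4000),
    member procedure recordPoint( p_id in number, p_notes in varchar2 default null ),
    member procedure writeLog( p_err_msg in varchar2 default null )
);
/

create or replace type body error_logger
as
    member procedure recordPoint( p_id in number, p_notes in varchar2 default null )
    is
    begin
        -- cheap: just remember where we are
        self.last_point := p_id;
        self.last_notes := p_notes;
    end;

    member procedure writeLog( p_err_msg in varchar2 default null )
    is
        -- autonomous so the log row survives the caller's rollback
        pragma autonomous_transaction;
    begin
        insert into error_log( point_id, notes, err_msg )
        values ( self.last_point, self.last_notes, substr( p_err_msg, 1, 4000 ) );
        commit;
    end;
end;
/

Usage then matches the snippet above: declare myLogger error_logger := error_logger( null, null ), call myLogger.recordPoint before each step, and call myLogger.writeLog( sqlerrm ) only in the top-level WHEN OTHERS handler before re-raising.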

Something that may be useful to someone

malcolm, June 24, 2003 - 8:26 am UTC

http://jakarta.apache.org/log4j
http://jakarta.apache.org/log4j/docs/manual.html

An open source java logging framework from the folks at the apache jakarta project. I haven't tried it yet, but I've been meaning to load the classes into Oracle and create a pl/sql api.


Table logging Vs. File Logging

A reader, March 31, 2005 - 11:35 am UTC

Hi Tom, regarding your statement:
'Suppose 6 months from now, instead of logging to a FILE (assuming they are now -- doesn't matter) they need to log to a table?'

What is your opinion on applications doing error logging to a db table? Doesn't this belong in an OS file system? Currently one of our apps logs to a table, but I'm asking them to change it to OS-based logging, since sometimes they set the log level very high, which produces about 1 million rows/day. I do a nightly purge to delete rows older than x days. I'm just concerned about deleting 1 million rows at a stretch - is this OK to do?

Thanks as always.

Tom Kyte
March 31, 2005 - 11:41 am UTC

i like stuff in a table.

file systems were born to store files.

databases to store data.

is this data or files?

All of my stuff goes into tables.

Now, there is "debug" stuff -- like Oracle trace information. That can go into a file, but things that need to be recorded, like audit trails, belong in the database. Every click on this site goes there.

perhaps you need to ask them to partition their log table so you can purge by day of week, or week, or month or whatever by simply truncating?
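
To illustrate that suggestion (table and partition names invented), a log table range partitioned by month makes the nightly purge a quick DDL instead of generating undo and redo for a million deletes:

create table feed_log
( log_date  date default sysdate,
  msg       varchar2(4000)
)
partition by range ( log_date )
( partition p_2005_03 values less than ( to_date('01-apr-2005','dd-mon-yyyy') ),
  partition p_2005_04 values less than ( to_date('01-may-2005','dd-mon-yyyy') ),
  partition p_2005_05 values less than ( to_date('01-jun-2005','dd-mon-yyyy') )
);

-- the "purge": truncate the oldest partition
alter table feed_log truncate partition p_2005_03;

-- housekeeping adds the next partition ahead of time
alter table feed_log add partition p_2005_06
    values less than ( to_date('01-jul-2005','dd-mon-yyyy') );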

Alberto Dell'Era, March 31, 2005 - 2:22 pm UTC

> What is your opinion on applications doing error logging
> to a db table? Doesn't this belong to a os file system?

For what it's worth, I too log to tables, and I've found it immensely useful ... since I can use the full power of SQL to mine the log. A bit of substr/instr, group by, some analytics ... and I've answered in minutes any question (about "trend" info, average time to process, etc etc).
Sure, you can do it with sed and awk, but it's more work (and what if you're on WinZozz?)

Not to mention that it's probably faster to insert than to log to the filesystem with utl_file - and file logging is less secure (who monitors the directory you log to? what if it fills? who can access it?) - and you can't read the log unless you get access to the instance machine account - and ...
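
As a small example of that kind of mining (assuming a hypothetical error_log table with a logged_at column), a trend report is one query -- rather more typing with sed and awk:

-- errors per day, with a 3-day moving average to show the trend
select trunc(logged_at)                        log_day,
       count(*)                                errors_that_day,
       round( avg(count(*)) over
                ( order by trunc(logged_at)
                  rows 2 preceding ), 1 )      three_day_avg
  from error_log
 group by trunc(logged_at)
 order by trunc(logged_at);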

autonomous logging

ali, May 13, 2008 - 6:12 pm UTC

I've flip-flopped on the log files vs. log tables argument in the past, but have been back on the tables side for a while now.

I like to write to log tables using autonomous transactions.
...
code
...
log(...)
...
code
...
log(...) - etc

This is all fine and dandy when I am doing a few rows at a time, but ideally I'd like to do a bulk (forall) insert, which I can't see how to do with an autonomous transaction.

something like

...
code
for i...
log_tab_index(i).str:= 'my log info';
log_tab_index(i).time:=systimestamp;
code
end loop;
bulk_log(log_tab_index);
etc

where bulk_log takes the pre-populated log tab and whacks them into my logging table.

any suggestions for alternative options to keep autonomous transactions and get better performance than looping around the tab_index with individual inserts?

Tom Kyte
May 14, 2008 - 10:31 am UTC

this should be rather easy to do?


you have a procedure "buffered_log_write", it takes a string and adds it to a plsql table variable in your log package.

you have a procedure "write_buffered_log", it takes that array, array inserts it, commits and empties the array.

array inserts it

ali, May 14, 2008 - 11:24 am UTC

exactly which syntax were you thinking of for your 'array insert' here, with autonomous transactions?

I'm using 9iR2 if that makes any difference

Tom Kyte
May 14, 2008 - 3:50 pm UTC

the only array insert there is in plsql? forall??

ops$tkyte%ORA9IR2> create table log ( id number primary key, msg varchar2(80) );

Table created.

ops$tkyte%ORA9IR2> create sequence s;

Sequence created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create or replace package log_pkg
  2  as
  3          procedure buffered_log_write( p_msg in varchar2 );
  4          procedure flush_buffered_log;
  5  end;
  6  /

Package created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create or replace package body log_pkg
  2  as
  3
  4  type array is table of varchar2(80) index by binary_integer;
  5
  6  g_data array;
  7
  8  procedure buffered_log_write( p_msg in varchar2 )
  9  is
 10  begin
 11          g_data( g_data.count+1 ) := p_msg;
 12  end;
 13
 14
 15  procedure flush_buffered_log
 16  is
 17          pragma autonomous_transaction;
 18  begin
 19          forall i in 1 .. g_data.count
 20                  insert into log(id,msg) values (s.nextval,g_data(i));
 21          commit;
 22          g_data.delete;
 23  end;
 24
 25  end;
 26  /

Package body created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> begin
  2     log_pkg.buffered_log_write( 'hello' );
  3     log_pkg.buffered_log_write( 'world' );
  4     log_pkg.flush_buffered_log;
  5  end;
  6  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> select * from log;

        ID
----------
MSG
-------------------------------------------------------------------------------
         1
hello

         2
world

forall

ali, May 15, 2008 - 4:22 am UTC

thanks tom - got myself sidetracked when I first looked at this as I was trying to use forall with individual members of my array.

A quick read of this example has made me go back and get mine working the way I wanted in the first place.

thanks
