log errors
A reader, February 16, 2009 - 5:16 pm UTC
Bill, February 18, 2009 - 4:57 am UTC
How about writing the trace information to a flat file and 'front ending' the file with an external table?
February 18, 2009 - 3:28 pm UTC
still, it would be a file on a file system, and the root 'problem' is that they don't have (apparently) a file system that is available to all of the nodes in the cluster - so only one of the nodes would be able to read the file...
that is what they are trying to overcome.
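That said, where a shared file system does exist, the "front ending" idea is straightforward. A minimal sketch - the directory object, table, and file names (trace_dir, trace_ext, trace.log) are hypothetical:

```sql
-- Expose a flat trace file as a read-only table.
CREATE DIRECTORY trace_dir AS '/u01/app/trace';

CREATE TABLE trace_ext (
  msg  VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY trace_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS ( msg CHAR(4000) )
  )
  LOCATION ( 'trace.log' )
)
REJECT LIMIT UNLIMITED;

-- Now the flat file can be queried like any other table:
SELECT * FROM trace_ext;
```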
regarding utl_file
Carbas, February 19, 2009 - 12:54 pm UTC
hi, we have a similar problem writing the result of a SQL query to a '||' delimited flat file.
we used to have many SQL files and now we're converting them into packages (sometimes with dynamic SQL).
I'm not sure if it's the best way to write flat files - I'm pretty sure it's better than hundreds of SQL files in the OS... but is there a "best way" of doing this?
Thanks
February 19, 2009 - 1:54 pm UTC
if you are asking "what is the best way to dump data out into a flat file", my standard response is "do not do that - don't dump it out. Find out why you think you need to, and change something so that you don't"
If you want the slowest way - PL/SQL + UTL_FILE.
If you want something faster - use SQL*Plus.
If you want the fastest - write a 3GL (Pro*C, C, Java, whatever) program.
http://asktom.oracle.com/tkyte/flat/index.html
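For completeness, the slowest option above looks roughly like this - a sketch only, assuming a DUMP_DIR directory object and the standard EMP demo table:

```sql
-- PL/SQL + UTL_FILE: simple, but row-by-row and therefore the slow option.
DECLARE
  l_file  UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN( 'DUMP_DIR', 'emp.dat', 'w', 32767 );
  FOR r IN ( SELECT empno || '||' || ename AS line FROM emp )
  LOOP
    UTL_FILE.PUT_LINE( l_file, r.line );
  END LOOP;
  UTL_FILE.FCLOSE( l_file );
END;
/
```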
regarding utl_file
Carbas, February 20, 2009 - 2:59 pm UTC
Hi Tom, I'm running this shell script:
..
..
filename=carbas_test.txt
touch /landing/temp/$filename
sqlplus -s $SF_ID >/landing/temp/$filename <<!
set newpage none
set space 0
set linesize 32767
set pagesize 50000
set echo off
set heading off
set colsep ''
set tab off
set feedback off
SELECT col1||col2||col3...etc
FROM table
WHERE ROWNUM < 1000000;
exit
!
and it's 6 times slower than a procedure with UTL_FILE... so what am I doing wrong? It should be faster, right?
February 21, 2009 - 9:11 pm UTC
are you running it
a) on your pc over a slow link with tons of data and getting a file created on YOUR machine
or
b) on your server, without using the network.
I'll bet (a) and in that case, just do (b)
debug.f
Charlie MuJiang, February 20, 2009 - 4:39 pm UTC
Regardless of RAC or standalone database, what will happen to debug.f in the future?
It calls utl_file to write the instrumentation (application trace) information.
February 21, 2009 - 9:11 pm UTC
so?
RAC or standalone.... so what? it uses utl_file - so?
extract data to flat file
Charlie MuJiang, February 20, 2009 - 4:43 pm UTC
To Carbas,
You may try bulk fetching - increase the arraysize in SQL*Plus:
SQL> show arraysize
SQL> set arraysize 1000
debug.f
Charlie MuJiang, February 23, 2009 - 5:36 pm UTC
You suggested writing the instrumentation trace data to a table in this post, not a file - but debug.f writes to a file.
Will you publish a new version of debug.f that writes to a heap table?
I've started loving debug.f anyway; I use it everywhere.
February 23, 2009 - 5:50 pm UTC
for this user, because they did not have a universally accessible file system. Do you have that specific problem?
and since you use debug.f - you see how it is implemented. How hard would it be to make it write to a table instead of a file (it takes about five minutes to switch it over, we've done that). Or a database pipe to see 'real time output' in another window (about five minutes). Or <any output method you want> - about 5 minutes.
Because the entire IO bit is so modularized - you can replace it with whatever means you want in a very short period of time.
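As an illustration of that swap - the table and procedure names below are made up for this sketch, they are not from debug.f itself:

```sql
-- A log table plus an autonomous-transaction writer: the file-IO call
-- in debug.f could be replaced with a call to write_debug_line.
CREATE TABLE debug_log (
  ts   TIMESTAMP DEFAULT SYSTIMESTAMP,
  msg  VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE write_debug_line( p_msg IN VARCHAR2 )
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- the log row survives even if the caller rolls back
BEGIN
  INSERT INTO debug_log ( msg ) VALUES ( p_msg );
  COMMIT;
END;
/
```

The autonomous transaction matters here: trace output should be committed independently of the application's transaction, just as a file write would be.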
debug.f
Charlie MuJiang, February 23, 2009 - 6:12 pm UTC
Great! Thanks for the tips.
I'm kind of lazy - I just want to grab well tested production code if you have some.
OK, I'll try to build some overloaded procedures/functions inside the debug.f package, or add one more parameter to specify the logging destination: file/table/queue/pipe... etc.
Writing to a table versus a file - which is faster?
Vinu, April 20, 2009 - 5:04 am UTC
Are you saying that writing to a table from PL/SQL is faster than writing to a file using UTL_FILE? For long running batch programs I have noticed that the table inserts tend to slow down over a period of time.
I am in the process of developing an archiving utility in PL/SQL to move static data from active tables to history tables on a nightly basis, and was researching options for logging from PL/SQL. (I came across some open source utilities also.) Could you please elaborate on the various logging options available from PL/SQL.
-Vinu
April 21, 2009 - 1:12 pm UTC
... For long running batch programs I have
noticed that the table inserts tend to slow down over a period of time.
....
that is a really "not smart" comment.
If that is happening, you are waiting for something. Perhaps you have filled the cache and dbwr cannot keep it clean enough (you are waiting on buffer busy waits, free buffer waits). Fix that (tune dbwr, tune the cache, use direct path IO, etc.)
Perhaps you are experiencing "checkpoint not complete, cannot allocate new log", fix that.
Perhaps you are experiencing "X", diagnose "X", fix "X".
... I am in the process of developing a archiving utility in PL/SQL to move static
data from active tables to history tables on a nightly basis and was
researching for options to logging from the PL/SQL. ...
if you have a lot of procedural code in this implementation - you have DONE IT WRONG. This should all be direct path inserts using insert /*+ APPEND */ as select followed by deletes to purge. If you do this slow by slow - you have failed.
For this, I would probably just have a custom set of very very very small tables whereby I would insert a few records after doing the bulk insert+delete operation to simply say "we moved X rows from T to T_HISTORY". I would record that in a table with timestamps and other relevant information. I would keep that information *forever*. I would want that information in the database (not in a file) so I could query it, trend from it, use it.
Inserts do not "slow down over time", something is happening - you've undersized SOMETHING in your system for the work you do, find it, fix it.
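A sketch of the pattern described above - the table and column names (t, t_history, archive_log, created) are hypothetical:

```sql
-- Bulk archive: direct path insert into history, purge the source,
-- then record what happened in a small audit table kept forever.
DECLARE
  l_rows  PLS_INTEGER;
BEGIN
  INSERT /*+ APPEND */ INTO t_history
    SELECT * FROM t WHERE created < SYSDATE - 90;
  l_rows := SQL%ROWCOUNT;

  DELETE FROM t WHERE created < SYSDATE - 90;

  INSERT INTO archive_log ( run_ts, src_table, dst_table, rows_moved )
  VALUES ( SYSTIMESTAMP, 'T', 'T_HISTORY', l_rows );

  COMMIT;
END;
/
```

No procedural row-by-row loop anywhere: the work is two set operations, and the only PL/SQL is the bookkeeping around them.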
Link is broken
Eddie, April 28, 2011 - 7:40 am UTC
April 28, 2011 - 7:58 am UTC