Question and Answer

Connor McDonald

Thanks for the question, Marc.

Asked: January 29, 2001 - 9:27 am UTC

Last updated: August 04, 2017 - 6:17 am UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

Dear Tom,

I would like to monitor ORA- messages on a database without having access to the filesystem (to the alert file...).
Is there a view which allows that, or do you have an idea?
Thanks for your help

Marc

and Connor said...

There is no view but if you have an account with:

o select on v_$parameter
o select on v_$thread
o create any directory

(you could remove the dependencies on v$parameter and v$thread but you would have to supply the alert log name to this routine) you could use a setup like the following:

drop table alert_log;

create global temporary table alert_log
( line int primary key,
text varchar2(4000)
)
on commit preserve rows
/

create or replace procedure load_alert
as
    l_background_dump_dest  v$parameter.value%type;
    l_filename              varchar2(255);
    l_bfile                 bfile;
    l_last                  number;
    l_current               number;
    l_start                 number := dbms_utility.get_time;
begin
    select a.value, 'alert_' || b.instance || '.log'
      into l_background_dump_dest, l_filename
      from v$parameter a, v$thread b
     where a.name = 'background_dump_dest';

    execute immediate
        'create or replace directory x$alert_log$x as
         ''' || l_background_dump_dest || '''';

    dbms_output.put_line( l_background_dump_dest );
    dbms_output.put_line( l_filename );

    delete from alert_log;

    l_bfile := bfilename( 'X$ALERT_LOG$X', l_filename );
    dbms_lob.fileopen( l_bfile );

    l_last := 1;
    for l_line in 1 .. 50000
    loop
        -- progress info: line number, elapsed seconds, average time per line
        dbms_application_info.set_client_info( l_line || ', ' ||
            to_char(round((dbms_utility.get_time-l_start)/100, 2 ) )
            || ', ' ||
            to_char((dbms_utility.get_time-l_start)/l_line) );

        -- locate the next linefeed (hex 0A) in the file
        l_current := dbms_lob.instr( l_bfile, '0A', l_last, 1 );
        exit when (nvl(l_current,0) = 0);

        insert into alert_log
        ( line, text )
        values
        ( l_line,
          utl_raw.cast_to_varchar2(
              dbms_lob.substr( l_bfile, l_current-l_last+1, l_last ) )
        );
        l_last := l_current+1;
    end loop;

    dbms_lob.fileclose(l_bfile);
end;
/


It'll use a bfile to load up a temporary table with each line of your alert log. On my particular system, this loaded 50,000 lines in about 1.5 minutes, so it's not the speediest, but it works.

Enhancements:

- this should be a package. The package would remember where it left off bytewise, so that I could call a refresh that would load just NEW lines instead of the entire file again. That way, if I kept a session open for a while, I could load it once and just refresh during the day.

- this should let me pass in some percentage of the file to load -- e.g. load the last 10% of the alert log. That would make this apparently faster as well. You would just use dbms_lob.getlength to figure out what offset 10% of the file represents and start there (the l_last := 1 at the top of the loop would be changed).
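A minimal sketch of that first enhancement might look like this. All names here are hypothetical, and the reading loop itself is elided (it would be the same bfile loop as in load_alert above); the point is only that a package variable can carry the last byte offset for the life of the session:

```sql
-- Sketch only: a package remembering where the last read stopped,
-- so refresh re-reads only bytes added since the previous call.
create or replace package alert_pkg
as
    procedure load;     -- full (re)load of the alert_log table
    procedure refresh;  -- append only lines added since the last call
end;
/

create or replace package body alert_pkg
as
    g_last_offset number := 1;  -- next byte offset to read from
    g_last_line   number := 0;  -- last LINE value inserted

    procedure read_from( p_offset in number )
    is
    begin
        -- same bfile loop as load_alert, but starting at p_offset,
        -- numbering lines from g_last_line+1, and saving the final
        -- offset and line number back into the two globals
        null;  -- body elided; see load_alert above
    end;

    procedure load
    is
    begin
        delete from alert_log;
        g_last_offset := 1;
        g_last_line   := 0;
        read_from( g_last_offset );
    end;

    procedure refresh
    is
    begin
        read_from( g_last_offset );
    end;
end;
/
```

Since package state lives only as long as the session, this gives exactly the "keep a session open and refresh during the day" behaviour described above; a table would be needed to persist the offset across sessions.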

Addenda 2017
============
In more recent versions of Oracle, the alert log is exposed directly via a SQL interface:

SQL> desc V$diag_alert_ext
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR                                               RAW(8)
 INDX                                               NUMBER
 INST_ID                                            NUMBER
 CON_ID                                             NUMBER
 ADR_PATH_IDX                                       VARCHAR2(445)
 ADR_HOME                                           VARCHAR2(445)
 ORIGINATING_TIMESTAMP                              TIMESTAMP(9) WITH TIME ZONE
 NORMALIZED_TIMESTAMP                               TIMESTAMP(9) WITH TIME ZONE
 ORGANIZATION_ID                                    VARCHAR2(67)
 COMPONENT_ID                                       VARCHAR2(67)
 HOST_ID                                            VARCHAR2(67)
 HOST_ADDRESS                                       VARCHAR2(49)
 MESSAGE_TYPE                                       NUMBER
 MESSAGE_LEVEL                                      NUMBER
 MESSAGE_ID                                         VARCHAR2(67)
 MESSAGE_GROUP                                      VARCHAR2(67)
 CLIENT_ID                                          VARCHAR2(67)
 MODULE_ID                                          VARCHAR2(67)
 PROCESS_ID                                         VARCHAR2(35)
 THREAD_ID                                          VARCHAR2(67)
 USER_ID                                            VARCHAR2(131)
 INSTANCE_ID                                        VARCHAR2(67)
 DETAILED_LOCATION                                  VARCHAR2(163)
 UPSTREAM_COMP_ID                                   VARCHAR2(103)
 DOWNSTREAM_COMP_ID                                 VARCHAR2(103)
 EXECUTION_CONTEXT_ID                               VARCHAR2(103)
 EXECUTION_CONTEXT_SEQUENCE                         NUMBER
 ERROR_INSTANCE_ID                                  NUMBER
 ERROR_INSTANCE_SEQUENCE                            NUMBER
 MESSAGE_TEXT                                       VARCHAR2(2051)
 MESSAGE_ARGUMENTS                                  VARCHAR2(515)
 SUPPLEMENTAL_ATTRIBUTES                            VARCHAR2(515)
 SUPPLEMENTAL_DETAILS                               VARCHAR2(515)
 PARTITION                                          NUMBER
 RECORD_ID                                          NUMBER
 FILENAME                                           VARCHAR2(515)
 LOG_NAME                                           VARCHAR2(67)
 PROBLEM_KEY                                        VARCHAR2(553)
 VERSION                                            NUMBER
 CON_UID                                            NUMBER
 CONTAINER_ID                                       NUMBER
 CONTAINER_NAME                                     VARCHAR2(33)

SQL> select originating_timestamp, message_text
  2  from V$diag_alert_ext
  3  where originating_timestamp > cast(sysdate-3/24 as timestamp);

ORIGINATING_TIMESTAMP
---------------------------------------------------------------------------
MESSAGE_TEXT
---------------------------------------------------------------------------
04-AUG-17 12.00.04.965000000 PM +08:00
Thread 1 advanced to log sequence 527 (LGWR switch)

04-AUG-17 12.00.04.971000000 PM +08:00
  Current log# 2 seq# 527 mem# 0: C:\ORACLE\ORADATA\DB122\REDO02.LOG

04-AUG-17 12.00.05.350000000 PM +08:00
Archived Log entry 526 added for T-1.S-526 ID 0x33fe90fc LAD:1
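To address the original question directly with this interface, a query along the following lines would surface recent ORA- messages with no filesystem access at all (the one-day window is arbitrary; adjust to taste):

```sql
-- ORA- messages from the alert log over the last 24 hours,
-- using the V$DIAG_ALERT_EXT view described above.
select originating_timestamp, message_text
from   v$diag_alert_ext
where  originating_timestamp > systimestamp - interval '1' day
and    message_text like '%ORA-%'
order  by originating_timestamp;
```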




Comments

It's improper to manage the alert files within the database

Jianhui (Frank), February 24, 2003 - 6:19 pm UTC

Tom,
Although you gave a method to load the alert file into the DB, the goal of the alert file is to log errors when the DB goes wrong, so it's not right to manage the alert file on top of the DB layer. For example, when your DB goes down or halts, how could you get the alert message from the DB?
It should be up to the OS level to check the alert files.
I use a shell script to rename the alert file for backup, then scan for important keywords in the file to generate an alert message (email or pager) to DBAs. Meanwhile, Oracle will generate a new alert file, which will be backed up and scanned when the next scheduled time comes.


Tom Kyte
February 25, 2003 - 8:59 am UTC

guess what -- if you cannot log into the database to view the alert log -- hmm, maybe that would be enough of an indication that the DB is down?

and if you read the question:

I would like to monitor ORA- messages on a Db whithout having access to the
filesystem (to the alert file...)


.......

Alert log in external table?

Will, February 25, 2003 - 12:22 pm UTC

Would you have any issues with using an external table to access an alert log from the DB?

Tom Kyte
February 25, 2003 - 8:13 pm UTC

nope, that would work nicely.
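For reference, an external table over the alert log might be set up along these lines. This is a sketch, not part of the original answer: the directory path, file name, and table name are examples that would need to be adapted to your own background_dump_dest and instance name.

```sql
-- Directory object pointing at the background dump destination
-- (path and names here are illustrative).
create or replace directory bdump_dir as '/u01/app/oracle/admin/ORCL/bdump';

-- One row per line of the alert log.
create table alert_log_ext
( text_line varchar2(4000) )
organization external
( type oracle_loader
  default directory bdump_dir
  access parameters
  ( records delimited by newline
    fields
    reject rows with all null fields
  )
  location ( 'alert_ORCL.log' )
)
reject limit unlimited;
```

Then monitoring becomes a simple query, e.g. `select * from alert_log_ext where text_line like '%ORA-%';`.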

Renaming alert log

Martin Guillen, March 01, 2004 - 12:14 pm UTC

Hi Tom: Is it bad to rename the alert log (e.g. mv alert_sid.log alert_sid.old) on a regular basis?
I want to do it for my check-alert-log script.
Oracle creates a new one if the file doesn't exist, but I want to be safe.
Thank You,
Martin.

Tom Kyte
March 01, 2004 - 12:32 pm UTC

you can do that, yes, most people do over time to roll the log.

it is "safe"

clarification

amit poddar, March 01, 2004 - 2:23 pm UTC

Hi,
You mentioned that we can rename the alert log without a problem.
I thought that Oracle keeps the file open throughout its life (until it is shut down). That would mean that even if we rename the file, Oracle would keep writing to the renamed file, since the renamed file would just be a hard link to the original inode and the running Oracle process would never know the file was renamed.

Could you please tell me where I am going wrong? I know I am wrong, since I have tested it and it works as you mentioned.

Following is the output from my test:

HOP7) fuser alert_HOP7.log1
alert_HOP7.log1: 44884 47740 48510 51344 51602 51860 52118 52376 52634 52892 53150
HOP7) fuser alert_HOP7.log
alert_HOP7.log:
HOP7)

You see the old file still has process attached to it but the new file does not have any process attached to it.




Tom Kyte
March 01, 2004 - 3:41 pm UTC

no, it opens for append, writes, and closes that file each time.


background trace files (like an smon, pmon, snp, etc) behave as you describe, but not the alert.

more clarification

amit poddar, March 01, 2004 - 3:59 pm UTC

Hi,

Then
why does fuser show the old alert log file still used by the processes

HOP7) fuser alert_HOP7.log1
alert_HOP7.log1: 44884 47740 48510 51344 51602 51860 52118 52376 52634 52892 53150
HOP7)

HOP7) ps -ef|grep 44884
oracle 44884 1 0 Feb 29 - 0:21 ora_pmon_HOP7
oracle 194860 181500 2 16:00:07 p6 0:00 grep 44884
HOP7) ps -ef|grep 47740
oracle 47740 1 0 Feb 29 - 0:15 ora_dbw0_HOP7
oracle 75384 181500 3 16:00:12 p6 0:00 grep 47740
HOP7) ps -ef|grep 48510
oracle 48510 1 1 Feb 29 - 3:29 ora_lgwr_HOP7
oracle 106416 181500 1 16:00:17 p6 0:00 grep 48510
HOP7) ps -ef|grep 51344
ps oracle 51344 1 0 Feb 29 - 0:51 ora_ckpt_HOP7
oracle 208240 181500 2 16:00:22 p6 0:00 grep 51344
-eHOP7) ps -ef|grep 51602
oracle 51602 1 0 Feb 29 - 0:51 ora_smon_HOP7
oracle 208242 181500 1 16:00:27 p6 0:00 grep 51602
HOP7) ps -ef|grep 51860
oracle 51860 1 0 Feb 29 - 0:00 ora_reco_HOP7
oracle 296136 181500 2 16:00:32 p6 0:00 grep 51860
HOP7) ps -ef|grep 52118
oracle 52118 1 0 Feb 29 - 0:00 ora_snp0_HOP7
oracle 310896 181500 2 16:00:37 p6 0:00 grep 52118
HOP7)

and so on

Tom Kyte
March 01, 2004 - 4:43 pm UTC

i'm not familiar with fuser

Alert log file size

A reader, March 01, 2004 - 7:49 pm UTC

Does alert log file size have any impact on performance? For example, if the alert log file is huge, the Oracle process has to open a BIG file, write the content, then close it. Right?

Tom Kyte
March 02, 2004 - 7:18 am UTC

I've never measured it, it would have to be really huge -- a seek to the end isn't that expensive (it does not have to read the entire file to find the end or anything)

OEM, maybe?

Dan, March 02, 2004 - 11:45 pm UTC

Getting back to the original question... OEM is kind of made for this sort of thing. We have a standard "Production DB" event we register against our production databases; one of the tests is for alert log activity. I can't recall what the criteria are, but basically if something boring gets cut to the alert log you get no notification, an "interesting" event makes a nice little yellow flag pop up, and a "very interesting" one (like an ORA-600) makes a red flag pop up (and emails/pages you if you want it to).

I used to be kind of down on OEM ("GUI - phooey!") but hey, it works, you're already paying for it, and most of us probably have better things to do with our life (...or at least the part of our life our employers are paying us for...) than reinvent this wheel.

P.S. lots of other cool event tests in there as well - broken jobs, full disks, etc. You can even get it to watch for corrupt blocks.

Or how about this approach

Dominic, January 18, 2005 - 12:21 pm UTC

I found this on the web. Is this a sensible alternative? It certainly looks neat and tidy. Obviously it needs amending for background_dump_dest etc. I am using 8.1.6.
Thanks

CREATE TYPE Alert_Row_Type AS OBJECT
( line NUMBER(10),
  text VARCHAR2(2000)
);
/

CREATE TYPE Alert_Type IS TABLE OF Alert_Row_Type;
/

CREATE OR REPLACE FUNCTION Get_Alert RETURN Alert_Type IS
    Alert_Tab Alert_Type := Alert_Type();
    l_file    UTL_FILE.file_type;
    l_line    NUMBER(10) := 1;
    l_text    VARCHAR2(2000);
BEGIN
    l_file := UTL_FILE.fopen('C:\oracle\admin\TSH1\bdump', 'tsh1ALRT.LOG', 'r');
    BEGIN
        LOOP
            utl_file.get_line(l_file, l_text);
            Alert_Tab.Extend;
            Alert_Tab(Alert_Tab.Last) := Alert_Row_Type(l_line, l_text);
            l_line := l_line + 1;
        END LOOP;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            NULL;
    END;
    UTL_FILE.fclose(l_file);
    RETURN Alert_Tab;
END;
/

CREATE OR REPLACE VIEW diy$alert AS
SELECT *
FROM Table(Cast(Get_Alert() As Alert_Type));





Tom Kyte
January 18, 2005 - 1:01 pm UTC

variation on a theme.

sure it works -- not sure if I'd want my entire alert log in a plsql collection variable.

in 9i, it would work as a pipelined function -- but then an external table would be the sensible approach -- making this "moot"

forgot a bit

Dominic, January 18, 2005 - 12:23 pm UTC

then you can select from the view diy$alert.



Another possibility...

Piotr, January 20, 2005 - 8:46 am UTC

Starting with Oracle 8i there is another possibility: have the errors logged outside of the alert log. Could you please comment on the SERVERERROR event? I have used AFTER SERVERERROR triggers, and they worked well, but I had to consider switching the trigger off during e.g. migration procedures.

Tom Kyte
January 20, 2005 - 10:39 am UTC

another possibility of what?
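For readers curious about the approach the reviewer describes, an AFTER SERVERERROR trigger might look like the following sketch. The table and trigger names are hypothetical, and, as the reviewer notes, you may want a way to disable it during maintenance:

```sql
-- Hypothetical error-logging table.
create table server_error_log
( logged_at  date default sysdate,
  username   varchar2(30),
  err_stack  varchar2(4000)
);

-- Fires whenever a server error is raised in the database; the
-- autonomous transaction lets the log row survive the failing
-- statement's rollback.
create or replace trigger log_server_errors
after servererror on database
declare
    pragma autonomous_transaction;
begin
    insert into server_error_log ( username, err_stack )
    values ( sys_context('userenv','session_user'),
             dbms_utility.format_error_stack );
    commit;
end;
/
```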



Good one

Jay, February 15, 2005 - 3:41 pm UTC


Reader

A reader, April 22, 2005 - 8:51 am UTC

Is it possible to query the alert_log table for ORA-
errors and, if found, to read the few lines preceding each one
in order to print the timestamp of the error?

Thanks

Tom Kyte
April 22, 2005 - 10:43 am UTC

reader

A reader, June 07, 2005 - 2:45 pm UTC

Is there a simple way to compare the dates in the alert log
(in character format) with sysdate-15/24/60?


SQL> select to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY'), to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY'), sysdate-90
  2  from alert_log where text like '___ ___ __ __:__:__ 20__%' and rownum < 100;
 
TO_DATE(SUBSTR(TEXT,1,24 TO_DATE(SUBSTR(TEXT,1,24 SYSDATE-90
------------------------ ------------------------ ------------------------
Wed Mar 02 11:46:34 2005 Wed Mar 02 11:46:34 2005 Wed Mar 09 14:33:44 2005
Wed Mar 02 11:47:53 2005 Wed Mar 02 11:47:53 2005 Wed Mar 09 14:33:44 2005
Wed Mar 02 11:47:53 2005 Wed Mar 02 11:47:53 2005 Wed Mar 09 14:33:44 2005
Wed Mar 02 11:48:11 2005 Wed Mar 02 11:48:11 2005 Wed Mar 09 14:33:44 2005
Wed Mar 02 11:48:11 2005 Wed Mar 02 11:48:11 2005 Wed Mar 09 14:33:44 2005
Wed Mar 02 11:48:11 2005 Wed Mar 02 11:48:11 2005 Wed Mar 09 14:33:44 2005
Thu Mar 03 07:07:18 2005 Thu Mar 03 07:07:18 2005 Wed Mar 09 14:33:44 2005


SQL> r
  1  select to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY'), to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY'), sysdate-90
  2* from alert_log where text like '___ ___ __ __:__:__ 20__%' and to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY') < sysdate-90
from alert_log where text like '___ ___ __ __:__:__ 20__%' and to_date(substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY') < sysdate-90
                                                                       *
ERROR at line 2:
ORA-01846: not a valid day of the week
 

Tom Kyte
June 07, 2005 - 2:58 pm UTC



select *
from alert_log
where case when text like '....' then to_date( ...) end < sysdate-90;


you cannot control the order of evaluation of things in a predicate, so use case to only TO_DATE when it makes sense to TO_DATE.
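Putting the pieces together with the pattern and format mask already used in this thread, the guarded query might look like this (a sketch assembled from the reviewer's own predicate):

```sql
-- The CASE expression ensures TO_DATE is applied only to lines
-- that actually start with a timestamp, avoiding ORA-01846/01847.
select line, text
from   alert_log
where  case
         when text like '___ ___ __ __:__:__ 20__%'
         then to_date( substr(text,1,24), 'Dy Mon DD HH24:MI:SS YYYY' )
       end < sysdate - 90;
```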

reader

A reader, June 08, 2005 - 7:13 am UTC

The following script seems to work.
However, it prints all the timestamps along with each
<ORA-, timestamp> pair. I'd like to list only the
<ORA-, timestamp> pairs. Is this simple to do?


SQL> l
  1  select text,
  2         lag(text,1) over (order by r) start_time
  3    from
  4    (
  5    select rownum r, text
  6    from alert_log
  7    where
  8      (
  9      text like 'ORA-%'
 10      or
 11      text like '___ ___ __ __:__:__ 20__%'
 12      )
 13      and line
 14      >
 15      (
 16      select min(line) from alert_log where
 17       case
 18        when text like '___ ___ __ __:__:__ 20__%' then
 19        to_date(substr(text,1,25),'Dy Mon DD HH24:MI:SS YYYY')
 20       end
 21       > sysdate-90
 22      )
 23*   )
 

Tom Kyte
June 08, 2005 - 9:07 am UTC

don't know what you mean by "ora-,timestamp" vs "<ora-,timestamp>" pairs really.

if you are looking for the ora- line and the closest timestamp to it, just lag, don't see the need for the subquery.



reader

A reader, June 08, 2005 - 10:37 am UTC

I meant ORA- and the closest timestamp as the <ora-,timestamp> pair.

I am getting output like

Wed May 25 15:50:42 2005
ORA-1507 signalled during: ALTER DATABASE CLOSE NORMAL...

Wed May 25 15:50:43 2005
Wed May 25 15:50:42 2005

Wed May 25 15:50:47 2005
Wed May 25 15:50:43 2005

Wed May 25 15:50:47 2005
Wed May 25 15:50:47 2005

I'd like to trim the output so that I get only the
ORA- line and the closest timestamp, and not print
timestamps without any ORA- line. Can this be done?

Tom Kyte
June 08, 2005 - 11:25 am UTC

so select the rows that have timestamps and ora- on them (the above). Call that q

then

select text, lag(text) over (order by line) last_text
from (Q) q;


and then

select *
from (that_query)
where text like 'ora%';



Great Thanks It works

A reader, June 08, 2005 - 12:48 pm UTC


Chronology of Events in Alert.log

Richard, August 03, 2005 - 6:45 am UTC

Hi,

The following is the tail-end of my Alert.log:

Wed Aug 3 09:05:25 2005
ARC0: Beginning to archive log# 4 seq# 506005
ARC0: Completed archiving log# 4 seq# 506005
Wed Aug 3 09:29:30 2005
Thread 1 advanced to log sequence 506007
Current log# 2 seq# 506007 mem# 0: /shared/tagtrak/database/redo/online/redo02.log
Wed Aug 3 09:29:30 2005
ARC0: Beginning to archive log# 1 seq# 506006
ARC0: Completed archiving log# 1 seq# 506006
Wed Aug 3 09:53:29 2005
Thread 1 advanced to log sequence 506008
Current log# 3 seq# 506008 mem# 0: /shared/tagtrak/database/redo/online/redo03.log
Wed Aug 3 09:53:29 2005
ARC0: Beginning to archive log# 2 seq# 506007
ARC0: Completed archiving log# 2 seq# 506007
Wed Aug 3 10:18:08 2005
Thread 1 advanced to log sequence 506009
Wed Aug 3 10:18:08 2005
ARC0: Beginning to archive log# 3 seq# 506008
Wed Aug 3 10:18:08 2005
Current log# 4 seq# 506009 mem# 0: /shared/tagtrak/database/redo/online/redo04.log
Wed Aug 3 10:18:14 2005
ARC0: Completed archiving log# 3 seq# 506008
Wed Aug 3 10:43:16 2005
Thread 1 advanced to log sequence 506010
Wed Aug 3 10:43:16 2005
ARC0: Beginning to archive log# 4 seq# 506009
Wed Aug 3 10:43:16 2005
Current log# 1 seq# 506010 mem# 0: /shared/tagtrak/database/redo/online/redo01.log
Wed Aug 3 10:43:22 2005
ARC0: Completed archiving log# 4 seq# 506009
Wed Aug 3 11:11:18 2005
Thread 1 advanced to log sequence 506011
Current log# 2 seq# 506011 mem# 0: /shared/tagtrak/database/redo/online/redo02.log
Wed Aug 3 11:11:18 2005
ARC0: Beginning to archive log# 1 seq# 506010
ARC0: Completed archiving log# 1 seq# 506010


Does each timestamp in the log apply to the message or messages that follow it? e.g. for the final 3 lines in the above, does Wed Aug 3 11:11:18 2005 apply to both ARC0: Beginning to archive log# 1 seq# 506010 AND ARC0: Completed archiving log# 1 seq# 506010 ?

I ask, as I would have thought that the beginning and completion of archiving a redo log would be slow enough to have a different beginning and completion timestamp.

Tom Kyte
August 03, 2005 - 11:42 am UTC

The time stamp is printed out for the line that immediately follows it.

Ok, but...

Richard, August 04, 2005 - 5:21 am UTC

So, does

Wed Aug 3 11:11:18 2005
ARC0: Beginning to archive log# 1 seq# 506010
ARC0: Completed archiving log# 1 seq# 506010

mean that the 2 ARC0 messages were emitted so close together that Oracle thought, "No point putting the timestamp twice; the 2 ARC0 messages happened too close together to measure"?

Tom Kyte
August 04, 2005 - 9:47 am UTC

it would seem so. Not 100% scientific, but:


Thu Aug 4 09:23:57 2005
Thread 1 advanced to log sequence 23
Current log# 4 seq# 23 mem# 0: /home/ora9ir2/oradata/ora9ir2/redoA.log
Thu Aug 4 09:23:57 2005
ARC0: Evaluating archive log 5 thread 1 sequence 22
ARC0: Beginning to archive log 5 thread 1 sequence 22
Creating archive destination LOG_ARCHIVE_DEST_1: '/tmp/1_22.dbf'
ARC0: Completed archiving log 5 thread 1 sequence 22



Thu Aug 4 09:29:31 2005
Thread 1 advanced to log sequence 24
Current log# 5 seq# 24 mem# 0: /home/ora9ir2/oradata/ora9ir2/redoB.log
Thu Aug 4 09:29:31 2005
ARC1: Evaluating archive log 4 thread 1 sequence 23
ARC1: Beginning to archive log 4 thread 1 sequence 23
Creating archive destination LOG_ARCHIVE_DEST_1: '/tmp/1_23.dbf'
ARC1: Completed archiving log 4 thread 1 sequence 23




Thu Aug 4 09:35:24 2005
Thread 1 advanced to log sequence 25
Current log# 4 seq# 25 mem# 0: /home/ora9ir2/oradata/ora9ir2/redoA.log
Thu Aug 4 09:35:24 2005
ARC0: Evaluating archive log 5 thread 1 sequence 24
ARC0: Beginning to archive log 5 thread 1 sequence 24
Creating archive destination LOG_ARCHIVE_DEST_1: '/tmp/1_24.dbf'
Thu Aug 4 09:35:48 2005
ARC0: Completed archiving log 5 thread 1 sequence 24



Note how the first two do not have any time information -- in fact, the messages popped out all at once (a simple alter system switch logfile on an idle system).

In the last one, however, the creating message popped out and, a while later, the completed one did, with a timestamp.


[tkyte@xtkyte-pc tkyte]$ ls -l /tmp/1*.dbf
-rw-rw---- 1 ora9ir2 ora9ir2 157184 Aug 4 09:23 /tmp/1_22.dbf
-rw-rw---- 1 ora9ir2 ora9ir2 17408 Aug 4 09:29 /tmp/1_23.dbf
-rw-rw---- 1 ora9ir2 ora9ir2 189558272 Aug 4 09:35 /tmp/1_24.dbf

so, if arch is able to create and complete more or less at the same time, the messages are getting flushed together and no extra timestamps are there.



ORA-01847 in alert.log

Anto, October 06, 2005 - 11:00 am UTC

Hi Tom,

We are getting these ORA-01847 error messages in alert.log. Is there any way we can prevent these errors from being recorded in alert.log, since they are user data-entry errors for date fields? And these errors are not always recorded; they appear in alert.log only in some cases.

Thu Oct 6 00:12:25 2005
Errors in file /sbclocal/app/oracle/admin/INMECP1/udump/inmecp1_ora_28295.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-01847: day of month must be between 1 and last day of month Thu Oct 6 00:12:26 2005 Errors in file /sbclocal/app/oracle/admin/INMECP1/udump/inmecp1_ora_28295.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-01847: day of month must be between 1 and last day of month Thu Oct 6 02:07:34 2005 Errors in ....



Tom Kyte
October 06, 2005 - 12:05 pm UTC

are you sure you don't have an event enabled explicitly asking for that to be recorded?

A reader, October 06, 2005 - 11:12 am UTC

Is it being recorded in alert.log because the error is being generated by a recursive SQL executed as part of the original SQL query?

Because when I tried this

alter session set nls_date_format='mm/dd/yyyy hh:mi:ss AM'

select * from issue a where a.VALID_TO = '01/00/2005 12:00:00 AM'
The above ORA-1847 error is not being recorded in alert.log; it is only returned to the user.

Is there any way we can avoid this error from being recorded in alert.log ?

Anto


Tom Kyte
October 06, 2005 - 12:07 pm UTC

I believe your application has set an event asking for this to be recorded (SQL in PL/SQL is recursive SQL too)

Anto, October 06, 2005 - 2:02 pm UTC

> show parameter event

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
event                                string      10499 trace name context forever, level 1

The only event which is set is event 10499

Event 10499
Event 10499 was introduced in release 9.2.0.3 to allow customers to revert to the earlier behavior of NUMBER datatypes that have no explicit setting of their scale value. Any non-zero setting of the event level will enable the old behavior.

Those errors are logged from users running reports in Business Objects against Oracle 9204 Database. Most of these reports use star transformation, even though that may not be relevant here.



Tom Kyte
October 06, 2005 - 2:51 pm UTC

but if it is only happening sometimes, as you said in your followup, I believe the application is setting this for their session (as if they had a problem in the past, wanted the error stack to be dumped, and now it is "there")


is that possible?

Anto, October 06, 2005 - 4:49 pm UTC

What I meant is that I am not able to reproduce that error either from sqlplus or BO. But we are getting this error in alert.log once in a while, while users are running reports using BO (Business Objects) against the Oracle database. There is no Oracle setting either at the BO level or the session level.

Since those errors are getting logged to alert.log, the dbservices Micromuse alert goes off, and they in turn contact us, even though the error is a USER error and can be safely ignored. So I was wondering why these types of errors are getting logged to alert.log and whether there is any way to prevent that from happening.

Maybe I will be better off creating a TAR with Oracle for the same.

Anto

Tom Kyte
October 06, 2005 - 6:31 pm UTC

yes you will be - because we do not log these to the alert log, I'm pretty sure you'll discover someone is setting that event to dump the error stack.

Alert Log

Yoav, November 02, 2005 - 5:39 am UTC

Hi Tom.
At the beginning of this page you wrote:
"this should be a package. the package would remember
where I left off bytewise so that I could call a refresh
that would load just NEW lines instead of the entire file
again.
That way, if I kept a session open for a while --
i could load it once and just refresh during the day."

My Question :
I would like to load the file only once per day, and I want
to see only the NEW lines that have been added to that
file. How can I make it "remember" the last line?
Thank You.


Tom Kyte
November 03, 2005 - 4:50 am UTC

if you are doing this across sessions - you would have to use something that can persist data across sessions....

Like a database table :)

How to find out the errored SQL

A reader, January 25, 2006 - 2:56 pm UTC

Hi Tom,

From the alert.log

ARC0: Evaluating archive log 5 thread 1 sequence 13334
ARC0: Beginning to archive log 5 thread 1 sequence 13334
Creating archive destination LOG_ARCHIVE_DEST_1: '/sbclocal/app/oracle/admin/SMLICP1/arch/SMLICP1_0000013334.arc'
Wed Jan 25 09:29:01 2006
ARC0: Completed archiving log 5 thread 1 sequence 13334
Wed Jan 25 14:18:54 2006
ORA-1652: unable to extend temp segment by 640 in tablespace TEMP1

Obviously some bad SQL - maybe using a merge join cartesian - must have consumed the whole TEMP1 tablespace.

The TEMP1 tablespace is quite big for us - around 30 GB - since ours is a data warehouse.

Is there any way we can find out this offending SQL which is using a bad plan - from V$SQL or any other dictionary views like that ?

thanks
Anto


Tom Kyte
January 25, 2006 - 5:06 pm UTC

well, i very much disagree with "obviously", I can come up with lots of other reasons - none of which are "big query ate it all"


but for the future:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:374218170986#6821401045030

remember though, the query that gets HIT with the 1652 might be a teeny tiny query - it could well be other "monster" queries eating it all (and they didn't get hit)

A reader, January 25, 2006 - 3:39 pm UTC

I was able to get the SQL and other user details of the SQL which gave this TEMP1 error from V$SQL and V$SQLTEXT, based on the timings, the number of rows (0 in this case, since it errored out), and other big numbers in the statistics.

thanks
Anto

A reader, January 26, 2006 - 10:20 am UTC

Hi Tom,

I know the word 'obviously' was not the right word there.

But it was almost certain in our case/project - from past similar experiences - that whenever we had an issue with such a big TEMP tablespace running out of space, it was due to a merge join cartesian caused by a missing join supplied by the business user.

thanks
Anto

A reader, January 26, 2006 - 10:30 am UTC

Thanks for the link. That was very useful.

But V$SQL provides a wealth of information (without any extra overhead) from which one can narrow down the offending SQL - in particular, last_load_time, sql_text, rows_processed, disk_reads, elapsed_time, sorts, parsing_user_id etc. give enough clues to narrow down the SQL (this may not always hold for a new user of the database/project). Even in this case, it was indeed a merge cartesian join due to a missing join.

thanks
Anto

Tom Kyte
January 26, 2006 - 10:48 am UTC

depends - if you execute queries once - sure, maybe. But in general, no, it would not. And if you execute queries only once - I would not expect the query to be there tomorrow when you go to do the analysis.

Since the error trigger is triggered only during, well, an error - I do not see any discussion of "overhead" being very relevant really - it is not impacting you day to day during normal operations.


Yes- your approach was pretty cool given you had nothing else in place.

A reader, January 30, 2006 - 12:43 pm UTC

Hi Tom,

In our environment there are a lot of big queries - SELECT statements doing huge sorts using GROUP BY, which use temp_space (from v$sql_plan) on the order of hundreds of megabytes and sometimes even a few gigabytes. Those have not given any error so far.

As you pointed out before, almost every time it is the smaller SELECT SQLs using a CARTESIAN join that fail, even though from V$SQL_PLAN it looks as if these consumed only around 50 or 60 MB of TEMP_SPACE.

Is there any reason for this behaviour - why do big GROUP BYs work fine even after consuming big temp space, whereas smaller cartesian joins fail after consuming only 50 or 60 MB of temp_space?

thanks
Anto

Tom Kyte
January 30, 2006 - 3:31 pm UTC

v$sql_plan shows the ESTIMATED temp space that will be used.

A guess.
An estimate.

It does not show the actual space - it could be that the estimate was *way* off.

It could be that your small sorts are just really unlucky as well (less likely).

Probably that the cartesian join turned out to be much larger than expected - perhaps many many times larger.

A reader, January 30, 2006 - 2:50 pm UTC

Database version is 9204

Anto

A reader, January 31, 2006 - 10:22 am UTC

Thanks for that info

I should have checked Oracle Metalink first. I have got into the habit of asking on asktom.oracle.com before checking Oracle Metalink.


SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks, a.sid, a.serial#,
a.username
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr
ORDER BY b.tablespace, b.segfile#, b.segblk#, b.blocks;

gives TEMP space usage by various sessions while sort is going on

I guess there is no way to know the TEMP space usage once the SQLs are completed.

Anto


Tom Kyte
January 31, 2006 - 3:24 pm UTC

once the temp space is released (the cursor is closed), nope

Ora-01847 in alert.log

A reader, June 16, 2006 - 11:42 am UTC

Hi Tom,

As posted before, regarding the ORA-01847 errors being logged into alert.log - which in normal circumstances should not happen unless we have some event set.

We are actually getting an ORA-604 always preceding this ORA-1847, and it may have been caused by the ORA-1847:

ORA-00604: error occurred at recursive SQL level 1
ORA-01847: day of month must be between 1 and last day of month

Is it due to this "ORA-00604" error that these errors are getting logged to alert.log?

Do all ORA-006XX errors get logged to alert.log?

thanks
Anto

Tom Kyte
June 16, 2006 - 7:16 pm UTC

let's see the entire thing in context here.

can you

a) post a bit of the alert with the entire sequence of events

b) describe to us where this is coming from.


eg: is this a JOB maybe, which logs all uncaught exceptions to the alert.

A reader, June 19, 2006 - 3:19 pm UTC

Hi,

We get those errors logged into alert.log when Business Objects WebI users running reports against the Oracle database enter a date in the wrong format (for eg: >12 for the month).

Those errors are not logged into alert.log when the same date is entered wrongly from SQL*Plus - so maybe BO WebI is setting something for each WebI connection which forces those errors to be logged to alert.log.

Those 2 lines are the only 2 lines in alert.log wrt this error. The first few lines of the trace file :

Dump file /sbclocal/app/oracle/admin/SMCMISP1/udump/smcmisp1_ora_24195.trc
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
ORACLE_HOME = /sbclocal/app/oracle/product/9.2.0.4.0.1
System name: SunOS
Node name: sstm2155por
Release: 5.8
Version: Generic_117350-12
Machine: sun4us
Instance name: SMCMISP1
Redo thread mounted by this instance: 1
Oracle process number: 98
Unix process pid: 24195, image: oracle@sstm2155por (TNS V1-V3)

*** SESSION ID:(153.6174) 2006-06-13 05:42:42.219
ORA-00604: error occurred at recursive SQL level 1
ORA-01847: day of month must be between 1 and last day of month
Dump of memory from 0x0000000528D753B4 to 0x0000000528D7579E
528D753B0 494E5345 5254202F 2A2B2041 [INSERT /*+ A]
528D753C0 5050454E 44204259 50415353 5F524543 [PPEND BYPASS_REC]
528D753D0 55525349 56455F43 4845434B 202A2F20 [URSIVE_CHECK */ ]
528D753E0 494E544F 20225359 53222E22 5359535F [INTO "SYS"."SYS_]
528D753F0 54454D50 5F304644 39443639 42365F31 [TEMP_0FD9D69B6_1]
528D75400 31464332 42333222 2053454C 45435420 [1FC2B32" SELECT ]
528D75410 2F2A2B20 53454D49 4A4F494E 5F445249 [/*+ SEMIJOIN_DRI]
528D75420 56455220 2A2F2022 5441475F 4C564C5F [VER */ "TAG_LVL_]
528D75430 42555345 4E54222E 22425553 454E545F [BUSENT"."BUSENT_]
528D75440 4C455645 4C5F4944 22202243 30222C22 [LEVEL_ID" "C0","]
528D75450 5441475F 4C564C5F 42555345 4E54222E [TAG_LVL_BUSENT".]
...
...

thx
Anto

Tom Kyte
June 19, 2006 - 6:49 pm UTC

it is generating a trace, things that generate traces get logged to alert.

Now, you can use the trace with support to determine why the trace is generated.

A reader, June 20, 2006 - 9:50 am UTC

Thanks

Anto

A reader, June 20, 2006 - 10:04 am UTC

Hi,

Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 48 5103
RECURSIVE EXECUTION SYS_LE_3_0
RECURSIVE EXECUTION SYS_LE_3_1
TEMP TABLE TRANSFORMATION <<<<<<<<<<
SORT GROUP BY 48 15 K 5103
FILTER
NESTED LOOPS OUTER 48 15 K 5101 :Q1066015007 P->S QC (RANDOM)
NESTED LOOPS 48 14 K 5100 :Q1066015007 PCWP
NESTED LOOPS 48 12 K 3900 :Q1066015007 PCWP
HASH JOIN 48 11 K 2700 :Q1066015007 PCWP
HASH JOIN 48 9 K

Before sending the trace to support, a question

The above is the initial few lines of the explain plan when dates are given in the right format, for the problem query.

Is it the fact that this error is generated during the TEMP table transformation for this query that forces it to get logged to alert.log?

I guess that is the case since, after giving

alter session set star_transformation_enabled=false

and giving dates in the wrong format for the SQL query - this error is not getting logged to alert.log

btw - this error was getting logged not only from WebI reports (as incorrectly mentioned by me before) but even from TOAD sessions - so it is not an issue with BO WebI; I guess it happens because the error is generated during the TEMP table transformation phase.

thx
Anto

Tom Kyte
June 20, 2006 - 10:14 am UTC

not before, right now - use support, you have a trace file. go for it.

this is not "support" here - this is "how do i accomplish this ...."

A reader, June 20, 2006 - 11:36 am UTC

Thanks Tom for your help

The responses from Oracle Support to the TARs I have opened have not been that encouraging so far - not for my current client at least. I have stopped lodging TARs with Oracle support for the past 2 years or so - normally it is a waste of time.

Maybe it depends on the client and the type of support




A reader, June 20, 2006 - 12:24 pm UTC

I have created a TAR - will let you know if we manage to find the cause and workaround from Oracle support

thanks
Anto

question about the code

A reader, July 12, 2006 - 6:55 pm UTC

Hi

this line in your code

l_current := dbms_lob.instr( l_bfile, '0A', l_last, 1 );

what is '0A' ???

cheers

Tom Kyte
July 12, 2006 - 7:08 pm UTC

'0A' is hex for \n (newline) - like chr(10)
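You can confirm this from SQL*Plus (a minimal sketch using the standard UTL_RAW and RAWTOHEX functions):

```sql
-- chr(10) is the newline character; its raw representation is hex 0A,
-- which is why dbms_lob.instr( l_bfile, '0A', ... ) finds line breaks
select rawtohex( utl_raw.cast_to_raw( chr(10) ) ) as newline_hex
  from dual;
-- NEWLINE_HEX
-- -----------
-- 0A
```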

Thread 1 advanced to log sequence

Shyam, August 11, 2006 - 5:02 pm UTC

Hi,

This one seems to be a basic question, but I am expecting a detailed explanation of what a thread is and what role it plays with reference to the database.

Tom Kyte
August 11, 2006 - 6:15 pm UTC

you have a redo "thread"

like a "thread of thought"
like a "related group of things, a thread"
like a "newsgroup forum thread, a series of connected things"


a redo thread is a collected series of redo logs, one following the other. A thread.

View on alertlog

Marijn Menheere, September 05, 2006 - 5:24 am UTC

The following creates an external table and a view which selects lines concerning ORA- errors. Works for me.

CREATE OR REPLACE DIRECTORY BDUMP AS 'D:\oraadmin\<SID>\bdump';

CREATE TABLE ALERTLOG
(
MSG_LINE VARCHAR2(1000 BYTE)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY BDUMP
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
nobadfile nologfile nodiscardfile
skip 0
READSIZE 1048576
FIELDS LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
MSG_LINE (1:1000) CHAR(1000)
)
)
LOCATION (BDUMP:'alert_<SID>.log')
)
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;

CREATE OR REPLACE FORCE VIEW ALERT_LOG_ERRORS
(LINENO, MSG_LINE, THEDATE, ORA_ERROR)
AS
select "LINENO","MSG_LINE","THEDATE","ORA_ERROR"
from (
select *
from (
select lineno,
msg_line,
thedate,
max( case when ora_error like 'ORA-%'
then rtrim(substr(ora_error,1,instr(ora_error,' ')-1),':')
else null
end ) over (partition by thedate) ora_error
from (
select lineno,
msg_line,
max(thedate) over (order by lineno) thedate,
lead(msg_line) over (order by lineno) ora_error
from (
select rownum lineno,
substr( msg_line, 1, 132 ) msg_line,
case when msg_line like '___ ___ __ __:__:__ ____'
then to_date( msg_line, 'Dy Mon DD hh24:mi:ss yyyy' )
else null
end thedate
from alertlog
)
)
)
)
where ora_error is not null
order by thedate;

Dean Chan, February 22, 2007 - 2:55 am UTC

Dear Tom,

Are there any dbms packages that can send messages to alert log file?

regards,
Dean
Tom Kyte
February 22, 2007 - 8:49 am UTC

dbms_system has an undocumented procedure ksdwrt
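A sketch only - ksdwrt is undocumented and unsupported, and by common convention the first argument selects the destination (1 = trace file, 2 = alert log, 3 = both):

```sql
begin
  -- write a custom line to the alert log; undocumented, use with care
  sys.dbms_system.ksdwrt( 2, 'MY-20001: test message written via ksdwrt' );
end;
/
```

You need execute privilege on DBMS_SYSTEM (normally owned by SYS) for this to work.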

Alex, August 14, 2007 - 1:47 pm UTC

Hi Tom,

Recently there was a memory upgrade to one of our production boxes. Our shutdown and startup scripts didn't work, so when the machine came back up the database didn't. I logged in and started it and everything is fine.

However, ever since then, we have been seeing this in the alert log:

ARC1: Thread not mounted

Do you have any idea what this means? We don't know if this is a problem or not. Thanks.
Tom Kyte
August 20, 2007 - 12:37 pm UTC

that is fine, it is not a problem, it is normal.

how to analyze a alertlog file

Srikanth, November 07, 2007 - 3:01 am UTC

Hi Tom,
How do we analyze the alert log files? Generally, what type of errors do we need to concentrate on?

Thanks and Regards
Srikanth
Tom Kyte
November 07, 2007 - 5:52 pm UTC

all of them - if you have an error in your alert log, that is a "problem".

Not sure what else to say - alert logs should be pretty "clean"

backing up alertlog file and other trace files and other log files

abz, November 12, 2007 - 6:01 am UTC

We are planning a strategy for backing up alert log files,
trace files, and other log files such as listener.log, sqlnet.log etc.

Can you point to any links where help can be found to develop strategy.

After reading the below link:-

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1352202934074#16054145222006

I have following questions.

1- What if Oracle is in the middle of writing to the alert log at the same time as we "mv" it?

2- How can we back up trace files cleanly, since we don't know when Oracle closes them? The same question applies to the listener.log and sqlnet.log files.

We need these backups for record purposes, and in case any S.R. on Metalink requires an old trace file, we would be able to provide it. Also, we need to avoid disk-full messages on the disks where these files are stored.

Tom Kyte
November 16, 2007 - 12:44 pm UTC

there are no strategies that I'm aware of. You just do what you want to do, what you need to do with these files.

1) you might lose a record or two in it.

2) you cannot, trace files associated with processes might well never be closed. the alert log is opened and closed for each message.


The structure of the alert log?

Kim Anthonisen, January 11, 2008 - 9:29 am UTC

Hi Tom

I would like to monitor the alert log, and would like to have detailed documentation of its structure.
Do you know of any?

Br
Kim
Tom Kyte
January 13, 2008 - 10:51 pm UTC

Not until 11g when it is available in XML... Prior to that, it is by observation.

Usefulness of ORA- monitoring

Anirban Das, January 23, 2008 - 4:11 pm UTC

What we have learnt the hard way is that monitoring ORA- errors does not guarantee database integrity. Corrupt blocks do not result in any ORA- errors but are critical, so now we also check for the keyword "corrupt". There could be other error messages we are still not looking for, because Oracle has poor error-reporting standards. Unless Oracle standardizes, we will always be at risk.

Alert log timestamps

Jeeva Konar, April 04, 2008 - 9:26 am UTC

I am facing a rather peculiar problem here with timestamps between the instance and the alert log. The alert log timestamps are 5 hours behind. The OS timestamp and the instance timestamp are the same, but the alert log shows a different timestamp.
Can you help me in identifying the reason behind it or how to resolve it, please ?


SQL> ! tail -5 /dborafiles/PRAM/PRAMU02/admin/bdump/alert_PRAMU02.log
Use ALTER SYSTEM SET db_recovery_file_dest_size command
to add space. DO NOT manually remove flashback log files
to create space.
*************************************************************
SUCCESS: diskgroup PRAMU_A06615_BACKUP_ASM was mounted
SQL>
SQL>
SQL>
SQL> alter system switch logfile;

System altered.
SQL>
SQL>
SQL>
SQL>  ! tail -5 /dborafiles/PRAM/PRAMU02/admin/bdump/alert_PRAMU02.log
SUCCESS: diskgroup PRAMU_A06615_BACKUP_ASM was mounted
Fri Apr  4 09:06:32 2008
Thread 2 advanced to log sequence 6714
  Current log# 16 seq# 6714 mem# 0: +PRAMU_A06615_DATA1_ASM/PRAMU/onlinelog/redo16a.rdo
  Current log# 16 seq# 6714 mem# 1: +PRAMU_A06615_DATA1_ASM/PRAMU/onlinelog/redo16b.rdo
SQL>
SQL>
SQL>
SQL> select SYSTIMESTAMP from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
04-APR-08 15.06.48.250600 +01:00
SQL>
SQL>
SQL>
SQL> ! date
Fri Apr  4 15:06:52 BST 2008
SQL>
SQL>
SQL>

Tom Kyte
April 04, 2008 - 12:07 pm UTC

well, the 'instance' time comes from your dedicated server process. That in turn gets its timezone from the environment that spawned it.

it would be EASY to issue:

select sysdate from dual;

in three sessions connected to the same server at the same time and get three different answers.


say you connect:

a) sqlplus u/p (directly, sqlplus spawns your dedicated server)
b) sqlplus u/p@dedicated_server_connection (listener spawns your dedicated server)
c) sqlplus u/p@shared_server_connect (you are connected to a server spawned dispatcher that will queue messages to a server spawned shared server)


A, B and C could each have different timezones.

So, what I suspect is you are connecting via method A or B (and your timezone, or the listener's timezone, is set 'correctly')

but the server (the instance) has a different TZ set.

check the TZ setting of the environment that started the instance.
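To compare the zones your session and the database see, a quick sketch (note: the alert log stamp itself comes from the OS TZ of the environment that started the instance, which these functions do not show directly):

```sql
-- database TZ, session TZ, and the server clock as this session sees it
select dbtimezone, sessiontimezone, sysdate, systimestamp
  from dual;
```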

Monitoring listener.log

Stuart, September 02, 2009 - 5:26 pm UTC

Hi Tom,

When I look at the listener.log for an application such as SQL Developer, (another one we have is Aqua Data Studio), the Program= is null, and the first Host=__jdbc__.

i.e.
03-SEP-2009 07:44:50 * (CONNECT_DATA=(SID=TEST)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=sure999))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.99)(PORT=2454)) * establish * TEST * 0

I like Toad better, as it populates these:
03-SEP-2009 08:47:08 * (CONNECT_DATA=(SID=TEST)(CID=(PROGRAM=C:\Toad\TOAD.exe)(HOST=CDL73)(USER=sure999))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.99)(PORT=2777)) * establish * TEST * 0

While both of these applications populate the v$session variables such as machine, terminal, client_info, it would be nice to have the listener.log properly populated as well, as the listener.log contains some valuable auditing information. Short of a new feature request to the vendor, is there a way of populating the listener.log fields on connection to the database?
Tom Kyte
September 03, 2009 - 7:28 am UTC

... Short of a new feature request to the vendor, is there a way of
populating the listener.log fields on connection to the database? ...

the client populates it and the client can say anything they want to - the information in there is strictly "whatever the client feels like saying to us"

Monitoring listener.log

Stuart, September 03, 2009 - 4:26 pm UTC

Is there a command to populate the listener.log entries?

For V$SESSION it is dbms_application_info.
Is it some library call within the compiled client binary, or is there a way of manually setting it via a command?
Tom Kyte
September 04, 2009 - 3:51 pm UTC

I don't know what you mean by "populate the listener.log entries" - we do that?

Monitoring listener.log

Stuart, September 06, 2009 - 10:45 pm UTC

I'm referring to the PROGRAM and the (first) HOST entry in the listener.log.

For example, you can see from the below that I'm connecting to the database via sqlplus.exe, sqlplusw.exe, and TOAD (all from my PC). It also shows my PC name (CDL73), and my IP address (the second HOST=), which is really useful for auditing.


SQLPlus from Windows C:\ prompt
-------------------------------
07-SEP-2009 11:30:41 * (CONNECT_DATA=(SID=TXT2STOP)(CID=(PROGRAM=C:\Oracle\product\10.1.0\bin\sqlplus.exe)(HOST=CDL73)(USER=sure006))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.66)(PORT=2059)) * establish * TXT2STOP * 0


SQLPlus for Windows (sqlplusw)
-------------------
07-SEP-2009 11:33:47 * (CONNECT_DATA=(SID=TXT2STOP)(CID=(PROGRAM=C:\Oracle\product\10.1.0\bin\sqlplusw.exe)(HOST=CDL73)(USER=sure006))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.66)(PORT=2068)) * establish * TXT2STOP * 0


TOAD
----
07-SEP-2009 11:36:06 * (CONNECT_DATA=(SID=TXT2STOP)(CID=(PROGRAM=C:\Toad\TOAD.exe)(HOST=CDL73)(USER=sure006))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.66)(PORT=2070)) * establish * TXT2STOP * 0


Even logged in on the server locally and invoking SQLPlus populates this with program name. The HOST= value is set to our server name, which is also helpful.


Locally from Unix prompt
------------------------
07-SEP-2009 11:35:04 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TXT2STOP.ctru.auckland.ac.nz)(CID=(PROGRAM=sqlplus)(HOST=txt2stop2.ctru.auckland.ac.nz)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.109)(PORT=29477)) * establish * TXT2STOP.ctru.auckland.ac.nz * 0


However, when I log in to an application that uses the JDBC thin client to connect to the database, the PROGRAM is blank (PROGRAM=), and the (first) HOST=__jdbc__. This isn't good... I want to know the program and the computer/server name as above.


For example:


SQL Developer (from my PC)
-------------
07-SEP-2009 11:36:52 * (CONNECT_DATA=(SID=TXT2STOP)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=sure006))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.66)(PORT=2072)) * establish * TXT2STOP * 0


Aqua Data Studio (from my PC)
----------------
07-SEP-2009 11:37:43 * (CONNECT_DATA=(SID=txt2stop)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.66)(PORT=2073)) * establish * txt2stop * 0


JBoss (running locally on the server)
-----
07-SEP-2009 11:59:31 * (CONNECT_DATA=(SID=TXT2STOP)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=130.216.120.109)(PORT=58705)) * establish * TXT2STOP * 0


Ideally, I'd like to see:
PROGRAM=SQLDeveloper HOST=CDL73 (my PC name)
PROGRAM=AquaDataStudio HOST=CDL73
PROGRAM=JBoss HOST=txt2stop2.ctru.auckland.ac.nz


You can also see from above that Aqua Data Studio and Jboss didn't even populate the USER= field of the listener.log entry, which is most annoying.

Which leads me back to my question,... is it possible to set the PROGRAM=, HOST= fields so they display correctly in the listener.log when a connection is made?

My SQL query against the listener.log works wonderfully well. I just wish the entries had all the information I wanted!

Maybe there is a better way of getting this information (i.e. user, ip address, program, time of connection, etc). The listener.log has a plethora of information that is often overlooked. It seems a very efficient place to start looking for it.

Hopefully this explains better what I'm trying to achieve.
Tom Kyte
September 07, 2009 - 1:58 am UTC

you cannot influence the contents of that log file to my knowledge, it is written by the listener, and you don't really "talk" to the listener.


... Maybe there is a better way of getting this information (i.e. user, ip address,
program, time of connection, etc).

..

yes, database auditing gets that.

most of the time, the log wouldn't even be on, especially if you connect frequently, writing to a file is pretty expensive.
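A minimal sketch of that auditing approach (assumes the AUDIT_TRAIL initialization parameter is set to DB; DBA_AUDIT_SESSION is a standard data-dictionary view):

```sql
-- start recording connect/disconnect events
audit session;

-- later: who connected, from which OS user/host/terminal, and when
select os_username, username, userhost, terminal,
       timestamp as logon_time, logoff_time
  from dba_audit_session
 order by timestamp;
```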

Monitoring listener.log

Stuart, September 08, 2009 - 4:27 pm UTC


Alert Log

A reader, June 21, 2010 - 11:48 am UTC

Hi Tom,

This is related to an alert log issue that I faced a few hours ago.

1) Today I realised that Oracle had stopped writing to the alert log. The last date in the alert log was June 20, 10:00 PM.

2) We have an alert log renaming mechanism. This process renames the alert log as per date and creates a new one.

3) There was no new alert log for today's date.

4) We checked the database. Everything was fine. BDUMP directory is properly set.

5) Today at 9.00 AM we found a new alert log with today's date and with content starting at June 21 00:30 AM.

6) I did a switch logfile and checked that the data has been updated in the alert log.

My Questions are

1) What led Oracle to stop writing to the alert log from June 20 10:00 PM to June 21 00:30 AM?

2) What made Oracle create a new alert log at June 21 9:00 AM with content starting from June 21 00:30 AM? This should have happened at June 21 00:00 AM, but it didn't.

Why did Oracle behave like this?

The DB is 9.2.0.4.0.

Kindly advise.

Error in Alert log

Fernando, December 20, 2013 - 4:40 pm UTC

Tom, I would like to know if all errors in the alert.log file start with the prefix 'ORA-'. Would even errors associated with ASM, PLS and others bring an ORA- message as well? I guess my question is: can I search the alert log for 'ORA-', and would that be all I need to capture issues within the DB (10g/11g)?
Thanks


Walter, December 16, 2014 - 11:09 pm UTC

Tom,

I am using your above procedure load_alert to accomplish something similar; however, it's failing at
... insert into alert_log
because of utl_raw.cast_to_varchar2, as I see the lines in my file are much longer than 4000 bytes.

SQL> exec load_myfile;
BEGIN load_myfile; END;
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: raw variable length too long
ORA-06512: at line 1
ORA-06512: at "FUSER.LOAD_MYFILE", line 26
ORA-06512: at line 1

In my case I don't need the characters beyond 4k, can you please recommend a viable option.

Thanks.

Tom Kyte
December 17, 2014 - 7:01 pm UTC

you have the code... just change it?


l_current := dbms_lob.instr( l_bfile, '0A', l_last, 1 );
        exit when (nvl(l_current,0) = 0);

        insert into alert_log
        ( line, text )
        values
        ( l_line, 
          utl_raw.cast_to_varchar2( 
              dbms_lob.substr( l_bfile, l_current-l_last+1, 
                                                    l_last ) )
        );



the substr should only substr 4000 characters, so play with the second parameter there to make sure it is the least of what I'm currently passing and 4000....


Walter, December 18, 2014 - 12:11 am UTC

So I verified as per your feedback: each line returns about ~100 bytes or less, with the exception of line 12 of the file, which returns about 12,356 characters (l_current-l_last+1) from the utl_raw.cast_to_varchar2 function output.

I tried different options, but I wasn't able to figure out how to pass fewer than 4k through the substr just for that line. For now I am using a workaround, but it stinks and I don't even want to mention it :-)

As always your solutions greatly appreciated.


Tom Kyte
December 18, 2014 - 8:09 pm UTC

all you needed to do was add a variable l_length or some other name to the code and:


if ( l_current-l_last+1 > 4000 )
then
   l_length := 4000;
else
   l_length := l_current-l_last+1;
end if;

.....
 utl_raw.cast_to_varchar2( 
              dbms_lob.substr( l_bfile, l_length, l_last ) )
........




that is the only change you needed to make. not sure what you meant by:

...
I wasn't able to figure out just for that
line how can i pass fewer than 4k through the substr.
.......




you just want to dbms_lob.substr at most 4000 - it would be very simple to do that
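The same guard can also be collapsed into a single LEAST call, equivalent to the if/else above:

```sql
l_length := least( l_current - l_last + 1, 4000 );
```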

Nowadays there is a view: V$DIAG_ALERT_EXT

Hoek, December 18, 2014 - 10:42 am UTC

...if you're on db version 11.2 (and onwards).
Not sure why it's undocumented.



Tom Kyte
December 18, 2014 - 8:22 pm UTC

that is from the log.xml files for all of the instances and listeners on that machine. a lot of stuff in there.
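For example, a sketch against the 11.2+ view (ORIGINATING_TIMESTAMP and MESSAGE_TEXT are columns of V$DIAG_ALERT_EXT):

```sql
-- ORA- errors from the XML alert log over the last day (11.2 and later)
select originating_timestamp, message_text
  from v$diag_alert_ext
 where message_text like '%ORA-%'
   and originating_timestamp > systimestamp - interval '1' day
 order by originating_timestamp;
```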

A reader, December 19, 2014 - 12:13 am UTC

Ohh, I was making it too complicated, thanks for clearing it up.

Walter.

In 11g, there is a view.....

Dan Grotjan, December 22, 2014 - 4:39 pm UTC

In 11g, I use sys.alert_log_view in a script which sends me email containing the errors encountered since the last time the script ran. The key columns I use are "originating_timestamp" and "message_text". No need to access the filesystem containing the actual alert log.

I filter out certain types of messages which I've deemed inconsequential. Works quite well.

Oracle 11g: Reading alert log via SQL

Charlie 木匠, January 12, 2015 - 8:12 pm UTC

select message_text from sys.X$DBGALERTEXT
where rownum < 5;

Should mention existing tables/views

Bernd Eckenfels, July 12, 2017 - 9:27 pm UTC

Would be good if a new answer could address V$DIAG_ALERT_EXT or X$DBGALERTEXT.
