
Question and Answer

Tom Kyte

Thanks for the question, Arindom.

Asked: July 14, 2000 - 3:37 pm UTC

Last updated: February 18, 2010 - 7:25 pm UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

Hi,
We are trying to use LogMiner as an audit trail; to list out
the SQL issued against a table, etc.
Apparently, the logfile name has to be entered like:

exec dbms_logmnr.add_logfile(
LogFileName => '/opt/ora/oraback/arch/vpndw2404.arc',
Options => dbms_logmnr.NEW);


Is there a way to automate this? Would it be possible to
execute a procedure similar to above automatically on a log
switch?


Thanks.

and Tom said...

Yes there is but it wouldn't do anything for you anyway.

Adding a logfile adds the logfile to a v$ (in memory) table in your session only. Once that session exits -- it's gone. It does not permanently put the data into a "real" table anywhere. Any session that wants to inspect the contents of a logfile must run this command. If that session logs out -- you must rerun that command in the new session to reload that logfile.

So, even though we could do this with dbms_job -- it wouldn't do anything for us. You would not want to put it into a real table inside the job either -- that would create a really vicious "feedback" loop. The job that loads logs generates log, which will then be loaded -- generating more log. Pretty soon the only thing the database is doing is loading log files that mostly record the writes to the table holding the logfile contents....
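For reference, a minimal sketch of one complete LogMiner session, end to end. The file names and MY_TABLE are placeholders, and the flat-file dictionary step assumes UTL_FILE_DIR includes /tmp:

-- build a flat-file dictionary once (placeholder location)
exec dbms_logmnr_d.build( dictionary_filename => 'dict.ora', -
                          dictionary_location => '/tmp' );

-- the next three steps must all run in the SAME session
exec dbms_logmnr.add_logfile( -
       LogFileName => '/opt/ora/oraback/arch/vpndw2404.arc', -
       Options     => dbms_logmnr.NEW );

exec dbms_logmnr.start_logmnr( DictFileName => '/tmp/dict.ora' );

-- MY_TABLE is a placeholder table name
select scn, username, sql_redo
  from v$logmnr_contents
 where seg_name = 'MY_TABLE';

exec dbms_logmnr.end_logmnr;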


Comments

DDL support in LogMiner

Andre Whittick Nasser, May 14, 2002 - 9:26 am UTC

On the DDL support in 9i's LogMiner:

DDL_DICT_TRACKING

If the dictionary in use is a flat file or in the redo log files, LogMiner ensures that its internal dictionary is updated if a DDL event occurs. This ensures that correct SQL_REDO and SQL_UNDO information is maintained for objects that are modified after the LogMiner dictionary is built.

This option cannot be used in conjunction with the DICT_FROM_ONLINE_CATALOG option.

NO_DICT_RESET_ONSELECT
This option is only valid if the DDL_DICT_TRACKING option is also specified. It prevents LogMiner from reloading its internal dictionary at the beginning of each select operation on V$LOGMNR_CONTENTS. This can be an advantage because it can be time consuming to refresh the dictionary if a DDL operation has updated the dictionary.

If you use this option, be aware that because the dictionary has not been refreshed for subsequent select operations, you may get incompletely reconstructed SQL_REDO and SQL_UNDO information for objects that are modified in the redo log files. Such incomplete reconstructions produce SQL that cannot be executed.
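For reference, a rough sketch of how those two options would be combined when starting LogMiner in 9i, assuming the dictionary was extracted into the redo log files:

exec dbms_logmnr.start_logmnr( -
       options => dbms_logmnr.dict_from_redo_logs + dbms_logmnr.ddl_dict_tracking );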

Q:

If I got it right, DDL_DICT_TRACKING "refreshes" either the offline dictionary or the online dictionary in the redo logs when new DDL occurs. What I suppose NO_DICT_RESET_ONSELECT is saying is that, with the option not set, this refresh occurs at each select; and when I set this option, the refresh doesn't happen at every new select against the dictionary.

After all, when is the dictionary refreshed: at every new DDL or at every new SELECT?


More on LogMiner in 9i ...

Andre Whittick Nasser, May 14, 2002 - 10:01 am UTC

If I want to have PK columns appearing in my logminer contents, instead of only rowid, I can use "supplemental" log groups with, for example:

ALTER TABLE ... ADD SUPPLEMENTAL LOG GROUP ... ( ..., ...);

or

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

I find this terminology strange. It's not in fact a "supplemental" log group in physical terms (like a redo log file group), but rather additional log data mapping the additional columns.

Am I right ?
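As a side note, a quick way to check which supplemental logging is currently enabled (the exact v$database columns vary a little by release):

select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
  from v$database;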

logminer on noarchivelog database

reader, December 27, 2003 - 8:11 pm UTC

Can LogMiner be used on a NOARCHIVELOG database to analyze the current redo log files? Thanks.

Tom Kyte
December 28, 2003 - 10:41 am UTC

the current -- not really. the unused ones -- yes.

the current one is one you don't really want to touch. it is being written to.
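A rough sketch of finding a non-current online log to add; the path in the add_logfile call is just a placeholder:

select l.group#, l.status, f.member
  from v$log l, v$logfile f
 where l.group# = f.group#
 order by l.group#;

-- pick a member from a group that is NOT the CURRENT one, then
exec dbms_logmnr.add_logfile( -
       LogFileName => '/path/to/redo02.log', -
       Options     => dbms_logmnr.NEW );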

HOW CAN I USE LOGMINER

Nikunj, September 30, 2005 - 7:43 am UTC

Dear Tom,

I have Oracle 9i EE on Windows in NOARCHIVELOG mode.

Below is my logfile list.
sys@NIKUNJ> select * from v$logfile;

GROUP# STATUS  TYPE    MEMBER
------ ------- ------- ------------------------------------
     3 STALE   ONLINE  C:\ORACLE\ORADATA\NIKUNJ\REDO03.LOG
     2 STALE   ONLINE  C:\ORACLE\ORADATA\NIKUNJ\REDO02.LOG
     1         ONLINE  C:\ORACLE\ORADATA\NIKUNJ\REDO01.LOG

sys@NIKUNJ>

I tried to do as below:

sys@NIKUNJ> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\NIKUNJ\REDO01.LOG',OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

sys@NIKUNJ>
sys@NIKUNJ> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

sys@NIKUNJ>
sys@NIKUNJ> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('30-Sep-2005 10:30:00', 'DD-MON-YYYY HH:MI:SS'),-
> ENDTIME => TO_DATE('30-Sep-2005 11:30:00', 'DD-MON-YYYY HH:MI:SS'));
BEGIN DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('30-Sep-2005 10:30:00', 'DD-MON-YYYY HH:MI:SS'),
ENDTIME => TO_DATE('30-Sep-2005 11:30:00', 'DD-MON-YYYY HH:MI:SS')); END;

*
ERROR at line 1:
ORA-01291: missing logfile
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1

Why is it saying missing logfile?


Tom Kyte
September 30, 2005 - 10:03 am UTC

trying to mine the "online" redo log is a bit sketchy there.... It is after all full of some stuff, and "other stuff".


[tkyte@localhost tkyte]$ oerr ora 1291
01291, 00000, "missing logfile"
// *Cause: Not all logfiles corresponding to the time or scn range specified
// have been added to the list.
// *Action: Check the v$logmnr_logs view to determine the missing scn
// range, and add the relevant logfiles.


You are giving it a time range that it says "you haven't fed me enough redo to cover that yet"

perhaps just use

sys.dbms_logmnr.start_logmnr;

to see what you have.
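For what it's worth, a quick way to see what has actually been loaded before picking a time range (this sketch assumes the online catalog is the dictionary, as above):

-- what redo has been added to this session so far?
select filename, low_time, high_time
  from v$logmnr_logs;

-- start with no time range at all, then see what is there
exec dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );

select count(*) from v$logmnr_contents;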

Oracle 8i LOGMINER : Primary Key in sql_redo/sql_undo

JLU, May 23, 2006 - 12:07 pm UTC

Hi Tom,

Is there a possibility to have PK columns appearing in my logminer contents, instead of only rowid ?

I try to execute the (9i) Following Command :

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

of course, it failed with ORA-01900 error ...

I didn't find anything on this subject in Oracle Documentation.

In fact, i want to be sure it's impossible to do that with Oracle 8i.

Thank in advance


Tom Kyte
May 23, 2006 - 3:51 pm UTC

that was a new feature of software written this century.

What about long redo info?

Mike Friedman, May 24, 2006 - 7:45 am UTC

Tom,

The Undo and Redo info from Logminer get returned in VARCHAR2 columns.

Obviously, they can be longer than 4K. In those cases Logminer returns multiple rows. If you paste them together you get the undo and redo DML.

Question: How can you know what order to paste them in?

We're doing complex joins with the Logminer view so we can't assume we can just query from it and get back results in the right order.

Tom Kyte
May 24, 2006 - 8:06 am UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#i1016535

...
For the reasons stated in the previous note, Oracle recommends that you create a table to temporarily hold the results from a query of V$LOGMNR_CONTENTS if you need to maintain the data for further analysis, particularly if the amount of data returned by a query is small in comparison to the amount of redo data that LogMiner must analyze to provide that data.
.....

you might consider using a global temporary table and populating your own "ordering" column
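A minimal sketch of that temporary-table approach. logmnr_stage is a made-up name, rownum is captured as the ordering column, and it assumes start_logmnr has already been called in this session:

create global temporary table logmnr_stage
on commit preserve rows
as
select rownum as piece_order, a.*
  from v$logmnr_contents a
 where 1 = 0;                       -- structure only

insert into logmnr_stage
select rownum, a.*
  from v$logmnr_contents a;

-- later: reassemble long sql_redo pieces in order
select sql_redo
  from logmnr_stage
 order by piece_order;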

That makes filtering kind of hard

Michael Friedman, May 24, 2006 - 11:23 pm UTC

We're hoping to filter and transform in our initial query by doing a join against some other tables.

Then we will put the data into its final location where we will store it in a permanent table.

We really don't want the performance hit of getting all the rows we aren't interested in and putting them in a temporary table.

Do you know any way to order the rows so we can paste the redo data back together?

Tom Kyte
May 25, 2006 - 1:35 pm UTC

put it into a global temporary table and then use it.

Or

with log_contents as
(select rownum R, a.* from v$logmnr_contents a)
select ......


Or

select ....
from (select rownum R, a.* from v$logmnr_contents a), .....
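Filling in those sketches with made-up names. tables_of_interest is a hypothetical filter table; the transaction id columns plus r keep the pieces in log order within each transaction:

with log_contents as
( select rownum r, a.* from v$logmnr_contents a )
select l.r, l.xidusn, l.xidslt, l.xidsqn, l.sql_redo
  from log_contents l,
       tables_of_interest t
 where l.seg_owner = t.owner
   and l.seg_name  = t.table_name
 order by l.xidusn, l.xidslt, l.xidsqn, l.r;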

Who's the culprit?

Apurva, January 24, 2007 - 8:58 am UTC

Tom,

I work in a large team; and someone truncated a table which took me 3 days to populate :(

Is there a way to find the culprit?

Does Oracle maintain any activity logs?

Thanks

can it be used to get the previous update

Umesh Kasturi, February 07, 2007 - 2:44 am UTC

Tom
We have a large DB and most of the tables have more than 30 million records. Daily transactions hit almost 2 million records.
The application will get a transaction log file, which is a list of records in a csv or text file. These records are read one after the other and compared with the DB values. In every case the present image and the previous image of a record need to be compared and audited based on some logic. Data gets flushed out based on some logic (i.e. the DB has only 3 months of live data; the rest is archived).

The DBA says we can use LogMiner to get the previous image of the record. Can we use LogMiner to achieve this?

It would be a great help if you could give a high-level explanation of this, with pros and cons.

Thanks in advance
Tom Kyte
February 07, 2007 - 5:24 pm UTC

that would be entirely unreliable.

First, the dba MIGHT be able to get it, not everything is available via logminer

Second, if your dba has the spare cycles to do something like this, well - hmmm. It would be a full time job pulling this stuff out.

No, this would not be viable.

Logminer Audit

umesh_kasturi, February 13, 2007 - 6:16 am UTC

Tom,
Thanks for the response above. I had the same opinion about NOT using LogMiner to do the audit. Can you provide me some important points as to why we cannot use this?
The only point I see is that sql_undo gives equivalent statements rather than an exact copy of the previous statements. But the DBA counters that, saying we can get all the column values via supplemental log data.
Can you help me by providing some key points as to "why we should not be using LogMiner for this auditing operation"?
Tom Kyte
February 13, 2007 - 9:58 am UTC

I cannot believe the DBA is fighting this.

do this:

ok smarty pants DBA, set it up - let us test your idea. I'll come back in a month with my requests for needed information.


And in a month, come back and ask them for some auditing information from the beginning of the time period, the middle, all over the place - maybe one audit record from each day - yeah, one audit event from each day.

Let us see them scramble to get the redo back on disk, figure out what log was current at the relevant time, sift through TONS AND TONS OF DATA...

this is just a silly idea, I cannot believe they really have thought about this from a practical standpoint.

do they really want to keep the redo for how many months or years you need to be able to mine back to? do they really want to test the ability to mine logs from way back when after upgrades? do they really want to complicate their lives so utterly as to create an entirely new job position?

Chief Log Miner - director of a staff of 50 whose sole purpose in life is to mine redo.


logminer does not support everything in the database
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#i1031090

It would be a horrible idea to go down this path.

I think I am not clear on this

umesh_kasturi, February 14, 2007 - 1:24 am UTC

Tom,
Forgive me for bugging you on this topic
Here is the business logic on high level

1) Audit will only be done by comparing the present image and the previous image of the record. We go only one level down, the present and the previous always

2) Our application runs on oracle 9i and above

3) Luckily,.....Luckily there are NO unsupported (by Logminer) data types used

DBA says we can use this because of the above scenario
Please,..Please comment

Tom Kyte
February 14, 2007 - 8:28 am UTC

tell you what


tell the DBA ok

they will hate themselves for totally messing up their lives.


I don't know what else to say - this is a horrible idea, a great way for the DBA to ensure they have a job for life - but a job that no human being would actually want.

that is all I'll have to say on the subject.

any substantiating facts

umesh_kasturi, February 22, 2007 - 2:44 am UTC

Tom,
I hope I was clear about the objective of the auditing above. Sorry I have come back on this, as the DBA wants it, but I am the one who has to implement it!

1) With redo log mining we get the **changed column** values (present + previous + primary key column).

We are interested in present + previous + primary key columns **which are changed**

2) We take those values and do some logic and if "ok" we write to a file

This happens on 64 million records in a 10 hour window

Can you please give me some real reasons to defend myself to say NO to this approach

Thanks in advance
Tom Kyte
February 22, 2007 - 8:48 am UTC

do it, just try it, test it, go for it.

I'm done here, you asked for my opinion, logminer is not the right approach if you ask me - and I'll say it over and over and over again and again.

when you try to do it, you'll likely "see the light", you'll see a big light bulb turn on over the DBA's head. It'll be illuminating.


DBA says this

umesh_kasturi, February 27, 2007 - 1:34 am UTC

BDA -- Business dev audit is the name of the application
Please Please read this and reply

We are interested in present + previous + primary key columns **which are changed** on a daily basis

I want to get only the changed data and the primary columns of the records

Only The present values + previous values + Primary key columns


Here is what my DBA says in support of the LOGMINER



I would want to process one day's worth of data at most, and the requirements actually
specify a trickle mode that would require you to look at data in almost real-time.

If you ask it for just committed data then LogMiner performance is terrible (like Streams).
From your comments, I'm pretty confident that this is what you are seeing with your tests.
Conversely, if you ask LogMiner for data in the original log order, then performance is terrific,
but the application must handle the data in the raw order (changes before commit/rollback).

The poor performance of Oracle Streams reflects its use of "commit-order" queries from LogMiner.
We're in the process of using LogMiner to extract data from Oracle into Hermes to take advantage
of the terrific raw-order selection speed available when LogMiner is used in raw-order mode.

We've discussed use of Hermes as a "re-ordering" tool for feeds requiring the commit order.
It's clearly feasible.
Tom Kyte
February 27, 2007 - 10:37 am UTC

so dba seems to have figured out "not the right tool"

Is Logminer the right tool ?

Jeeva, December 20, 2007 - 4:02 am UTC

Database :
----------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
--

We are building up a data warehouse project with retrospective effect, i.e. the client wants the baseline as of January 2007, to track changes in a few selected tables, and to build downstream datamarts. A procedure using LogMiner was designed and tested to mine archived logs.
In a day we may be able to mine at most 150 logs. As of now we have around 16000 log files to be mined. By the time we mine the pending archived log files, we may have generated another 3000 files or so. Somewhere down the line we may come in sync with current data. Once we come in sync, we plan to set up Streams and capture the changes.

My question is :
1. Is this a sensible solution to capture changes using the LogMiner utility?
2. Is there any other way / technology to achieve change data capture from archived log files?

Regards
Jeeva
Tom Kyte
December 20, 2007 - 10:05 am UTC

If you mean "we are taking the last year's archives and mining them to sort of catch up", sure, this is a reasonable way (not entirely clear, but I think that is what you were saying).

Not sure that I've seen someone with the last year's worth of archives that often.

Thanks for your input...

Jeeva, December 21, 2007 - 5:39 am UTC

Thanks Tom,
Your assumption is correct. We are mining last year's archived logs and catching up.
The reason I posed the question was that the present approach is painfully slow and I was wondering whether there is any other tool.
We will proceed with the current solution. Thanks for your input.

ps.
"Not sure that I've seen someone with the last year's worth of archives that often"
I was working in a government project which mandated us not to clear even udump / bdump / cdump files for three years!

Logminer automation

A reader, March 30, 2009 - 11:13 am UTC

Hi,
In relation to the original question about automating LogMiner on a log switch, I have also tried to automate LogMiner using a Linux shell script, as below:

My questions are: how do I automate this across log switches, since I cannot manually add every archive log that gets created?

How do I pick up the archive logs from the directory and populate the v$logmnr_contents view in this shell script?

#!/bin/ksh
echo " Type the STARTTIME(DD-MM-YYYY HH24:MI:SS): "
read STARTTIME

echo "Type User ENDTIME(DD-MM-YYYY HH24:MI:SS): "
read ENDTIME

echo $STARTTIME

sqlplus "/as sysdba" << EOF
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u02/arch/1_50662_581101112.arc',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u02/arch/1_50706_581101112.arc');

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

set lines 100
set pages 10
col USERNAME format a10
col SQL_REDO format a10
col SQL_UNDO format a10

select username,sql_redo,sql_undo FROM v$logmnr_contents;

EXIT
EOF

kindly help out,

Thanks.




Tom Kyte
March 30, 2009 - 5:19 pm UTC

I'm not really sure what you are trying to do, typically - when using log miner, one *knows* what log file(s) they want?

A reader, March 31, 2009 - 3:28 am UTC

Hi,
I am creating a loop in the shell script so that the archive logs can be picked up automatically to populate v$logmnr_contents.




Thank you.
Tom Kyte
March 31, 2009 - 8:09 am UTC

and again, I don't see the point, you don't want to load all of your archives in general - you would have a goal, a point in time you are interested in looking at and then you query v$archived_log to figure out what you wanted.

If you want help writing a shell script - please find another resource - come here for "inside the database" help.

But probably, you want v$archived_log - with a TIME BASED filter
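For example, something along these lines (the dates are placeholders):

select name, sequence#, first_time, next_time
  from v$archived_log
 where first_time >= to_date( '31-03-2009 00:00:00', 'dd-mm-yyyy hh24:mi:ss' )
   and first_time <  to_date( '31-03-2009 06:00:00', 'dd-mm-yyyy hh24:mi:ss' )
 order by sequence#;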


response

ayo, March 31, 2009 - 9:33 am UTC

Hi,

Thanks for the reply
Using an Oracle 10g database in archivelog mode,
I actually want to automatically load all the archive logs generated, to populate the V$LOGMNR_CONTENTS view, using a procedure or a shell script as you said.

I just need a clue on how to go about it, and

2. why is it that when I run the query below I get the following error, even though I have granted select to the user and also the role:


SQL> select * from v$logmnr_contents;
select * from     v$logmnr_contents
                  *
ERROR at line 1:
ORA-00942: table or view does not exist

Kindly help out

thanks



Tom Kyte
April 01, 2009 - 7:08 am UTC

1) you want me to write a shell script? No, I don't do that.

and no, you actually don't want to do what you suggest, it would PUT WAY TOO MUCH INFORMATION into the logminer contents table, making this pretty much not useful.

Suggestion: query v$archived_log and in a loop attempt to load whatever happens to exist in there, using proper error handling to STOP loading archives when you get an error about "file isn't there". Just query that table, order by the date DESC (current date to older dates); when you hit an error about "file not there", then you know you have loaded them. I don't shell script unless I have to, and you rarely need to.

But, if you cannot figure out how to take a list of files from a directory and do something with them in shell - you need to back up and get some basics down on shell programming first.

2) User needs execute on dbms_logmnr to select from that particular view as well, since you need to call dbms_logmnr first - that would be a prerequisite to populate it.
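A rough PL/SQL sketch of that suggestion: newest archives first, stopping at the first load failure. The error handling is deliberately simplified; ideally you would check for the specific "missing file" error rather than catching everything:

declare
    l_options binary_integer := dbms_logmnr.new;   -- first file starts a new list
begin
    for rec in ( select name
                   from v$archived_log
                  where name is not null
                  order by completion_time desc )  -- newest first
    loop
        begin
            dbms_logmnr.add_logfile( logfilename => rec.name,
                                     options     => l_options );
            l_options := dbms_logmnr.addfile;      -- later files are appended
        exception
            when others then
                exit;   -- assume the archive is no longer on disk; stop loading
        end;
    end loop;

    dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );
end;
/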

ayo, April 20, 2009 - 2:12 pm UTC

Hi Tom,

Your script below has been very helpful, keep up the good work of knowledge sharing.

But how do I spool the name from v$archived_log and populate v$logmnr_contents with it so that I can get my output in a txt file? It is not working.

Kindly help out.
Below is my script:



#!/bin/ksh
echo " Type the STARTTIME(01-jan-2009 08:30:00): "
read STARTTIME



echo "Type User ENDTIME(31-mar-2009 08:45:00): "
read ENDTIME
sqlplus "/ as sysdba" << EOF

CURSOR c_logs IS
SELECT name
FROM v$archived_log
WHERE sequence# BETWEEN v_first_log_seq AND v_last_log_seq
ORDER BY sequence#;

BEGIN
FOR i IN c_logs LOOP
v_asm_logname := SUBSTR(i.name, 16);
v_log_seq := SUBSTR(v_asm_logname,4,5);
v_win_logname := 'orcl_arc'||v_log_seq||'.log';


ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS';
SPOOL C:\Listing_41.log



EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '02/20/2009 06:00'-
        ,ENDTIME => '02/20/3000 12:00'-
        ,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);






set lines 100
set pages 10
col USERNAME format a10
col SQL_REDO format a10
col SQL_UNDO format a10

SELECT username,-
SQL_REDO, SQL_UNDO ,to_char(timestamp,'DD/MM/YYYY HH24:MI:SS') timestamp-
FROM V\$LOGMNR_CONTENTS;
EXIT;
EOF
01-
SPOOL OFF

attached below is the detail for my archive logs:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/arch
Oldest online log sequence     55317
Next log sequence to archive   55326
Current log sequence           55326





Tom Kyte
April 21, 2009 - 2:33 pm UTC

I told you to stop using shell script and start using plsql.


I have no idea what you are trying to accomplish there.


this

...But how do spool name from v$archived_log and populate v$logmnr_contents with
it so that i can get my output in txt file, it is not working.
....

makes no sense to me at all.

performance issues

syed, February 08, 2010 - 4:22 am UTC

Hi,
We have a medical department database, and all the database users are complaining to me about performance issues: when a user tries to connect to the database the connection is very slow, and when they retrieve data it is also very slow. Please can you suggest how to tune issues like this?

Is dbms_logmnr.continuous_mine a good approach to change auditing?

A reader, February 17, 2010 - 4:32 pm UTC

Hello Tom,
If I'm interpreting your response above correctly, loading archive logs for LogMiner is a tedious, labor-intensive task.
Would your opinion be different if we used the "dbms_logmnr.continuous_mine" option? If not, can you explain why this approach is not preferred?
Also, I'm trying to use LogMiner and the query against v$logmnr_contents is hanging even after the "endtime" of the session. Is this a byproduct of the "dbms_logmnr.continuous_mine" option? If so, this doesn't seem to be the correct approach for providing near real-time auditing.
As always, thanks for the forum.
Tom Kyte
February 18, 2010 - 8:28 am UTC

why wouldn't you use streams or CDC (change data capture)???

Some customers use standard edition and can't create a staging db.

A reader, February 18, 2010 - 10:35 am UTC

Hi Tom,
Thanks for the alternatives. Our customers are on standard edition and we want to implement asynchronous CDC. Also, some of their db hosts don't have the capacity to create a staging db. Because of the same standard/enterprise edition limitations, we can't implement Streams.
Basically, we're looking at implementing either triggers on each table or parsing LogMiner data. Triggers will probably cause a performance issue, while LogMiner will allow us to schedule parsing of change data as well as parse past archive logs.
I'm now concerned that real-time capture of audit data using LogMiner may not be possible.  My query below on the v$logmnr_contents view has not returned for almost 24 hours:

13:03:03 SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-Mon-YYYY HH24:MI:SS';

Session altered.

Elapsed: 00:00:00.00
13:03:13 SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
13:03:13 >    STARTTIME => '17-Feb-2010 13:05:00', -
13:03:13 >    ENDTIME => '17-Feb-2010 13:30:00', -
13:03:13 >    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
13:03:13 >    DBMS_LOGMNR.CONTINUOUS_MINE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
13:03:13 SQL> select to_char(sysdate,'DD-Mon-YYYY hh24:mi:ss') from dual;

TO_CHAR(SYSDATE,'DD-
--------------------
17-Feb-2010 13:03:18

Elapsed: 00:00:00.03
13:03:17 SQL> select to_char(sysdate,'DD-Mon-YYYY hh24:mi:ss') from dual;

TO_CHAR(SYSDATE,'DD-
--------------------
17-Feb-2010 13:05:38

Elapsed: 00:00:00.01
13:05:38 SQL> select count(*) from v$logmnr_contents where rownum < 10;

Can you let me know how I can query the v$logmnr_contents view to populate a staging table and if my assumptions above are correct?

Thank you.


Tom Kyte
February 18, 2010 - 7:25 pm UTC

sorry, you'll find that you won't be able to do it with logminer unless you sit down and write all of Streams yourself.

Eg: you either pay a little and buy it (before you say "it costs a lot", I'd like you to take the total compensation package for 2 or 3 of your best developers - salary, any bonus, employer contributions to taxes, healthcare, etc, facility costs, etc - and compare).

Or you build it over the course of a couple of months/years and maintain it - lovingly of course - for the rest of your life - never upgrading anything (os, database, nothing) because you know that will involve yet more re-development.


Logminer is going to be tricky - not everything will be in there all of the time (you'll tend to miss things because we'll just say "sorry, cannot represent that right now"). You'll have to do log gap resolution, you'll have to parse lots of stuff (logminer is mostly for diagnostic inspection, it does not package things up neatly).

All I can say is "good luck".



Standard edition supports materialized views over a database link, use that if you want to replicate. Use streams if you want to replicate. Do not rewrite something that was written some 20 years ago and has 20 years of incremental refinement put into it.
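For completeness, a minimal sketch of the materialized view route; scott.emp and the database link src are placeholders:

-- on the source database
create materialized view log on scott.emp;

-- on the target database, over a database link called src
create materialized view emp_mv
refresh fast on demand
as
select * from scott.emp@src;

-- refresh on whatever schedule suits, e.g. from a job
exec dbms_mview.refresh( 'EMP_MV', 'F' );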

thank you

A reader, February 18, 2010 - 11:02 pm UTC

Tom,
Thanks again for the feedback. Your points (of course) are valid and to-the-point.

Fetching the data from v$logmnr_contents is very slow.

kiran, August 27, 2011 - 7:06 pm UTC

Hi Tom,

I understand that while fetching the data via V$LOGMNR_CONTENTS, files are read sequentially and that's why the response is very, very slow. Is there any way I can overcome this, as I will be using LogMiner almost every other day? Another solution would also be highly appreciated.

Thanks !
