Question and Answer

Tom Kyte

Thanks for the question, Lavanya.

Asked: July 14, 2002 - 3:42 pm UTC

Last updated: August 29, 2012 - 1:27 pm UTC

Version: 81721

Viewed 10K+ times

You Asked

Greetings Tom,

I have a few questions on LogMiner.

1) One of our developers wants to use the LogMiner utility. I have never worked with this utility before. Is it true that LogMiner needs to be run as sys/internal only?

2) If "NO", are there any other security concerns (like database passwords, etc.) which I need to look into before I grant execute privileges on all those packages?

3) Can we also see the DDL statements using LogMiner, or only the DML?

Thanks in advance

and Tom said...

Well, if you are concerned I would set him up a mini-instance somewhere to play with. Logminer does not need to run on the machine that generated the logs.

1) no, I never run logminer as sys/internal.

The schema that will execute the DBMS_LOGMNR_D package must have been granted execute on SYS.DBMS_LOGMNR_D, or have a role that is able to execute this package. By default, the EXECUTE_CATALOG_ROLE has the privilege to run this package.

You also need to have utl_file set up for them -- in order to export the data dictionary.
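For example, a minimal setup might look like the following (MINER is a hypothetical schema name here, and utl_file_dir is a static init.ora parameter in 8i, so changing it requires an instance restart):

SQL> grant execute on sys.dbms_logmnr_d to miner;
SQL> grant execute on sys.dbms_logmnr to miner;
-- or, more broadly:
SQL> grant execute_catalog_role to miner;

-- in the init.ora (takes effect after a restart):
utl_file_dir = /u01/app/oracle/logmnr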

2) not really -- I would suggest that you just use another database to do the analysis in, that way, you don't have to worry about things like this. You can export the data dictionary for them and away they go.

3) in 8i -- only the DML. In later releases, the DDL can be recorded into the redo logs, permitting logminer to easily see it.


Comments

What is a LogMiner

A reader, July 14, 2002 - 8:58 pm UTC

Is it an Oracle tool? What's it used for?

LogMiner

Lavanya, July 15, 2002 - 10:11 am UTC

Excellent !!!
Thank you very much, Tom.

error when building dictionary

ram, September 11, 2002 - 11:19 am UTC

hello Tom.

I get the following error when I am building the dictionary.
I am using 8.1.7, and here is the cut/paste from SQL*Plus:

ram@ORCL 8.1.7.0.0> begin
2 sys.dbms_logmnr_d.build ('miner_dictionary.dat','D:\RAM');
3 end;
4 /
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
ERROR -6532 ORA-06532: Subscript outside of limit
begin
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 2


Elapsed: 00:02:171.57

The utl_file directory is set properly. Anything else I need to set up to avoid this error?
Thanks in advance.

Tom Kyte
September 11, 2002 - 12:02 pm UTC

Article-ID:         <Note:124671.1>
Circulation:        PUBLISHED (EXTERNAL)
Folder:             server.Utilities.LogMiner
Topic:              Product Usage
Title:              Dictionary File Creation Fails with ORA-06532
Document-Type:      PROBLEM
Impact:             MEDIUM
Skill-Level:        NOVICE
Server-Version:     08.01.07.00.00 to 08.01.07.01.00
Updated-Date:       13-AUG-2002 11:25:32
Attachments:        NONE
Content-Type:       TEXT/PLAIN
Errors:             ORA-6532; 
Products:           5/RDBMS (8.1.7.0.0 to 8.1.7.1.0); 
Platforms:          GENERIC;  

Problem Description:  
====================

You attempt to create a LogMiner dictionary file in Oracle RDBMS version
8.1.7.0.0 or 8.1.7.1.0 using DBMS_LOGMNR_D.BUILD and it fails with an ORA-6532
"subscript outside of limit" error.

For example:

    SQL>execute sys.dbms_logmnr_d.build(dictionary_filename=>'xyz.txt',
        dictionary_location=>'/d03/oracle8i');

    ERROR at line 1:
    ORA-06532: Subscript outside of limit
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 777
    ORA-06512: at line 1


Solution Description:  
=====================

Do one of the following:

A. Obtain the fix for <bug:1529107> by upgrading to Oracle9i or applying the
   8.1.7.2.0 (or greater) patchset.


- OR -


B. If you are unable to apply the bug fix, you can workaround the problem by
   modifying the package body of DBMS_LOGMNR_D.BUILD as follows:

   ** Please note: this workaround involves making a small change to an 
      Oracle supplied script. You should only make changes to these scripts
      when directed to do so by Oracle Support


   1. Edit the file "dbmslmd.sql" located in the "$ORACLE_HOME/rdbms/admin"
      directory. 

      Change the line:

          TYPE col_desc_array IS VARRAY(513) OF col_description; 
          
      to 
           
          TYPE col_desc_array IS VARRAY(700) OF col_description; 

      Save the file.
       
   2. Run the newly modified script:

          SQLPLUS> Connect internal 
           
          SQLPLUS> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql

   3. Recompile the package body DBMS_LOGMNR_D:

          SQLPLUS> alter package DBMS_LOGMNR_D compile body; 


Explanation:  
============

The error occurs due to <bug:1529107>, which is fixed in Oracle9i and for which
a fix exists starting in the 8.1.7.2.0 patchset.

The number of objects handled by the DBMS_LOGMNR_D.BUILD procedure goes beyond 
the currently defined limit for VARRAY.


 

Reader

A reader, March 29, 2003 - 1:41 pm UTC

Oracle 9.0.1 Manual

LogMiner has also been enhanced in Oracle9i to provide comprehensive log analysis for additional datatypes. LogMiner now supports the following:

Clustered tables

Chained and migrated rows

LOB and LONG datatypes

Direct loads

Scalar object types

Data definition statements (DDL)

How does LogMiner track direct loads? Don't direct loads bypass the database cache, and hence generate no redo?

Tom Kyte
March 29, 2003 - 2:14 pm UTC

direct path loads always generate redo if the database is in archivelog mode.

bypassing the buffer cache doesn't affect redo generation at all.



Reader

A reader, June 01, 2003 - 12:48 pm UTC

From 8i concepts Guide
"
Advantages of Direct-Load INSERT
A major benefit of direct-load INSERT is that you can load data without logging redo or undo entries, which improves the insert performance significantly. Both serial and parallel direct-load INSERT have this performance advantage over conventional path INSERT.
"
Does this statement refer to no archive log mode only ?

Thanks

Tom Kyte
June 01, 2003 - 1:01 pm UTC

in archivelog mode, the table would have to be in "NOLOGGING" mode to avoid the redo (but the undo is skipped regardless).

in 9iR2 the database would also have to not be in "force logging" mode.

conventional path

Reader, June 01, 2003 - 6:51 pm UTC

If I put my table in NOLOGGING mode in an archivelog database, conventional path also does not generate any redo, right? In this situation, the major differences between direct and conventional path are that users will not be able to access the table during the direct path load, and also that it is fast because the direct path load does not have to check the free list for space availability. Is that correct? Thanks.

Tom Kyte
June 02, 2003 - 7:13 am UTC

conventional path always generates redo and undo.

Only "bulk" operations can be done nologging, things like

o insert /*+ APPEND */
o create table as select
o create index
o ....


users can access the table during a direct path load.

There are major differences between a direct path and conventional path load. check out the concepts guide and the Utilities (SQLLDR chapters) guide.
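If you want to see the difference for yourself, a quick sketch along these lines works (BIG_TABLE is just a placeholder for any convenient source table, and this assumes you can query v$mystat/v$statname; the 'redo size' statistic shows roughly what each statement generated):

SQL> create table t nologging as select * from big_table where 1=0;
SQL> select b.value redo_size
  2    from v$mystat b, v$statname a
  3   where a.statistic# = b.statistic#
  4     and a.name = 'redo size';
SQL> insert /*+ APPEND */ into t select * from big_table;
SQL> -- re-run the redo size query; the APPEND insert into the NOLOGGING table should show far less redo
SQL> commit;
SQL> insert into t select * from big_table;
SQL> -- re-run it again; the conventional path insert generates full redo and undo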

Logminer

Erik, June 02, 2003 - 5:08 am UTC

Hello,

I've done some tests with LogMiner ... on an 8.1.7.4 Compaq Tru64 box, 4GB RAM, 4 CPUs at 868MHz, SAN cabinet with 48 disks...

We generate 200 archived redo logs of 100MB each in one day.

Well, I can tell you it's impossible to use LogMiner... it is so slow ....

So I should say, use it only in a very exceptional situation ...

direct path load

Reader, June 02, 2003 - 12:35 pm UTC

Tom, you say
<quote>
users can access the table during a direct path load.
<quote>

But the oracle doc
http://downloadwest.oracle.com/docs/cd/B10501_01/server.920/a96652/ch09.htm#1008816
says,
<quote>
A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished.
<quote>

When you say users can access the table, did you mean queries are allowed, not DML?

Any comments.
Thanks.

Tom Kyte
June 02, 2003 - 12:38 pm UTC

query, yes. that is accessing the table.

conventional path loading

Reader, June 02, 2003 - 3:37 pm UTC

Please see below from oracle doc.
<quote>
Specify whether the creation of the table (and any indexes required because of constraints), partition, or LOB storage characteristics will be logged in the redo log file (LOGGING) or not (NOLOGGING).The logging attribute of the table is independent of that of its indexes.

This attribute also specifies whether subsequent Direct Loader (SQL*Loader) and direct-path INSERT operations against the table, partition, or LOB storage are logged (LOGGING) or not logged (NOLOGGING).

<quote>

To make sure I understand the concepts: if I had specified my table as NOLOGGING at creation time in an ARCHIVELOG database, does conventional path loading generate redo/undo or not? Thanks.

Tom Kyte
June 02, 2003 - 4:16 pm UTC

(see ABOVE where you asked this already ;)


conventional path loads ALWAYS
in ALL CASES
in EVERY CIRCUMSTANCE
generate redo and undo.

Got it finally!

Reader, June 02, 2003 - 4:36 pm UTC


Some log miner questions

Arun Gupta, November 12, 2003 - 10:26 am UTC

Tom,
I want to use log miner to try to debug some database issues. Since I have never used it before, I have some questions. We have Oracle 9.2.0.3.
a) Is it not possible to use log miner without turning on supplemental logging/minimal supplemental logging?

b) As I read from Oracle manual, minimal supplemental logging doesn't have a performance overhead on the database. Is the following the right command to turn on minimal supplemental logging?

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA

Once turned on, would it be preserved after database restart?

c) How can I enquire the status whether minimal supplemental logging is turned on in a database?

Thanks 

Tom Kyte
November 12, 2003 - 11:25 am UTC

1)
a) you don't need to, you get "more", but you get lots without it
b) yes, ALTER DATABASE settings persist; they are not "init.ora" settings.
c) query v$database
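In SQL*Plus those three steps could look like this (column name per the 9iR2 reference; the setting survives a restart because it is recorded in the database itself):

SQL> alter database add supplemental log data;
SQL> select supplemental_log_data_min from v$database;
-- YES means minimal supplemental logging is on, NO means it is off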

Few Log Miner Questions

Sami, January 19, 2004 - 7:15 pm UTC

Dear Tom,

Always thankful to you, because I am one of the people who has benefited the most from this forum.

1)"Logical Standby" is built-up on "Log Miner". Is there any other stuff which built based on(using) Log Miner?
How about Streams?

2)
In "10g", is there any change in the following restrictions?
The following are not supported (9i Log Miner):
– Simple and nested abstract datatypes (ADTs)
– Collections (nested tables and VARRAYs)
– Object Refs
– Index organized tables (IOTs)
– CREATE TABLE AS SELECT of a table with a clustered key

3) If we decide to implement Log Miner, then it is better to avoid IOT and cluster schema objects at design time itself. Am I right?

Expecting your response if time permits for you.


Tom Kyte
January 20, 2004 - 7:09 am UTC

1) streams......


2) these are the 10g restrictions:

Unsupported Datatypes and Table Storage Attributes LogMiner does not support these datatypes and table storage attributes:

BFILE datatype
Simple and nested abstract datatypes (ADTs)
Collections (nested tables and VARRAYs)
Object refs
XMLTYPE datatype
Index-organized tables (IOTs) with LOB columns
Tables using table compression

3) iot's are supported now (without lobs). yes, if log miner is something you want to use 100% of the time, you would have to avoid non-supported implementations.

missing data

A reader, May 11, 2004 - 4:29 pm UTC

Tom,
A good number of tables are missing data altogether in a certain schema. Is there any way to find out from the logs/LogMiner who deleted the records from these tables? We don't have auditing on any of the tables.

If not, is there any other way to find out who deleted the records?

Thank you


Tom Kyte
May 11, 2004 - 8:56 pm UTC

sure, logminer is a possibility. it is documented in the admin guide and I wrote a lot about it in Expert One on One Oracle.
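As a rough sketch of what that investigation could look like (the archived log name, dictionary file and SCOTT.EMP are hypothetical placeholders -- substitute your own; note that USERNAME/SESSION_INFO may be empty unless supplemental logging was enabled before the deletes happened):

SQL> exec dbms_logmnr.add_logfile( logfilename => '/arch/arch_1234.arc', options => dbms_logmnr.new );
SQL> exec dbms_logmnr.start_logmnr( dictfilename => '/tmp/dictionary.ora' );
SQL> select timestamp, username, session_info, sql_redo
  2    from v$logmnr_contents
  3   where seg_owner = 'SCOTT'
  4     and seg_name  = 'EMP'
  5     and operation = 'DELETE';
SQL> exec dbms_logmnr.end_logmnr;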

Output Confusing

Vinnie, May 28, 2004 - 8:58 am UTC

Tom I have the following output from LOGMINER:

GROUP 1
insert into "EVENT"("TIME","EVENT")
values ('17',EMPTY_BLOB());
Unsupported
delete from "EVENT where "TIME" = '17' and
ROWID = 'AAACIjAAQAAAB/aAAn';

GROUP 2
insert into "EVENT"("TIMELINE","EVENT")
values ('17',EMPTY_BLOB());

update "EVENT" set
"EVENT" = HEXTORAW('aced000573720028636f6d2e6a73696d732e636377732e63662e7265757361626c652e666f6d2e466f6d4f626a65637441dee70e90751ca
f0c000078707753000000010005312e382e310000006200000000010000002250454c4f5745525f4f52472e4c414e442e4551554950
5f47524f55502e3136313037000000280100000041ffffffff0000000040e5bd300000000078')
where "EVENT" = EMPTY_BLOB() and ROWID = 'AAACIjAAQAAAB/dAAc';

A JAVA app is executing this insert-for-update statement and then
updating the empty blob. The 2nd group works fine; the 1st group is the one in question.
The question I have is concerning the DELETE.
Our code does not have any DELETES, so ORACLE must be doing the DELETE as a result of an error condition during the initial insert. But if this is true, why is the record in the EVENT table with the same ROWID? I searched for all rowids using LOGMINER that match 'AAACIjAAQAAAB/aAAn' and only found the INSERT & DELETE.




Tom Kyte
May 28, 2004 - 11:28 am UTC

not sure what you are asking.

remember -- logminer is a logical interpretation of a binary redo stream. that delete isn't really a "delete", it looks to be synthesized as part of a rollback -- which logically deletes (it was a rollback, not a delete)

rowids are file/blocks/slot on block -- we just used a different slot on the block. later when the block fills up -- then we would waste cpu cycles to "organize it better". so you used a slot and rolled back -- we just won't reuse that slot and may never ever have to reuse that slot on the block.

Logfiles of Different database

Surendra Tirumala, May 28, 2004 - 12:07 pm UTC

Hi Tom,

Thanks for all the useful info about LogMiner...

I have a couple of questions, may be exactly not related to the topic of discussion here, but related to logminer.

When I tried to analyze the log files of a database (archivelog mode) by using LogMiner on some other database (noarchivelog mode), I hit ORA-01295. Both databases are 9203 and running on Solaris 8.

Can you please let me know if I am missing something?

After hitting this error, I decided to use LogMiner on the database the logfiles came from. I am hitting the error below. Here I am using the dictionary in the redo logs, as I don't have "UTL_FILE_DIR" set.

===================
ERROR at line 1:
ORA-01371: Complete LogMiner dictionary not found
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1
===================

Please let me know how can I overcome the problems mentioned above.

Thank you!
Surendra

Tom Kyte
May 28, 2004 - 1:21 pm UTC

[tkyte@tkyte-pc tkyte]$ oerr ora 1295
01295, 00000, "DB_ID mismatch between dictionary %s and logfiles"
// *Cause: The dictionary file is produced by a database that is different
// from that produced the logfiles.
// *Action: Specify a compatible dictionary file.


did you unload the dictionary and load it up for this other instance?

Followup to Previous

Vinnie, May 28, 2004 - 2:18 pm UTC

Rollback does sound logical so I performed a test!

rem
rem Oracle LogMiner Viewer Save Redo/Undo SQL Script
rem
rem Column Saved: SQL_REDO
rem Script Created: 5/28/04 1:56:05 PM
rem Database: PERF - SYS AS SYSDBA
rem
rem Time Range Start: 5/28/04 2:01:45 AM
rem Time Range End: 5/28/04 1:35:05 PM
rem
rem Query Date/Time: 5/28/04 1:55:26 PM
rem Query SELECT Statement: SELECT TIMESTAMP as "Timestamp",SCN,USERNAME as "Username",OPERATION as "Operation",SEG_OWNER as "Owner",SEG_NAME as "Table",SQL_REDO as "SQL Redo",SQL_UNDO as "SQL Undo" FROM SYS.V_$LOGMNR_CONTENTS WHERE ( SEG_NAME = 'TEST' )
rem


create table test (a varchar2(30), b number);
insert into "VINNIE"."TEST"("A","B") values ('TEST1','1');
insert into "VINNIE"."TEST"("A","B") values ('TEST2','2');
delete from "VINNIE"."TEST" where ROWID 'AAACJeAAVAAAAAKAAB'
For this test I performed a manual rollback.


truncate table test;
alter table test add constraint v1_pk primary key (b) using index tablespace vinnie;
insert into "VINNIE"."TEST"("A","B") values ('TEST1','1');
insert into "VINNIE"."TEST"("A","B") values ('TEST2','2');
insert into "VINNIE"."TEST"("A","B") values ('TEST2','2');
delete from "VINNIE"."TEST" where ROWID = 'AAACJfAAVAAAAAKAAC';
This was rolled back because of the PK.

In both cases I do not see anything like what I found above:
Unsupported
delete from "EVENT where "TIME" = '17' and
ROWID = 'AAACIjAAQAAAB/aAAn';
Mostly the inclusion of the "TIME" in the where clause of the delete statement.

My main concern is that the APP is doing this and not ORACLE, but the developers say they are not doing any deletes!


Tom Kyte
May 28, 2004 - 2:54 pm UTC

gotta see code. hate guessing.

Unloading dictionary?

Surendra Tirumala, May 28, 2004 - 2:46 pm UTC

Hi Tom,

You asked..."did you unload the dictionary and load it up for this other instance? "

Looks like I was under the wrong impression that the dictionary file I build on one instance can be used to analyze the log files from another instance.

Sorry, I am not able to find docs about how to build the dictionary file on one instance to analyze logfiles from another instance.

Can you please let me know some pointers?

Thanks,
Surendra

Tom Kyte
May 28, 2004 - 2:55 pm UTC

no, you have it right. you unload from one, load it in another.

hows about you show us what you did?

This is what I did...

Surendra, May 28, 2004 - 3:05 pm UTC

Here below is what I did..I am giving both the scenarios in which I got errors which I have mentioned in my original post.

Thank you

-----------------------------
DB_A: Log files to be analyzed
DB_B: LogMiner running

On DB_B:
========

1)EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
'/backup2/UTL/OSOST2', -
OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

2) Got logfiles from DB_A shipped to an area "'/backup2/UTL/OSOST2/" which can be accessed by DB_B

Then...

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST2/OSOST19227.log', -
OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST2/OSOST19226.log', -
OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST2/OSOST19228.log', -
OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/backup2/UTL/OSOST2/dictionary.ora');

And it gave "ORA-01295"...
Then I decided to analyze the logfiles on DB_A itself by building the dictionary in the redo logs... and did the following

On DB_A:
========

EXECUTE DBMS_LOGMNR_D.BUILD ( -
OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST1/OSOST19224.log', -
OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST1/OSOST19225.log', -
OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST1/OSOST19226.log', -
OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST1/OSOST19227.log', -
OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/backup2/UTL/OSOST1/OSOST19228.log', -
OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

And it gave ....

ERROR at line 1:
ORA-01371: Complete LogMiner dictionary not found
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1


Tom Kyte
May 28, 2004 - 3:43 pm UTC

the build routine should have been run on "A", not on "B".

On "B" you would have used start_logmnr with a parameter that pointed to the dictionary built *on A*. from A you would have pulled the redo and dictionary files.


The other error just means you didn't add all of the needed logfiles -- that command built the dictionary into the current redo log file and I don't know that you added that in that session.


Suggestion:

on "A", run the build
on "B", load that built file.

Got it cleared...

Surendra Tirumala, May 28, 2004 - 4:18 pm UTC

Hi Tom,
Now with your suggestions and a quick re-visit to the Fine Manual, I have got most of my doubts cleared. Yes, I didn't add the (online) logfiles in which my dictionary was built to LogMiner. I was under the impression that "somehow"/"someway" LogMiner would have knowledge of these logfiles.

Now I should be good to go.
I will get back to you if I have further problems.

Thank you for all the help!!!!

Should one periodically build the logminer dictionary file?

Anand, May 28, 2004 - 4:56 pm UTC

How often do you recommend that one build the logminer dictionary file in a 'typical' OLTP 24x7 production environment? Once a week, once a month?

Yes, I realize that the frequency ought to be proportional to the amount of 'development' that happens on the database (more specifically - DDLs, objects created/dropped). But I'm looking for some kind of rule of thumb, or an industry practice suggestion. Is there one at all?

Tom Kyte
May 28, 2004 - 8:06 pm UTC

well, logminer is the exception not the rule, I see it used to figure something out from time to time - not on a daily basis.

so, i build it when i need it. when I need to figure something out. if you are using logminer on an ongoing basis - I'd be interested in "why" (and you would need to rebuild after any ddl type of operation, unless you mine EVERY redo log and you have it go into there)

Regarding periodically build the logminer dictionary file

A reader, May 29, 2004 - 1:58 am UTC

Thanks for the response.

The reason I asked the question is - for example, say someone accidentally dropped a table and that you'd never built a logminer dictionary in your database thus far; wouldn't you be in trouble? How would you go about tracking this accident 'after the fact'? Since the table is now gone, the new dictionary file won't have it anymore..Am I missing something obvious?

Tom Kyte
May 29, 2004 - 11:26 am UTC

if you feel someone might be "accidentally" doing that -- I'd rather have auditing in place -- or have the DDL recorded in the redo logs. You don't need the dictionary for that.
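For instance, either of these would do it (the AUDIT statement assumes audit_trail has been enabled in the init.ora; the second statement is the 9i minimal supplemental logging discussed above):

SQL> audit table;                              -- records CREATE/DROP/TRUNCATE TABLE by all users
SQL> alter database add supplemental log data; -- minimal supplemental logging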

dictionary build - frequency

A reader, May 29, 2004 - 2:04 pm UTC

Thanks again. I'm not sure I completely understood. Sorry for persisting this route. I realize that auditing is the preferred way to go, in order to track 'DDL accidents'. However, I would appreciate it if you could please clarify the below scenario.

Say I've never built a logminer dictionary on my database thus far. A table 'JUNK' was created way back in the past (I don't know the exact date/time) and it was 'accidentally' dropped on May 28, 2004. How would you go about tracking this using logminer?

Now, when you create a dictionary 'after the fact' today; Oracle won't have the object names for the internal object ids. So even if you were to mine DDLs, all that you would get by feeding the archivelogs of May 28, to logminer is a not so meaningful statement like - "drop table Object#1234" ? Is that correct? I'm looking for a way for logminer to display a "drop table JUNK" after having mined the May 28th's logs. Will "DDL tracking" address this as well? The logminer documentation is not very clear about it..I'm not able to test it right now, as I don't have access to my database at the moment.

- Anand
PS : Like many others, I've benefitted a lot from this site. You seem superhuman. Wonder how you balance your life! You must be a very contented man, considering the number of people you help out :). There aren't many others like you. Way to go!

Tom Kyte
May 30, 2004 - 10:30 am UTC

tracking what exactly? the drop? -- in 9i we can have supplemental logging going on so you can have DDL logged right into the redo.  that would do it.  else you MIGHT be able to dig it out by finding the operation against obj$ (that is, knowing the obj# doesn't mean you'll definitely be able to find when it was dropped, just about when it might have been dropped).

If recovery from an accidentally dropped table is paramount, please put into place the necessary safeguards today!


but what would you be looking for is my question...  in order to get it back (not just to point blame) you'll have to do a point in time tablespace recovery at the very least (TSPITR) so just do that - open readonly before rolling forward, get the object id, find out when it may have been dropped -- then roll forward to retrieve it.

consider (out of the box here, nothing special -- not even in archivelog mode on this test database)


ops$tkyte@ORA9IR2> alter system switch logfile;
 
System altered.
 
ops$tkyte@ORA9IR2> drop /* hello */ table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2> select * from v$logfile;
 
    GROUP# STATUS  TYPE
---------- ------- -------
MEMBER
-------------------------------------------------------------------------------
         3         ONLINE
/home/ora9ir2/oradata/ora9ir2/redo03.log
 
         2         ONLINE
/home/ora9ir2/oradata/ora9ir2/redo02.log
 
         1 STALE   ONLINE
/home/ora9ir2/oradata/ora9ir2/redo01.log
 
 
ops$tkyte@ORA9IR2> select * from v$log;
 
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
         1          1        260  104857600          1 NO  INACTIVE
     30430851 29-MAY-04
 
         2          1        261  104857600          1 NO  ACTIVE
     30627264 30-MAY-04
 
         3          1        262  104857600          1 NO  CURRENT
     30627405 30-MAY-04
 
 
ops$tkyte@ORA9IR2> alter system switch logfile;
 
System altered.
 
ops$tkyte@ORA9IR2> !cp /home/ora9ir2/oradata/ora9ir2/redo03.log /tmp/x.log
 
ops$tkyte@ORA9IR2> exec sys.dbms_logmnr.add_logfile( '/tmp/x.log', sys.dbms_logmnr.new );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec sys.dbms_logmnr.start_logmnr;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select scn, sql_redo from v$logmnr_contents;
 
       SCN
----------
SQL_REDO
-------------------------------------------------------------------------------
  30627408
set transaction read write;
 
  30627408
drop /* hello */ table t;
 
  30627409
Unsupported
 
  30627409
Unsupported
 



 

Thank you!

A reader, May 30, 2004 - 6:34 pm UTC

Thank you for elaborating with an example.

Neat that you can trap DDL with supplemental logging! I hadn't specified the DBMS_LOGMNR.DDL_DICT_TRACKING option..so I wasn't seeing something as 'readable' as in your v$logmnr_contents.

A question though - Would you recommend that one turn on supplemental logging on all the databases that you really care about? Will there be a noticeable performance impact?

<QUOTE>
If recovery from an accidently dropped table is paramount, please put into place the necessary safeguards today!
</QUOTE>

I assume you mean foolproof backup and DR procedures. Yes, we've got decent RMAN backups and Data guard in place.



Tom Kyte
May 31, 2004 - 12:47 pm UTC

Neither had I, in 9i, it just puts it in there (that was the "out of the box" comment -- i enabled nothing beyond the standard DBCA install here)

the logging is supplemental out of the box -- meaning there is more of it. You can turn on even more for supporting other features, but I would enable it only as needed.


No -- I meant AUDITING so you can catch the person doing it! One thing to be able to recover, another to be accountable.

Using LogMiner

Meghana, January 01, 2005 - 9:08 am UTC

Hi Guys,
I have been teaching myself to use the LogMiner utility. I was following Oracle Documentation for that. When I query the V$LOGMNR_CONTENTS view, I get statements which don't really make sense (shows I'm a novice at this). The documentation shows some output which is totally unlike what I am getting. Please tell me how do I get the exact statement that I had entered. I am able to mine the logs, but unless I find the statement which I had given, how would I be able to reverse it?

Also I would like to know the purpose of having the undo entry in the redo logs.

Tom Kyte
January 01, 2005 - 11:41 am UTC

do you have access to my book "Expert One on One Oracle" for you see -- I cannot see your monitor, so I don't know what unexpected output you might be getting from where I sit.... and I wrote about log miner extensively.... and why UNDO is protected by REDO (think crash recovery -- undo, which we read and write is buffered in the cache, anything buffered that we might need upon a restart from failure must be protected and the database protects buffered data in the cache with redo on disk)

Using Logminer....

reader, January 01, 2005 - 6:25 pm UTC

Hi Meghana,

Assuming that you are using 9iR2,

Check if you have enabled SUPPLEMENTAL LOGGING for the database?

sql>SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If not, you could do the following.

sql>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Meghana With LogMiner again

Meghana, January 02, 2005 - 5:57 am UTC

Hi Tom,
I am sorry for not giving enough info earlier. I am copying the spool file that I created in my LogMiner session.
I had given the commands:
SQL> alter database add supplemental log data;
SQL> exec dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs);
SQL> exec dbms_logmnr_d.set_tablespace('USERS');
SQL> exec dbms_logmnr.add_logfile(logfilename=>'c:\sql\viv\log\log1a.log', options=>dbms_logmnr.new);
SQL> exec dbms_logmnr.add_logfile(logfilename=>'c:\sql\viv\log\log2a.log', options=>dbms_logmnr.new);
SQL> exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.print_pretty_sql);

I then logged on as user U1 and gave the commands:

SQL> create table t1(col1 number(2));
SQL> insert into t1 values(90);
SQL> insert into t1 values(90);
SQL> insert into t1 values(90);
SQL> insert into t1 values(90);
SQL> commit;

Then I queried V$Logmnr_Contents as:
SQL> select scn, sql_redo from v$logmnr_contents where username='U1';

I got a lot of output related to a create table command, which I guess is related to a Data dictionary write.

       SCN SQL_REDO                                                             
---------- ------------------------------                                       
    181870 create table t1(col1 number(2)                                       
           );                                                                   
                                                                                
    181871 update "UNKNOWN"."OBJ# 60"                                           
             set                                                                
               "COL 3" = HEXTORAW('80'),                                        
                                                                                
               "COL 4" = HEXTORAW('c121')                                       
           ,                                                                    
               "COL 5" = HEXTORAW('3e6466                                       
           '),                                                                  

       SCN SQL_REDO                                                             
---------- ------------------------------                                       
               "COL 6" = HEXTORAW('80'),                                        
                                                                                
               "COL 7" = HEXTORAW('80'),                                        
                                                                                
               "COL 8" = HEXTORAW('80')                                         
             where                                                              
               "COL 1" = HEXTORAW('c11a')                                       
            and                                                                 
               "COL 3" = HEXTORAW('80') a                                       
           nd                                                                   
               "COL 4" = HEXTORAW('80') a                                       

       SCN SQL_REDO                                                             
---------- ------------------------------                                       
           nd                                                                   
               "COL 5" = HEXTORAW('3e6466                                       
           ') and                                                               
               "COL 6" = HEXTORAW('80') a                                       
           nd                                                                   
               "COL 7" = HEXTORAW('80') a                                       
           nd                                                                   
               "COL 8" = HEXTORAW('80') a                                       
           nd                                                                   
               ROWID = 'AAAAAKAABAAAFxeAA
           A';

For the actual inserts I got the following output:
       SCN SQL_REDO                                                             
---------- ------------------------------                                       
               ROWID = 'AAAAAIAABAAAF5ZAA                                       
           E';                                                                  
                                                                                
    181874 commit;                                                              
    181878 set transaction read write;                                          
    181878 insert into "UNKNOWN"."OBJ# 66                                       
           17"                                                                  
            values                                                              
               "COL 1" = HEXTORAW('c15b')                                       
           ;                                                                    
    181879 insert into "UNKNOWN"."OBJ# 66                                       
           17"                                                                  
            values                                                              
               "COL 1" = HEXTORAW('c15b')                                       
           ;                                                                    
    181880 insert into "UNKNOWN"."OBJ# 66                                       
           17"                                                                  
            values                                                              
               "COL 1" = HEXTORAW('c15b')                                       
           ;                                                                    
    181880 insert into "UNKNOWN"."OBJ# 66                                       
           17"                                                                  
            values                                                              
               "COL 1" = HEXTORAW('c15b')                                       
           ;                                                                    

    181882 commit;                                                              
Now I would like to know what the actual insert statements I wrote were. How do I find that out?

Tom Kyte
January 02, 2005 - 10:59 am UTC

well, you will never find out the ACTUAL inserts.  your insert into t select * from another_t will never ever appear in that fashion -- you would get N insert statements that taken together would do the same thing, but you'll never ever see your original SQL.


obj# 60 and 6617 you'll find to be SYS owned objects, they are dictionary objects. 


but, did you  

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';


to find the minimal set of logs you must load in order to have the entire dictionary loaded up in your log miner session?


Perhaps, if you are just tooling about and trying to play with log miner and are not doing tons of drop/create/drop/create -- you want to use dict_from_online_catalog.
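A minimal sketch of that variation, reusing the log file from your own session (mining with the online catalog requires that the mined objects still exist in the database with their current definitions):

SQL> exec dbms_logmnr.add_logfile( logfilename => 'c:\sql\viv\log\log1a.log', options => dbms_logmnr.new );
SQL> exec dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.print_pretty_sql );
SQL> select scn, sql_redo from v$logmnr_contents where username = 'U1';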
 

Thanx for that input on LogMiner

Meghana, January 04, 2005 - 4:38 am UTC

I am doing this on a test database and I only have 2 redo log files. Do I also need to load the archived log files into the LogMiner session? I forgot to run the 2 commands that you have mentioned. But I would try all this again and see what difference it makes.

I am also going through your book for LogMiner.

Thanx Tom. Wish You a Very Happy New Year(I know its a little late)

Tom Kyte
January 04, 2005 - 8:33 am UTC

typically log miner works only on archives - online redo logs are in a state of flux.

Logminer - Sql statemetns

Prasad, January 10, 2005 - 1:27 am UTC

Hello Tom,
I am new to Oracle technologies. I have a question about archive logs containing SQL statements. Would you please clear up my doubt: who writes SQL statements to the redo buffer, and when, and from where (the SQL statements are in the library cache, right)? I was unable to find this info in the Expert One on One Oracle book. Your help would be greatly appreciated.

Thanks with regards
Prasad.


Tom Kyte
January 10, 2005 - 8:39 am UTC

sql statements are not written to redo. (well, with supplemental logging, ddl and some other stuff could be but it really isn't relevant here)

redo is written to redo, changed bytes, binary data that says "if you change these bytes from X to Y on this block, it'll redo the operation"


logminer is a program that takes that binary redo information and says "ok, instead of changing these bytes from X to Y on that block, if you do this SQL we'll achieve the same effect"


if you read the chapter on redo, you'll see what is in redo, logminer is just a program to turn that binary goo into "logically equivalent SQL"

Thanx for your reply - please correct me if I am wrong

Prasad, January 11, 2005 - 1:13 am UTC

This is an unbelievable site. It has cleared my doubt. Please correct me if I am wrong: based on the previous explanation, the Oracle server also works the same way (constructs the SQL statements from the binary info in the archive logs for DML statements) when doing database recovery with archive log files, right? Which means it will reconstruct the statements and redo the actions to recover the database.

Thanks a lot
Prasad.


Tom Kyte
January 11, 2005 - 8:58 am UTC

no, during recovery Oracle takes the binary information that says "bytes 0 through 55 on block 42 in file 17 should be 'faklfjafjadfkjdafjdas'" and does that.

it does not reconstruct the SQL and re-execute it, it just takes the binary information and applies it.

Thanks a lot

Prasad, January 11, 2005 - 11:43 pm UTC

Great help : ) now I am clear. You are so kind to answer my questions. You really rock.

Thanks with great regards
Prasad.


Hextoraw

Rory, March 28, 2005 - 1:53 am UTC

Hi Tom,

I tried re-executing the sql_undo in v$logmnr_contents
but either I encountered an error or the data is different
from the original.

SQL> INSERT INTO T1 VALUES ('ME');

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> COMMIT;

Commit complete.

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.

SQL> select * from t1;

NAME
----------
ME
ME
ME

SQL> delete from t1 where name='ME';

3 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from t1;

no rows selected


SQL> SELECT * FROM V$LOG;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
         1          1       2518  104857600          1 NO  CURRENT             5.9683E+12 28-MAR-05
         2          1       2516  104857600          1 NO  INACTIVE            5.9683E+12 28-MAR-05
         3          1       2517  104857600          1 NO  ACTIVE              5.9683E+12 28-MAR-05

    GROUP# STATUS  TYPE    MEMBER
---------- ------- ------- ----------------------------------------
         1         ONLINE  /u01/oradata/MPRT/redo01.log
         2         ONLINE  /u01/oradata/MPRT/redo02.log
         3         ONLINE  /u01/oradata/MPRT/redo03.log

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME=>'/u01/oradata/MPRT/redo01.log',OPTIONS=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.start_logmnr;

PL/SQL procedure successfully completed.

SQL> alter system switch logfile;

System altered.

SQL> SELECT SCN,SQL_REDO,SQL_UNDO,ROLLBACK
  2  FROM V$LOGMNR_CONTENTS;

       SCN SQL_REDO                                 SQL_UNDO                                   ROLLBACK
---------- ---------------------------------------- ---------------------------------------- ----------
5.9683E+12 set transaction read write;                                                                0
5.9683E+12 delete from "UNKNOWN"."OBJ# 10068" where insert into "UNKNOWN"."OBJ# 10068"("COL           0
            "COL 1" = HEXTORAW('4d45') and ROWID =  1") values (HEXTORAW('4d45'));
           'AAACdVAADAAB3YeAAA';

5.9683E+12 delete from "UNKNOWN"."OBJ# 10068" where insert into "UNKNOWN"."OBJ# 10068"("COL           0
            "COL 1" = HEXTORAW('4d45') and ROWID =  1") values (HEXTORAW('4d45'));
           'AAACdVAADAAB3YeAAB';

5.9683E+12 delete from "UNKNOWN"."OBJ# 10068" where insert into "UNKNOWN"."OBJ# 10068"("COL           0
            "COL 1" = HEXTORAW('4d45') and ROWID =  1") values (HEXTORAW('4d45'));
           'AAACdVAADAAB3YeAAC';

5.9683E+12 commit;                                                                               

# Now to put back those deleted rows, I did the following.

SQL> insert into "UNKNOWN"."OBJ# 10068"("COL1") values (HEXTORAW('4d45'));

ERROR at line 1:
ORA-00942: table or view does not exist

# Error, so I tried another one

SQL> INSERT INTO T1 values (HEXTORAW('4d45'));

1 row created.

SQL> select * from t1;

NAME
----------
4D45

I was able to insert back the value but it was different. It should have been "ME". What was wrong here? Thanks Tom.

My Database is version 9.2.0.3 

Tom Kyte
March 28, 2005 - 7:48 am UTC

it did not know what the types of anything were, that was just raw data.

ops$tkyte@ORA9IR2> select to_number('4d','xx'), to_number('45','xx') from dual;
 
TO_NUMBER('4D','XX') TO_NUMBER('45','XX')
-------------------- --------------------
                  77                   69
 
ops$tkyte@ORA9IR2> select chr(77), chr(69) from dual;
 
C C
- -
M E
 
ops$tkyte@ORA9IR2>

Assuming single byte ascii data, you can see what was there:

ops$tkyte@ORA9IR2> select utl_raw.cast_to_varchar2(hextoraw('4d45')) from dual;
 
UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('4D45'))
-------------------------------------------------------------------------------
ME


Might want to use a dictionary with that so it "knows" stuff. 

redo log and auditing

Jianhui, May 16, 2005 - 3:40 pm UTC

Tom,
I am curious what data is actually written into redo log when statements like these issued:
1)delete emp where deptno=10 , suppose it deletes 20 rows, does it record 1 delete SQL or 20 delete SQLs?

2)insert into t1 (select * from t2), suppose it inserts 100 rows, what is recorded in redo logs?

3)insert with /*+ append */ hint?

4)DDL statements?

Some tools use redo logs as the data source for auditing, your answer to above questions will be very useful for me to evaluate these tools and their methodology.
Best,


Tom Kyte
May 16, 2005 - 5:06 pm UTC

it records binary information, enough to undo what we did.

it would record the equivalent of 20 "delete where rowid=xxxxxxx"
it would record the equivalent of 100 "insert into values.... AT this rowid"

and so on. They are known as redo change vectors, that are binary data that tell us what to do to a block to redo the operation.

Since undo is logged in redo, pretty much all of the information is there (before and after data). Our streams based technology, data guard -- all based on this.

do log files contain session context information

Jianhui, June 06, 2005 - 2:28 pm UTC

Hello Tom,
Thank you for your comments! Since you mentioned that log files contain binary vector data, I was wondering whether they also contain session context related data, such as client machine name or IP, client program, OS user name, database user, etc. So I used a binary editor to open an archived log file, and I did find things like "sqlplus@myhostname" and an IP address text string in the log file. It seems that Oracle does save context information in the redo log.

So my question is: why does Oracle save this information in the logs? Shouldn't it be purely physical-level file#/block# information in the logs, regardless of any logical-level aspect of the database? Since all database changes eventually turn out to be physical file changes, it seems this context information is recorded just for some enhancement purposes, doesn't it? All I can imagine is that DDL/DCL changes might need this information for Oracle to redo those types of changes. Does Oracle redo DDL/DCL changes by replaying SQL statements instead of changing datafile blocks directly (like in the system datafiles), more like a logical standby resync? For DML changes, I guess Oracle does it directly by changing datafile blocks, more like the physical standby resync approach?
What are the other purposes of saving this information in the redo logs?

Many thanks for your time!

Tom Kyte
June 06, 2005 - 3:10 pm UTC

it saves some stuff -- for log mining

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96536/ch3103.htm#1117527

when asked "what goes into a log file for redo", the answer is as above.

redo log contents

Prasad, July 18, 2005 - 11:02 am UTC

Hello Tom
In the previous answer you mentioned that the UNDO (before) and after information is available in the redo log files. Would you please tell me what is being written to the UNDO tablespaces and who writes that information (like the log writer writes from the redo log buffer to the redo log files, and the database writer writes to the database files).

Tom Kyte
July 18, 2005 - 11:17 am UTC

When you do an insert -- you (your server process) generate UNDO that will delete that new row, to put the database back the way it was.

When you do a delete -- you generate UNDO that will insert that old row, to put the database back the way it was.

When you do an update - you generate UNDO that will update the row back the way it was.


Your server process generates this UNDO, it is cached in the buffer cache like any other segment and dbwr writes it to disk as needed.

Oracle log miner stores ROWID only

A reader, July 18, 2005 - 5:15 pm UTC

Tom,

This question is not related to any problem.

Just curious: why do the Oracle LogMiner view's sql_redo and sql_undo columns show the ROWID and not the PK of the executed SQL statement (in fact the SQL had a where <pk> = ... clause)?

We can always apply the undo/redo SQL statements (using LogMiner) if the table was not REORGANIZED (alter table move). Once reorged (during alter table move, the ROWIDs change), we cannot use the "sql_redo and sql_undo" statements on this table.

How do we use LogMiner when faced with this situation?

Tom Kyte
July 18, 2005 - 5:18 pm UTC

the primary key isn't necessarily in the redo stream at all. There are ways to "force" it in there.

What is your goal here -- it would be pretty "unusual" to need to mine the redo to restore a row long after a reorg was done (easy fix: stop reorging)
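One such way, in 9i and up, is supplemental logging of key columns -- a sketch (the table and log group names are just examples):

SQL> alter database add supplemental log data (primary key) columns;
-- or, for an individual table:
SQL> alter table scott.emp add supplemental log group emp_pk (empno) always;

With that in place the key columns are captured in the redo, so the mined SQL remains usable even after a reorganization moves the rows.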

Logminer has limited use

Des B, July 19, 2005 - 5:35 am UTC

Maybe I haven't had the occasion to exploit it fully, though I have used it to identify some transactions. It seems very useful in these sort of limited circumstances, where you can tie down the logfile you want to examine to one or two. If you had to deal with many logs I think it is less useful.

Tom Kyte
July 19, 2005 - 7:36 am UTC

logminer is best used with streams, meaning -- you don't use it as it contains way too much information for us to actually "process"

redo log contents

Prasad, July 19, 2005 - 6:24 am UTC

Thanks for your useful reply. Please correct me: the UNDO is available in the UNDO tablespace as well as in the redo log files (based on the previous answers).
Redo log files are only used in case of recovery or redo.
If we roll back the transaction, the before information (UNDO) comes from the UNDO segments, right? Your answer is greatly appreciated.

Tom Kyte
July 19, 2005 - 7:46 am UTC

UNDO is only available in undo segments.

REDO is what you find in redo logs. That UNDO is logged does not mean "undo is in the redo stream". What is in the redo stream is redo.

Upon crash recovery, we apply REDO to put the database back the way it was at the time of the crash (including undo) and then rollback from the undo segments.

Logminer Usability

Des B, July 20, 2005 - 5:43 am UTC

In response to my comments on Logminer you said it should be used in streams. Please could you expand on what you mean by this ?

Tom Kyte
July 20, 2005 - 6:45 am UTC

streams uses the logminer technology. streams, a piece of software, can manage and understand tons of data. Humans, carbon based lifeforms, do not do so well with that volume of information.

So a technology like streams is made possible by a technology like logminer.


and in 10g with things like

flashback transaction history (retrieve the exact UNDO sql for a transaction without going to logs)

flashback database

flashback whatever....


the need to mine redo logs for anything by *us* is pretty much not there (so much so that in the next volume of my book -- I'm dropping the log miner chapter altogether, it is a foundation technology for other technologies in my opinion)

Streams

Des B, July 20, 2005 - 6:57 am UTC

I'm sure you must know of Carl Dudley (UKOUG). When Logminer came out he was very enthusiastic about it, presenting a short paper on it at a conference. Without Streams it would seem his enthusiasm was a bit misplaced!

Tom Kyte
July 20, 2005 - 8:30 am UTC

No, not at all -- at that point in time, I was as enthusiastic (check out expert one on one Oracle - I talked a lot about it)

My point is that today, there are so many higher level (eg: easier) things to use, that I'm personally deprecating it from my inventory of things "to jump right on".

Point is: there are easier ways to use logminer than using logminer :)


analogy: you don't need to buy an HR system because you yourself can create emp and dept, the database is all you need. But, we buy HR systems because using the database as an HR system is "harder" than using a higher level thing.

Streams

Des B, July 20, 2005 - 11:26 am UTC

Tom - thanks, this is very interesting. I've tried 'googling' for 'streams' but couldn't find anything. Do you have a link?

Tom Kyte
July 21, 2005 - 7:24 am UTC

https://docs.oracle.com#index-STR

(i'd think to go to the oracle docs myself first....)

'(i'd think to go to the oracle docs myself first....) '

Des B, July 21, 2005 - 12:08 pm UTC

Oops, showing my ignorance ! Replication of course. Something I haven't had to use - yet. So Logminer is used to strip out the DML so that it can be applied to the replication sites more quickly, and with less traffic as it's far smaller than a logfile. Isn't this the same approach as Quest, with Sharedata ?

Was this the intended use for Logminer from the start ?

Tom Kyte
July 21, 2005 - 4:46 pm UTC

log miner is used to let us convert the binary redo stream (which would probably be smaller -- the redo stream would be) into logical change records which STREAMS can then apply.

Logminer was initially a thing to mine the logs with -- a tool

Why SESSION#, SERIAL# returns 0 & SESSION_INFO is NULL on 10g LogMiner?

Shailesh Saraff, August 31, 2005 - 2:38 am UTC

Hello Tom,

On Oracle 10g, when we query v$logMnr_Contents, SESSION# and SERIAL# returns 0 and SESSION_INFO shows NULL. All other columns display expected value. Problem occurs with only these 3 columns.

later again we tried to do the same and following entries were found in alert.log.

Wed Aug 03 11:31:51 2005
LOGMINER: setting Log lookback/checkpoint gap to 4095M
Wed Aug 03 11:31:51 2005
LOGMINER: Parameters summary for session# = -2147483332
LOGMINER: ReqParallelism = 0, EagerThreshold = 1
LOGMINER: StopMiningThreshold = 10M, MemorySize = 10M
LOGMINER: MaxLogLookback = 4095M
Wed Aug 03 11:33:23 2005
Errors in file d:\orant\admin\qo760\udump\qo760_ora_3176.trc:
ORA-00600: internal error code, arguments: [krvxbpns], [], [], [], [], [], [], []

Could you please help us resolve this issue?

Thanks & Regards,

Shailesh


Tom Kyte
August 31, 2005 - 1:24 pm UTC

utilize support for an ora-600, please.

Shailesh Saraff, September 01, 2005 - 3:28 am UTC

Dear Tom,

The ORA-600 occurred later, when we tried the next day. But could you please guide us: when we do get output from v$logmnr_contents (and the ORA-600 has not occurred), why do we get the mentioned columns as NULL? Are we missing something, or is this a bug?

Please let us know.

Regards,

Shailesh

Tom Kyte
September 01, 2005 - 5:05 am UTC

please utilize support for this -- I don't have your step by steps, nor an understanding of your system. support will collect all of that and diagnose the issue.

Adding new log group

Suvendu, November 24, 2005 - 7:31 am UTC

Hi Tom,

Could you please, elaborate options :
1. After dropping an existing log group (assume group 1), creating a new log group with the same group id, as in the statement below, is going to fail. Why?

SQL> alter database drop logfile group 1;   
SQL> alter database add logfile group 1 '/usr/oracle/dbs/log4PROD.dbf' size 10M; 

2. Where can we find the MAX LOG GROUP limit for a database?

Thanks a lot for answering me.

Regards,
Suvendu
 

Tom Kyte
November 24, 2005 - 9:00 am UTC

ops$tkyte@ORA10GR2> alter database drop logfile group 1;
Database altered.

ops$tkyte@ORA10GR2> !
[tkyte@dellpe ~]$ su - ora10gr2
Password: *
[ora10gr2@dellpe ~]$ rm /home/ora10gr2/oradata/ora10gr2/redo01.log
[ora10gr2@dellpe ~]$ logout

[tkyte@dellpe ~]$ exit

ops$tkyte@ORA10GR2> alter database add logfile group 1 '//home/ora10gr2/oradata/ora10gr2/redo01.log' size 10m;

Database altered.

ops$tkyte@ORA10GR2>



doesn't fail for me. v$controlfile_record_section shows you the max records permitted for your logfiles. 
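For example (the TYPE of interest should be 'REDO LOG'; RECORDS_TOTAL there reflects the MAXLOGFILES setting from when the controlfile was created):

SQL> select type, records_total, records_used
  2    from v$controlfile_record_section
  3   where type = 'REDO LOG';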

Log Miner without UTL_FILE

Tracy, March 03, 2006 - 7:45 am UTC

Is there any way of generating the dictionary.ora without having a UTL_FILE_DIR set? I want to generate a dictionary file but I cannot bounce the database to set the UTL_FILE_DIR parameter to a non-NULL value. This is a live 9.2.0 database.
Can I connect to this database from another one, for example, where UTL_FILE_DIR is set, and generate the file remotely as it were?

Tom Kyte
March 03, 2006 - 8:26 am UTC

the utl_file_dir would have to be set on the target database - it is the one that will be doing the writing.

can you generate it into the redo logs instead?

exec sys.dbms_logmnr_d.build( options=>sys.dbms_logmnr_d.store_in_redo_logs);


Log Miner store in redo logs

Tracy, March 03, 2006 - 8:47 am UTC

Maybe I will have to use this option then. My only concern is the amount of redo generated during the DBMS_LOGMNR_D.STORE_IN_REDO_LOGS process. I suppose I will just have to choose my time to run it. The reason I am going to run log miner in the first place is to try and discover the cause of increased update activity on the database! We have suddenly been generating much more redo than normal and getting 'Checkpoint not complete' warnings in a database where nothing obvious has changed recently. I have added more log files as a quick fix, but that's kind of papering over the cracks and I want to find out the underlying cause of this increase in activity.

Tom Kyte
March 03, 2006 - 11:04 am UTC

you could restore the database to your test environment and generate the dictionary there as well.

Log Miner trouble

Hoffman, December 03, 2006 - 2:57 pm UTC

I'm trying to get the DML from the redo information using LogMiner when I insert a BLOB into a table.

If the BLOB is smaller than 4K and bigger than 2K, the generated SQL is:

RECORD 1 set transaction read write
RECORD 2 insert into table values (1, EMPTY_BLOB());
RECORD 3 update table set f2 = hextoraw('e467daa...
RECORD 4 ...a9cc401d71') where ROWID = 'AAATKxAAVAAAAWtAAB';
RECORD 5 commit;

But the size of the SQL exceeds the limit for executable SQL DML and cannot be executed. I think this is related to some LogMiner bug, but I looked in Metalink and did not find any information about this issue.

I need some help about this...
Thanks a lot...

Automate execution of V$LOGMNR_CONTENTS

A reader, March 21, 2007 - 10:35 pm UTC

Tom,
Once LogMiner is set up and all the data is available in V$LOGMNR_CONTENTS, is there a way to run/execute the SQL_REDO contents automatically? In other words, how do I take all the inserts and run them so the data gets inserted into the appropriate tables? Or do I have to write the SQL out to a file and run that file using SQL*Plus or something like that?

Thanks
Tom Kyte
March 22, 2007 - 7:42 am UTC

you would write a program to read it and execute it, eg: a simple PLSQL block could do that using execute immediate
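
a minimal sketch of such a block (the schema filter is just a placeholder; note that sql_redo carries a trailing ';' which execute immediate does not want):

begin
    for x in ( select sql_redo
                 from v$logmnr_contents
                where operation in ('INSERT','UPDATE','DELETE')
                  and seg_owner = 'SCOTT' )   -- placeholder schema of interest
    loop
        -- strip the trailing ';' logminer appends to sql_redo
        execute immediate rtrim( x.sql_redo, ';' );
    end loop;
    commit;
end;
/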

Execute Immediate and Log Miner

A reader, March 22, 2007 - 8:55 am UTC

Tom,
I tried your suggestion of using execute immediate but I am getting the following error when the SQL string has a date clause in it.

1 begin
2 execute immediate 'insert into t values(TO_DATE('21-may-2006 20:07:27', 'dd-mon-yyyy hh24:mi:ss'))';
3* end;
SQL> /
SP2-0552: Bind variable "MI" not declared.

I tried declaring a variable to capture the SQL and execute that and still get the same error.

1 declare
2 x varchar2(4000) := 'insert into t values(TO_DATE('21-may-2006 20:07:27', 'dd-mon-yyyy hh24:mi:ss'))';
3 begin
4 execute immediate x;
5* end;
SQL> /
SP2-0552: Bind variable "MI" not declared.

Is there a workaround for this?

Thanks
Tom Kyte
March 22, 2007 - 10:21 am UTC

well, you didn't try my suggestion, i said to read from the table and execute it.

You are having a problem setting the value of a character string literal, period.


declare
l_str long := 'hello ''world''';
begin
...


that puts the string:

hello 'world'

into the variable l_str. Use two quotes to get a single quote, or use the 10g quoting feature:

l_str long := q'|hello 'world'|';


to get a quote in there.

Works

A reader, March 22, 2007 - 10:54 am UTC

Thanks Tom. I used a loop to read the source SQL and then used execute immediate to execute each SQL statement. It works regardless of how many quotes there are in the statement.

I now have a question about LogMiner itself. We plan on using it to extract data (CDC) out of a VERY active production database (about 5 million transactions each day, i.e. 5 million rows expected out of LogMiner) to load into a warehouse. I would like to process the archive logs a couple of times per hour. Do you think LogMiner is the right tool to process this large a volume? What has your experience been? Have you seen or heard of success stories with this kind of volume, or do you suggest we use a commercial tool for the CDC (although even a commercial tool will most likely use the redo logs to do CDC)?

Thanks
Tom Kyte
March 22, 2007 - 10:57 am UTC

i would be looking at streams - you'll find log miner to be a huge pain to try to do this yourself. The code exists with streams to do the heavy lifting for you.


http://www.oracle.com/pls/db102/portal.portal_db?selected=3#index-STR

Chad, March 22, 2007 - 1:52 pm UTC

I was looking at Streams downstream capture for capturing data from the archive logs transferred to a warehouse server. This is so we can avoid the performance hit on the source production server. However, in reading the documentation I see that a DB link must be set up on the destination server that points to the source server. What is the purpose of that? The documentation says it is for administrative jobs the destination server will run on the source server. Can you explain the purpose of that? If all the data comes out of archive logs transferred to the destination server, where is the need to access the source server?
Tom Kyte
March 22, 2007 - 3:00 pm UTC

streams works by capturing the changes from the redo stream, converting that into a logical change record, queuing that into a message queue and then applying that to a destination database OR if you write your own custom apply, by letting you dequeue the message and apply it.

The links are to get data from one database to another - one of the 'goals' of replication with streams.

Chad, March 22, 2007 - 6:30 pm UTC

I am confused. I thought with downstream capture the capture and apply happens on the destination server. In downstream capture mode, getting data from one database to another happens via redo transport or FTP (DBMS_FILE_TRANSFER) and is really not dependent on db links.

The documentation says in this approach the link is used only for administrative purposes i.e. the destination uses the link to perform administrative actions on the source database.

LogMiner Query

MK, May 29, 2008 - 3:47 pm UTC

Hi Tom,

I just managed to query V$LOGMNR_CONTENTS to get some information about some of the transactions that have been taking place. What I really want to know is how long the database spent on each DML statement. Is this possible?
I am able to extract the transaction ID, the redo SQL and undo SQL, the session ID, table and username. This shows me the exact sequence of transactions, when transactions are being committed and what sort of statements developers are running. Are TIMESTAMP and COMMIT_TIMESTAMP the timestamps I am looking for?

We are currently facing a problem with a huge number of open database connections on the 3 nodes of a 10g RAC install. I am trying to get to the source of the problem and am hoping to see a lot of transaction IDs and maybe update statements taking too long to complete and holding on to row-level locks etc., hence causing the Java connection pool in the front end to grow out of proportion.

Can you please offer some troubleshooting tips and advice as to how I can identify some faulty trends in the REDO logs and also how I can record exactly what transactions are the longest running or if they could be running slow due to interference from other uncommitted transactions?



EXECUTE dbms_logmnr.start_logmnr(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.PRINT_PRETTY_SQL ); 



select username as USR,
       timestamp,
       commit_timestamp,
       (xidusn || '.' || xidslt || '.' || xidsqn ) as XID,
       sql_redo,
       sql_undo,
       table_name,
       session#
from   v$logmnr_contents 
where  username in ('XXX-USR');       





Thanks in advance,
MK

Logminer based redo log scraper

Shree, May 30, 2008 - 4:48 pm UTC

Greetings Tom,

I want your valuable opinion on this.

We are trying to create a solution which will make the latest transactions from an OLTP database available to a bunch of remote applications asynchronously, on a near real time basis. We are currently evaluating a number of commercially available redo-log-based replication technologies to do this, and we have developed a prototype using the LogMiner API. The preliminary results look really promising for the LogMiner approach: for 25G/hr of redo generation, the transactions could be queued with sub-second latencies. Currently we are focused only on the throughput and latencies, and have not yet examined the integrity of the extracted data in detail. I was going through the postings related to LogMiner on this page and found this: "typically log miner works only on archives - online redo logs are in a state of flux". This kind of puts our approach in question. Do you have any specific reasons for making this comment?

And do you think that it is wise to use the LogMiner API for developing a redo log processor?

Thanks in advance
Tom Kyte
May 31, 2008 - 6:53 am UTC

I think it has already been done.

We called it Streams and you already have it. It does that.

Ricardinho, June 20, 2008 - 2:30 pm UTC

hi tom;
in order to see dml statements in the redolog or archivelog files, is it necessary to issue:
alter database add supplemental log data;

because without doing that I can only see ddl statements.
Oracle 10g.
Tom Kyte
June 20, 2008 - 4:05 pm UTC

supplemental logging is adding more information to the log files in support of chained rows and various data structures.

We don't really capture the original source DML statement in the redo stream, we don't need it for anything, it is not of any real use.

You would use auditing to capture SQL statements and their bind variable values (fine grained auditing)

from the redo, we generate single row insert/update/deletes for you to look at - but the original source "dml" - we don't use that (cannot use it, it would be useless)
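
a minimal sketch of the fine grained auditing approach mentioned above - the owner, table and policy name are just placeholders:

begin
    dbms_fga.add_policy(
        object_schema   => 'SCOTT',                -- placeholder owner
        object_name     => 'EMP',                  -- placeholder table
        policy_name     => 'EMP_DML_AUDIT',
        statement_types => 'INSERT,UPDATE,DELETE',
        audit_trail     => dbms_fga.db_extended ); -- capture the SQL text and binds
end;
/

-- the captured statements (and their binds) can then be queried:
select timestamp, db_user, sql_text, sql_bind
  from dba_fga_audit_trail
 where object_name = 'EMP';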

Ricardinho, June 21, 2008 - 12:22 pm UTC

You said that supplemental logging adds more information to the log files, but I don't see any data before adding supplemental logging. I don't understand why.
Here is my test:

delete from branch.pricelist where rownum<2
1 row deleted.
commit;


BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO1.LOG');
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO2.LOG');
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO3.LOG');
END;
/


BEGIN DBMS_LOGMNR.START_LOGMNR (options => dbms_logmnr.dict_from_online_catalog);
END;
/


select SQL_UNDO from
v$logmnr_contents
where table_name='PRICELIST'
AND OPERATION='DELETE'
ORDER BY TIMESTAMP DESC


no rows selected.



BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

----------------------------------------------------------------------------------------------------------------------

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

delete from branch.pricelist where rownum<2
1 row deleted
commit;

BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO1.LOG');
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO2.LOG');
DBMS_LOGMNR.ADD_LOGFILE ('C:\ORACLE\ORADATA\XE\REDO3.LOG');
END;
/

BEGIN DBMS_LOGMNR.START_LOGMNR (options => dbms_logmnr.dict_from_online_catalog);
END;
/

select SQL_UNDO from
v$logmnr_contents
where table_name='PRICELIST'
AND OPERATION='DELETE'
ORDER BY TIMESTAMP DESC

SQL_UNDO
insert into "BRANCH"."PRICELIST"(........);
Tom Kyte
June 21, 2008 - 8:33 pm UTC

it adds more stuff, more stuff that we use. It doesn't mean it'll add anything you could or would see - but that other feature functions of the database would utilize.

I told you what it adds - additional information in support of chained rows, object types and other complex structures - for logical standby to use, for streams to use.

A reader, June 22, 2008 - 8:10 am UTC

So, as I showed above:
without adding supplemental logging, I cannot see inserts, updates and deletes. Is this right, Tom?
Tom Kyte
June 22, 2008 - 9:17 am UTC

supplemental logging adds support into the redo log for certain unsupported operations. It depends entirely on what is in the redo logs. If you were to probe the redo logs, you would likely find an "unsupported" record in there for that operation.


ops$tkyte%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> set echo on
ops$tkyte%ORA10GR2> column member format a45
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> delete from emp where rownum = 1;

1 row deleted.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> column member new_val M
ops$tkyte%ORA10GR2> select a.member, b.status from v$logfile a, v$log b where a.group# = b.group# and b.status = 'CURRENT';

MEMBER                                        STATUS
--------------------------------------------- ----------------
/home/ora10gr2/oradata/ora10gr2/redo01.log    CURRENT

ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.add_logfile( '&M' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.start_logmnr( options => sys.dbms_logmnr.dict_from_online_catalog );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec print_table( 'select scn,timestamp, operation, info, sql_redo, sql_undo from v$logmnr_contents' )
.SCN                          : 65179584
.TIMESTAMP                    : 22-jun-2008 09:03:03
.OPERATION                    : UNSUPPORTED
.INFO                         : In Memory Undo is unsupported
.SQL_REDO                     :
.SQL_UNDO                     :
-----------------

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> delete from emp where rownum = 1;

1 row deleted.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> column member new_val M
ops$tkyte%ORA10GR2> select a.member, b.status from v$logfile a, v$log b where a.group# = b.group# and b.status = 'CURRENT';

MEMBER                                        STATUS
--------------------------------------------- ----------------
/home/ora10gr2/oradata/ora10gr2/redo03.log    CURRENT

ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.add_logfile( '&M' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.start_logmnr( options => sys.dbms_logmnr.dict_from_online_catalog );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec print_table( 'select scn,timestamp, operation, info, sql_redo, sql_undo from v$logmnr_contents' )
.SCN                          : 65179596
.TIMESTAMP                    : 22-jun-2008 09:03:12
.OPERATION                    : START
.INFO                         :
.SQL_REDO                     : set transaction read write;
.SQL_UNDO                     :
-----------------
.SCN                          : 65179596
.TIMESTAMP                    : 22-jun-2008 09:03:12
.OPERATION                    : DELETE
.INFO                         :
.SQL_REDO                     : delete from "OPS$TKYTE"."EMP" where "EMPNO" =
'7902' and "ENAME" = 'FORD' and "JOB" = 'ANALYST' and "MGR" = '7566' and
"HIREDATE" = TO_DATE('03-dec-1981 00:00:00', 'dd-mon-yyyy hh24:mi:ss') and
"SAL"
.SQL_UNDO                     : insert into
"OPS$TKYTE"."EMP"("EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO")
values ('7902','FORD','ANALYST','7566',TO_DATE('03-dec-1981 00:00:00',
'dd-mon-yyyy hh24:mi:ss'),'1000',NULL
-----------------
.SCN                          : 65179596
.TIMESTAMP                    : 22-jun-2008 09:03:12
.OPERATION                    : INTERNAL
.INFO                         :
.SQL_REDO                     :
.SQL_UNDO                     :
-----------------
.SCN                          : 65179597
.TIMESTAMP                    : 22-jun-2008 09:03:12
.OPERATION                    : COMMIT
.INFO                         :
.SQL_REDO                     : commit;
.SQL_UNDO                     :
-----------------

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> ALTER DATABASE drop SUPPLEMENTAL LOG DATA;

Database altered.




but this was due to the use of a new feature in 10g (eg: instead of supplemental logging, you could turn off that thing or do something that cannot use it)

eg: if you perform a larger transaction, in memory undo cannot be used. Then, you would find the needed information in the redo logs. So again, it goes back to "it depends, supplemental logging adds additional information to the log file to support unsupported constructs"


ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> delete from t;

49738 rows deleted.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> column member new_val M
ops$tkyte%ORA10GR2> select a.member, b.status from v$logfile a, v$log b where a.group# = b.group# and b.status = 'CURRENT';

MEMBER                                        STATUS
--------------------------------------------- ----------------
/home/ora10gr2/oradata/ora10gr2/redo01.log    CURRENT

ops$tkyte%ORA10GR2> alter system switch logfile;

System altered.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.add_logfile( '&M' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.start_logmnr( options => sys.dbms_logmnr.dict_from_online_catalog );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec print_table( 'select scn,timestamp, operation, info, sql_redo, sql_undo from v$logmnr_contents where rownum <= 10' );
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : UNSUPPORTED
.INFO                         : In Memory Undo is unsupported
.SQL_REDO                     :
.SQL_UNDO                     :
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'I_TS#' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID"
= '7' and "DATA_OBJECT_ID" = '7' and "OBJECT_TYPE" = 'INDEX' and "CREATED" =
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'I_CDEF4' and "SUBOBJECT_NAME" IS NULL and
"OBJECT_ID" = '53' and "DATA_OBJECT_ID" = '53' and "OBJECT_TYPE" = 'INDEX' and
"CREATED
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'IND$' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID" =
'19' and "DATA_OBJECT_ID" = '2' and "OBJECT_TYPE" = 'TABLE' and "CREATED" =
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'SEG$' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID" =
'14' and "DATA_OBJECT_ID" = '8' and "OBJECT_TYPE" = 'TABLE' and "CREATED" =
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'C_TS#' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID"
= '6' and "DATA_OBJECT_ID" = '6' and "OBJECT_TYPE" = 'CLUSTER' and "CREATED"
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'I_FILE2' and "SUBOBJECT_NAME" IS NULL and
"OBJECT_ID" = '42' and "DATA_OBJECT_ID" = '42' and "OBJECT_TYPE" = 'INDEX' and
"CREATED
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'COL$' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID" =
'21' and "DATA_OBJECT_ID" = '2' and "OBJECT_TYPE" = 'TABLE' and "CREATED" =
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'I_TS1' and "SUBOBJECT_NAME" IS NULL and "OBJECT_ID"
= '43' and "DATA_OBJECT_ID" = '43' and "OBJECT_TYPE" = 'INDEX' and "CREATED"
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------
.SCN                          : 65184461
.TIMESTAMP                    : 22-jun-2008 09:07:33
.OPERATION                    : DELETE
.INFO                         : no supplemental log data found
.SQL_REDO                     : delete from "OPS$TKYTE"."T" where "OWNER" =
'SYS' and "OBJECT_NAME" = 'I_UNDO2' and "SUBOBJECT_NAME" IS NULL and
"OBJECT_ID" = '35' and "DATA_OBJECT_ID" = '35' and "OBJECT_TYPE" = 'INDEX' and
"CREATED
.SQL_UNDO                     : insert into
"OPS$TKYTE"."T"("OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT
_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","
GENERATED","SECONDARY") values
-----------------

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec sys.dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

Ricardinho, June 23, 2008 - 1:59 pm UTC

Thanks a lot Tom for this example.
What I understand from your example is:
1) Small inserts, updates and deletes, whose operation shows up as unsupported, cannot be seen without adding
supplemental logging. Is this right?
2) You also mentioned that "if you perform a larger transaction, in memory undo cannot be used".
Does that mean that "delete from t" doesn't generate undo?
I thought that every transaction generates undo.

Tom Kyte
June 23, 2008 - 3:35 pm UTC

1) depends on release and what features are enabled. Answer is once again "maybe, it depends"

2) "IN MEMORY" (as opposed to "not in memory") undo is not used for a large transaction.

delete from t -- if t has one row - will use in memory undo
delete from t -- if t has 1,000,000 rows - will NOT use in memory undo

Every transaction generates undo. 10g has a new optimization "in memory undo", that is all - it is still undo, but things are done in such a way that the redo stream is not capable of being mined anymore. Putting supplemental logging on will inject into the stream information needed to support that construct allowing us to mine the redo stream for that transaction again.

Cost of Building Log Miner Dictionary

Mahesh, June 25, 2008 - 4:36 pm UTC

I have scheduled a LogMiner dictionary build every hour for a Streams capture process. I wonder if there is any additional cost associated with it except for generating redo/archive logs?

Tom Kyte
June 25, 2008 - 5:06 pm UTC

why are you changing your schema so rapidly? seems you would have to rebuild this so infrequently in a well run system?

A reader, June 26, 2008 - 3:28 pm UTC

Informatica PWX is mining through this dictionary. In case there is some failure, maybe a network failure, PWX will mine all the previous logs, and it takes many hours to mine through all of that. So we are building the dictionary every hour.
Tom Kyte
June 26, 2008 - 4:40 pm UTC

I still don't understand why you build a dictionary every hour.

Sorry, this didn't explain why.


X is mining through this dictionary (I don't care what X is really)

In case there is some failure may be <something???>, X will mine all the previous log


why - why does X need to mine the previous logs unless a dictionary is rebuilt? I don't see the connection.

Log_Miner clarification(!?)

tuananhtran, July 21, 2008 - 11:36 pm UTC

Good morning, sir!
In your book, Expert One-on-One Oracle, in the appendix chapter on Using DBMS_LOGMNR, you wrote:
-------------------------------------------------------------------------------------
[quote=Expert One on One - from 7.3 to 8.17]
LogMiner works best on archived redo log files, although it can be used with online redo log files that are not active. Attempting to use an active online redo log file could lead to an error message or just confusion on your part, as the redo log file will contain a mixture of old and new transaction data. An interesting thing to note about LogMiner is that you do not need to analyze a log file in the database that originally created it. It doesn¿t even have to be the same exact database version (you can analyze version 8.0 archive files in an 8.1 database). You can move an archived redo log file to another system, and analyze it there instead. This can be quite convenient for auditing and looking at historical usage patterns, without impacting the existing system. In order to do this however, you must use
a database that is on the same hardware platform (byte¿ordering, word sizes, and so on will be affected by this). Also, you will want to make sure that the database block sizes are the same (or that the database doing the analysis has a block size at least as big as the database originating the redo log), and have the same character set.
[/quote]
----------------------------------------------------------------------------------------

Yes, I knew that - everything could (should) be analyzed, statistics collected, etc., on a clone machine matching the target DB's machine. But I really did not understand how to analyze, on the clone machine, an archive log file copied from the target DB's machine.
Suppose I have 2 DBs on Windows NT, with the same physical file locations, the same block size, and the same archive log and utl_file_dir locations. Analyzing the archive log files on the clone machine is being done by

[code]
meta@meta> begin
2 sys.dbms_logmnr.add_logfile
3 ('c:\meta_archive\UTD10_660649532_1.ARC',
4 sys.dbms_logmnr.NEW);
5 end;
/
PL/SQL procedure successfully completed.
meta@meta>
meta@meta> begin
2 sys.dbms_logmnr.start_logmnr;
3 end;
4 /

PL/SQL procedure successfully completed.
[/code]

where UTD10_660649532_1.ARC is the archive log file copied from the production database.

Am I wrong?

Thank you for your clarification!
Tom Kyte
July 22, 2008 - 11:15 am UTC

I do not understand what you are asking.

That you have two databases with the same file names is not relevant to log miner at all - they are entirely different databases. You would

a) extract dictionary from database 1
b) import dictionary into database 2
c) load log files in to database 2

if you wanted to analyze the logs of database 1 on database 2 - they are entirely different databases, entirely different..
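
a sketch of those three steps - the directory, dictionary file name and archive name are just placeholders, and utl_file_dir on database 1 must allow writing to that directory:

-- on database 1: extract the dictionary to a flat file
exec sys.dbms_logmnr_d.build( 'dictionary.ora', 'c:\meta_archive' );

-- copy dictionary.ora and the archived logs to database 2's machine, then on database 2:
exec sys.dbms_logmnr.add_logfile( 'c:\meta_archive\UTD10_660649532_1.ARC', sys.dbms_logmnr.new );
exec sys.dbms_logmnr.start_logmnr( dictfilename => 'c:\meta_archive\dictionary.ora' );

select sql_redo from v$logmnr_contents;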

Log file analyzed in other database

A reader, July 23, 2008 - 4:25 am UTC

Yeah, thank you sir!
[quote]
That you have two databases with the same file names is not relevant to log miner at all - they are entirely different databases. You would

a) extract dictionary from database 1
b) import dictionary into database 2
c) load log files in to database 2
[/quote]

That is all I need.
Reading the chapter in your book that I quoted above, because of my poor English I did not really understand it. I thought that I could take the log file from database 1, generate the information using the dictionary in database 2, and view what had occurred in v$logmnr_contents in database 2. But I was wrong.

Thank you again!

How can I find out the log files of my search interest ?

Chris, September 09, 2008 - 5:44 am UTC

hi Tom,

I dropped a log table on production by mistake.
I am using 10.2.0.2.

The following explains what happened based on time sequence.
1. 21 Aug 2008 17:30:00 Export Dump File Created.
2. 22 Aug 2008 18:35:40 (Not exact time) Dropped the table by mistake.
3. 22 Aug 2008 18:40:10 dropped table recreated with different version sql script(index tablespace), and transaction for the table started..
4. 23 Aug 2008 03:59:15 supplemental logging enabled for the first time using 'ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;'

I need to recover the data between points 1 and 3 (after generating the export dump and before dropping the table).
My system had archive logging enabled before the disaster. If I had ruined the whole database, I would try a normal database recovery.
But in this case, I have to recover only one table, for a specific time period.
So I decided to use LogMiner as below.

a)ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS';
b)ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
c)EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Then, I added the logfiles from 4039 to 4052, which fall into the target time scope of my search as below.
d)EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logminer/4039_617656652.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logminer/4040_617656652.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
~~~~~~~~~~~~~~~~~
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logminer/4051_617656652.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logminer/4052_617656652.arc');

e)EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '21-08-2008 13:00:00', ENDTIME => '22-08-2008 18:40:10', OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

What I got for the target table DML all looked like "insert into "UNKNOWN"."OBJ# 16850"("COL 1","COL 2","COL 3",~~~") values (HEXTORAW('32202020'),HEXTORAW('33363030'),HEXTORAW('3230303830383231'),~~);"

It must mean that the dictionary does not contain the table information from before the table was dropped.
But how can I know which archive log files have the previous table definition?
Here are my questions:

1) Is it only possible to recover data which was created after enabling supplemental logging?
If so, everything I tried is impossible.
2) There are many archive log files (more than 200) available in backup. But I cannot decide which are the exact log files of interest, which would also include the dictionary information from before the table drop.
3) I don't remember the exact time of the table drop.

Thanks for your help
Tom Kyte
September 09, 2008 - 8:05 am UTC

if the database was in archive log mode, do a tablespace point in time recovery on another machine and recover the database to right before you did the drop table.

Then export that, you'll have everything you need

and if you do not have what you need to do the point in time recovery, you do not have what you need to use log miner either - so you must have it.


The simplest approach will be the tablespace point in time recovery.
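
For example, on a restored copy on another machine, the point in time recovery with RMAN is just a sketch like this (the timestamp is a placeholder for "right before the drop"):

run {
  set until time "to_date('22-08-2008 18:35:00','dd-mm-yyyy hh24:mi:ss')";
  restore database;
  recover database;
}
alter database open resetlogs;
-- then export the table from this copy and import it back into production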

I don't remember the exact time of table drop.

Chris, September 10, 2008 - 6:38 am UTC

Thank you for your fast advice.
But I am concerned about how I can find out the exact point in time to recover to.
Tom Kyte
September 11, 2008 - 10:47 am UTC

look at the created time of the other table in your production database.

it was created just after right? so, the easiest answer is "just before that"
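
For example (owner and table name are placeholders):

select owner, object_name, created
  from dba_objects
 where owner = 'YOUR_SCHEMA'
   and object_name = 'YOUR_TABLE'
   and object_type = 'TABLE';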


using log miner to exec regression tests

Andre, November 17, 2008 - 7:00 pm UTC

Tom,

Here is the situation - say a long ETL process:
(1) you want to capture and fix a performance problem
(2) you have one shot, but the code is not instrumented
(3) you want to adequately capture the situation:
--- a. you code LOGON triggers to fire 10046 trace
--- b. you get an export of all relevant schemas before the ETL
--- c. you get all the trc files
Is this enough? Maybe to diagnose.
But what about a proof-of-concept test of the solution? So:
--- d. you ask for supplemental log data
--- e. you make sure that the DB is in ARCHIVELOG mode
--- f. you collect all the archive logs

Question is:
Is it feasible to process the archive logs with LOG MINER to capture all data coming in before a particular segment of the ETL, build a fragment of the DB (e.g. just 5 tables out of 300, i.e. 30GB out of hundreds of GB), and use LOG MINER to populate these tables so you can run regression tests once you determine what the performance root cause is?

STREAMS operates on that principle - but Oracle internals are involved - so can a mere mortal use LOG MINER to selectively apply the archive logs and populate the objects of interest up to the point in time where, say, one of 800 DataStage jobs starts, then examine the current process and revert to the same start point to try an alternative - which would be a custom written PL/SQL stored procedure called by DataStage instead of the generated code?

If this is not clear please do let me know.

Kindest regards
Andre
Tom Kyte
November 18, 2008 - 7:37 pm UTC

why wouldn't you just point in time restore into a test environment?

If I had a one time capture, I'd want 10g or above and ADDM/ASH/AWR to mine after the fact.

LOG MINER for regression performance tests

Andre, November 20, 2008 - 9:05 am UTC

Tom,

Thanks - SURE = but... as I said in my former note:
(1) I am OK for a point-in-time restore except that means requesting a new database to be created (which may imply an additional server) + more importantly lots more disk space especially with DW that is using say 3TB ...
(2) Furthermore - this requires special actions and effort from DBA team + Sys Admin - SAN management etc - further approval process etc., etc... may never be able to get it
(3) ADDM/ASH/AWR - all require additional licensing that very few organizations purchase - so I am told "Do not use what we are not allowed to use"

Considering the above, my plan is:
a) request a DBA to run a provided by me LOGON Trigger that will fire for specific user sessions (i.e. those executing Data-Stage ETL etc.) + also in specific time-windows = all such parameters stored in a control table - also provided by me to be created
b) request an AUD to be available "db,extended" - the AUD$ will provide SCN for a DML on the LOGON Trigger control table = which means precise start point to restore database (if this is feasible) but also to locate the spot in ARCH LOGS
c) 10046 level 12 tracing initiated by a LOGON Trigger
d) Request supplemental LOG DATA
e) Request full export + cold backup if possible prior to ETL nightly run

So my question is = how feasible it would be to utilize LOG MINER to create a point-in-time SUBSET of the database that is identified as causing performance problems..?

One could consider creating triggers for all DML (I/D/U) for tables of interest to rebuild the point-in-time for DB subset - but this may not be allowed - too high intrusion on DB + Apps etc...

Oracle Streams could be installed - and this would accomplish the same as it uses LOG MINING - but it is not something that can be done in a day and very few DBA teams and their managers would even consider this to facilitate performance tuning project.

Thanks for your valuable comments in advance
Andre

Tom Kyte
November 24, 2008 - 1:15 pm UTC

3) because your time and the time of all the engineers working on this is "free" - that doesn't cost a cent.


... So my question is = how feasible it would be to utilize LOG MINER to create a
point-in-time SUBSET of the database that is identified as causing performance
problems..?
...

it is pretty much beyond practical.


tell dba to use statspack and get what you can from that.

LOGMNR - for performance tuning project

Andre, December 01, 2008 - 2:30 pm UTC

It must be a language barrier (mea culpa) - although I was hoping I had communicated the issue.

Please forget my former report and allow me to try it again.
First the facts:
o DW is running on a large AIX IBM server and is using over 4TB in total - so there is no way to clone this DW and do point-in-time or SCN recovery, as there is only 500GB disk space available on the SAN and all other TEST and UAT databases use approx. 700GB
o AWR (or STATSPACK) is producing tons of data and it is rather difficult to map the SQL that appears to be poorly generated by Data-Stage from the rest - but more importantly one should be able to run a regression test on these few weakest links
o Export is not really an option as certain tables are processed many times before Data-Stage reaches the point of very poor performance

So - considering these constraints - I thought about LOGMNR.

In fact I ran several experiments, and it appears that I should be able to set up a subset of data that is relevant as far as performance tuning goes.

I agree that it is an effort - but I believe it is feasible

UNLESS ...

You want to tell me that I would be running a risk of failing to produce a correct subset of data - please tell me if there are any technical issues that, theoretically or practically, would make mining the archive logs produce INCORRECT data.

This is what I have tried thus far:
1. I wrote a LOGON TRIGGER for ALL schemas that Data-Stage connects through to DW and the Logon Trigger executes the following:
a. Generates a unique identifier for a TRC file so I can locate all TRC files later easily
b. Starts 10046 extended trace with level 12
c. Records which TRC file is generated by which Oracle session
d. Records the starting SCN
2. The above generates a few thousand trace files with a total of some 20,000 SQL statements generated by nearly 1000 Data-Stage jobs
3. I have automated the collection and analysis of these trace files, running TKPROF against them to locate the worst SQL - however these statements can only be run on the DEV platform, which is a small machine with a miniature DW - approx 3-4% of the PROD size
4. I have managed to understand some of the performance issues - but it would be 100 times better to actually be able to run several regression tests with data as on PROD - GOOD NEWS - in almost all cases I do not need more than 100GB disk space for a subset of all data required at the start of a poorly performing job - the issue is how to collect the data that is relevant and not much else

I trust you would agree with me that export would not be feasible, as one would need to suspend the operation of Data-Stage entirely, generate an export of a few tables of interest, and resume, only to suspend it again somewhere else, etc.

Sure - if the developers instrumented their Data-Stage jobs - perhaps this would work - either an export or a table copy - or am I missing something here?


Assuming I have not missed any better option here is my plan:
1. We run another Data-Stage process (approx total elapsed time = 11 hours)
2. Trace files of 10046-level 12 are generated for all sessions
3. We prepare a hot backup sometime at midnight + collect all archive logs for further analysis + control files and redo etc
4. I take a full export of the database
5. Trace files as well as archive logs are moved and compressed when the respective processes are finished (this is easy to find out from the database, and such UNIX processes (move + compress) can be generated and executed with Java + PL/SQL procedures - thank you for your examples on this one, Tom)
6. I also generate a dictionary from PROD that is running with supplemental log data
7. When the entire DW ETL process is finished I am now in a position to do the analysis and set up regression tests as follows:
a. Process all TRC files to locate those that run inefficiently - there may be between 15 and 25 such processes
b. Find all database objects that are required for the top ranking poor performer
c. Import data to these tables from the export file
d. Locate the START SCN = i.e. the one belonging to a session that runs this poor performer
e. Run LOGMNR for a data subset = i.e. from the time of the export to the start SCN and only for the tables that are required for a job being analyzed for tuning
f. As I have supplemental log data for all PK + Unique Keys + Foreign keys and I know that there are no duplicate records - I trust that I am able to restore all relevant tables (typically there are only 3-6 tables that are needed for a given Data-Stage job)
g. I run LOGMNR with an option to ignore ROWID = so that SQL can run against data subsets on a different machine
h. When the LOGMNR reaches the SCN when the Data-Stage job started (this SCN is generated with FLASHBACK procedure in the LOGON trigger and stored along with session identifiers and TRC file unique identifier) - then I reach the start point of the job that needs tuning
i. I take a cold backup of the DEV or UAT database on which I analyze performance problems
j. Now I look at the execution plan + wait events etc., etc., to find what the root causes are and come up with a solution that I can test on the same data as on production - I have to accept the fact that the other objects are empty but it is out of necessity as I do not have free hardware like 4 or 5TB on a SAN
k. When I am done with one and I have a POC demonstrating that a job that used to take 37 minutes with SQL generated by Data-Stage can execute in 46 sec (BTW - this is an actual accomplishment, not wishful hoping), then I move to the next one down the ranking order - which means truncating tables and reloading with LOGMNR a new subset of data for my JOB#2
l. When I am done with say the top 12 and see that I am reaching a point of quickly diminishing returns the tuning POC stage is done.
m. NOW = If it is OK to rerun all regression tests on a PROD during the weekend maintenance window - we can now take cold backup of the PROD + compress all DBF files that typically reduce the allocated disk space by 90%, so now it is feasible to reload the old hot backup (prior to start of the performance project) - and run the same regression tests except now I can actually do a partial recovery up to SCN and run POC solution against PROD exactly as it was captured at that time. If some additional tasks are needed - stats on some tables or indexes or adding a new index or converting a particular table to a partitioned object etc., etc., we do that to prove a combined performance gain. Then we reload the hot backup and archives + control files + redo logs and flashback area etc., so we can move into another change SCN recovery point and POC for tuned JOB#2 etc., etc.
n. We restore the PROD and developers go to their Data-Stage ETL procedures to replace the worst performing ones with the custom-made PL/SQL stored procedures.
o. We analyze the remaining jobs using ranking numbers say down to #50 to make sure that these would not be affected in any significant negative way by some redeployment of objects as per POC case studies and we are done.

Basically, what I really need to know is whether you see any risk in using LOGMNR the way I have described above - anything that would produce unreliable data contents for the subsets I was describing.

+

PS

As far as your other comments - re "our engineers' time is 'free'" - I do agree in principle, but I would be fired if I made such statements to the management.

I heard from a friend of mine about HOTSOS Profiler and followed up with some research - again another tool and another license fee etc., etc.

Here I do and will use TKPROF, which is free, and if HOTSOS can prove to me that their tools are superior in saving engineers' time I will bring it to management's attention.

I hope you understand that my request for a replica server + another 5TB SAN would not be well received as my time and developers time would indeed be cheaper.

Regards
Andre


Tom Kyte
December 02, 2008 - 7:16 am UTC

It is not a language barrier, I just see this as not being an efficient way to attack the problem. Doing performance tests on a subset shows what would happen if you ran your data warehouse as a subset of itself. It won't show what will actually happen - or even what happened (the bits and bytes will be on different parts of the disk, index clustering factors - changed, average row widths - different - every statistic will be different; not only will plans be affected - but even if you get the same plan by luck - you won't really understand how that plan has performed or will perform in real life).




.... So - considering these constraints - I thought about LOGMNR. ....



streams maybe, create your own custom apply routine, filter the data you want perhaps.

log miner directly - I already said "not really practical", you'll find missing bits - not everything you need.


you can create your own reports off of the statspack/awr data - I still think you would be best served there.


... but more importantly one should be able to run a regression test on these few
weakest links....


you cannot do that on a subset properly.

you can in fact find the information about the high load sql without using trace files (it is all in the v$ tables, and statspack snapshots have that information - you can tell in a given window what the high load sql was - a LOT easier than grep-ing through trace files). You have the power of SQL to find what you need.
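
For instance, a sketch of pulling the heaviest statements straight from the shared pool (the top-10 cutoff is arbitrary):

select *
  from ( select sql_id, executions,
                round( elapsed_time/1000000 ) elapsed_secs,
                buffer_gets, disk_reads,
                substr( sql_text, 1, 60 ) sql_text
           from v$sql
          order by elapsed_time desc )
 where rownum <= 10;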


You are free to try anything of course (in a review/followup - I scan, I skim, there is a lot of stuff here, I scanned it all, nothing more)



... As far as your other comments - re "our engineers' time is 'free'" - I do agree
in principle but I would be fired if I made such statements to the management. ...

that might be the best thing that could happen. It does point to the long term viability of your current place of employment.


.... and developers time would indeed be
cheaper.

...

nope, I don't agree.

LOGMNR - "missing bits" vs STREAMS ?

Andre, December 02, 2008 - 12:34 pm UTC

Dear Tom,

Thank you.

Just to clarify:
1. What I meant by a subset of data was not to be construed as a subset of rows but a subset of objects: i.e. instead of loading 300+ objects (4.7 TB and asking for a new SAN) I was proposing to load just those being processed by a Data-Stage JOB #1 (say 4 tables + 6 indexes = total 150GB) - using IMPORT to a UAT database
2. As various preceding JOBS perform DML (ODS style) it is impossible to establish the exact data content of these 4 tables, hence proposed LOGMNR to bring the data content from the point of the export (before the entire ETL started) to the SCN point when JOB #1 starts
3. I know STREAMS and have deployed it successfully on other systems, however in this case, I thought that as STREAMS use LOG MINING, whatever limitations LOGMNR may have STREAMS would inherit the same, too - so STREAMS would not cure the problem if LOGMNR had one
4. So, if these objects do not carry any special data types that STREAMS would fail to process, LOGMNR should not fail either - but you have warned me: "you'll find missing bits - not everything you need" - could you then please give me an example of such a "SHOW-STOPPER"? What should I watch for if I ever want to use the LOGMNR utility - now or in the future?
5. Let me further explain my performance tuning strategy:
a. I follow the same principles that HOTSOS have been advocating for the last 9 years - specifically I quote: "System-wide performance metrics provide insufficient information to enable you to draw cause-effect relationships" - and indeed when I look at AWR reports these fall short in comparison with HOTSOS Profiler (or TKPROF when a company does not buy a HOTSOS license)
b. One instrumentation for the entire ETL with easy to implement LOGON triggers provides me with all TRC files and therefore all ACTUAL PROD Execution plans so I can see the REAL performance problems
c. Next phase is to reconstruct the same problem on a UAT platform - with the same exact data loaded and processing with Data-Stage generated SQL and custom-made, well designed stored procedures.
d. In a typical case UAT (different server) may run a given ETL job even longer - so I hope you would agree that when I isolate the poorly generated SQL that PROD ran in 22 minutes - UAT ran 33 minutes and my custom-made stored procedure ran 44 sec - I can say that the root cause of the performance problem has been detected and corrected, with a 45:1 performance improvement ratio < i.e. = 22*60*33/(22*44) >
e. Sure - if say JOB #5 took 25 minutes to run on PROD but only 10 sec on UAT with the same data and the same SQL - it is clear that I need to examine other angles as SQL is not a problem here, but this is more of an exception than a rule. And YES - I do agree that in such cases an AWR report might help - when compared to AWR reports for UAT when the same JOB #5 was executed
f. Finally - I did say that once POCs for say the 12 worst ETL JOBS produce a combined gain of say 30:1, bringing the total PROD processing time of 6 hours down to just 12 minutes, we can then make arrangements for PROD regression tests during a few-hour weekend maintenance window.


Summary questions:
1. Do you share HOTSOS's views - specifically as quoted in 5.a) above?
2. Other than unsupported data types - what other issues could I get by using LOGMNR, or for that matter Oracle STREAMS, too?

PS - What I meant by my time being cheaper was in comparison to purchasing a UAT server exactly like PROD with another 5TB SAN.

Tom Kyte
December 09, 2008 - 9:04 am UTC

I'm telling you

"sure, anything is possible. How much code do you want to write? I want to write as little as possible and I will *always* go that path."


5a) yes, I agree that that is the right approach. However, via sql trace - via ASH - you would get that. You will not get it in a testing environment, for whatever you do in test will differ from production somehow, in someway. If you want a full subset of just some tables - use tablespace point in time recovery. Restore SYSTEM, restore UNDO, restore the tablespaces you need. Do a point in time recovery. done - simple - no code. Why not use this approach?? Why complicate it with a lot of stuff.


You will run into limitations with logminer, chained rows, unsupported datatypes, the fact that if something DOES go missing - it is gone - never to be seen again (if we don't have the redo for it, if we cannot reconstruct it from redo - you won't ever see it).


You do not need to get a server EXACTLY like prod, but if you want to measure the true IO's and what'll actually happen to the query in real life - you do need something the SCALE of prod. Many people use their disaster recovery site as a testing instance. You can get 5tb of storage - maybe not of the class you have in production - for less than the cost of this conversation between us.

while awaiting for new h/w resources...

Andre, December 10, 2008 - 9:44 am UTC

Dear Tom,

I do appreciate your comments - really and truly.

And - I am NOT trying to go down the most difficult path, either.

It would be great to be able to invite you to a conference with the management down here - quite possibly your authority would make an impact - but here is a reality check:
o FACT 1: SAN storage is unavailable, and even if a very good argument is put forward it takes time and money - weeks (many weeks) to get it approved + ordered + wait for delivery + install etc. The ERP and CRM and other OLTP database systems have failover servers - but these do not use anywhere near that disk capacity
o FACT 2: I thought of TSPITR right away - but went to verify, only to find out that for various reasons the DW database had not been deployed very well and various tables co-exist on several very large (hundreds of GB) tablespaces. So if I find a performance "show-stopper" that uses 4 tables (forget indexes for the moment) and these four tables actually reside on 3 tablespaces that measure nearly 3TB - I still have a problem. I wish I were able to do TSPITR on a fragment of TBS_A + a fragment of TBS_B etc, i.e. limit myself to the datafiles that hold these tables - but of course this would not work.
o FACT 3: This is a PROD DW database with very limited maintenance windows, so I just cannot run some performance measurements for the entire 11-hour ETL to locate the top ranking 30 ETL jobs (as far as poor performance) and then say to the management and the Sys Admin teams "We need to do a database reorganization to move these (say 30 out of 300 tables) to a new set of tablespaces no bigger than 300GB each + set up corresponding index tablespaces." Again, such an approach requires various approvals + reprioritization of ADM tasks etc., when all I can do is to "promise" good results. It is a catch 22 - if and when I do a POC and prove results that are significant - sure, many things can then be done


ASH-related QUESTION for you: if I want to use the ASH approach to complement AWR reports - what must be done to guarantee that I have ALL measurements that can possibly be taken and saved - if the ETL runs unattended between late Saturday evening and Sunday morning for 11 hours, thrashing millions of rows of data, with some 40000 SQL statements executed out of nearly 1000 Data-Stage jobs generating nearly 5000 10046 trace files (45GB of TRC) - so that nothing is lost?

I assume that come Monday morning the buffer would not have all of that available, right?

Should then ASH be configured to direct its own output to additional TRC files?

+

Finally re LOG Miner:

I checked all the Oracle docs for limitations etc, too and examined feasibility of writing PL/SQL procedures to reconstruct selected tables for regression tests.

I also verified that there are no data types used here that would be unsupported.

So the only issue appears to be "chained rows".

Assume that, while awaiting approvals + budget allocations + purchase orders + delivery and installation of a new SAN that would host a PROD DW copy or partial (if at all feasible) copy - which may easily take 2 months (or longer) - I am trying to come up with at least one, if not 2, POCs to show how this Data-Stage generated ETL can be improved in a dramatic way (like a 40:1 ratio).

And - I want to reconstruct the necessary objects for regression tests of one JOB using Log Miner. I believe that I would be able to spot "unsupported" operations, or I would see that ALL rows that I wanted to reconstruct have been processed successfully. Can I at least assume that it will be very clear from looking at the CONTENTS view that there are no such operations, and therefore reconstruction from the archives of, say, the required 4 tables can be done with a 100% GUARANTEE of being CORRECT as well as COMPLETE, i.e. NO MISSING BITS that would render the test DW erroneous?

From what I have studied thus far I think there are no surprises here - but I would GREATLY APPRECIATE YOUR CONFIRMATION + comments if time permits

Many thanks and regards
Andre

Tom Kyte
December 10, 2008 - 10:06 am UTC

fact 2 seems easy to fix? easier than you spending months developing this infrastructure.


... what must be done to guarantee that I have ALL measurements that
can possibly be taken and saved ...

nothing, it is on in 10g all of the time. But bear in mind - ASH is a sample, samples are never 100% complete. And sql_trace would be unreasonable. The ASH sample however would be better than good enough to perform an analysis. ASH is maintained in a 30 day rolling window (it is partitioned, you can control the window if you like). ASH data is stored in the database, never in trace files.
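
A minimal sketch of getting at that persisted data (the ETL-window timestamps below are just placeholders) - pull the AWR copy of the ASH samples for the Saturday-night run and rank the SQL by sampled time:

select sql_id,
       count(*) * 10 as approx_db_seconds   -- each persisted AWR sample represents roughly 10 seconds
  from dba_hist_active_sess_history
 where sample_time between timestamp '2008-12-06 22:00:00'
                       and timestamp '2008-12-07 09:00:00'
 group by sql_id
 order by approx_db_seconds desc;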


I cannot confirm your approach will work. It seems to me that if you could set up your test environment with the tables needed and then apply the log mined data to it.... You would be able to set up your test environment with the tables needed as of the point in time you needed them as of!

That is, if you have the ability to get the data you need and then maintain it with a redo stream - it seems you have the ability to get the data you need and then recover it using the redo (eg: back to a TSPITR type of story once again)
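
For reference, the automated TSPITR flavour of that in RMAN looks roughly like the following - a sketch only, where the tablespace names, the point in time and the auxiliary destination are all placeholders:

RMAN> recover tablespace tbs_a, tbs_b
        until time "to_date('2008-12-06 21:00:00','yyyy-mm-dd hh24:mi:ss')"
        auxiliary destination '/u01/app/oracle/aux';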


Mine a delete using logminer

Rosalyn, April 15, 2010 - 1:59 pm UTC

Dear Tom,

We noticed that one record was missing from a table.
To find out how it happened, it was decided to use LogMiner and see if there was an unauthorised action taking place; we do not have auditing set up.

After we loaded the logs and queried V$LOGMNR_CONTENTS, we were able to find the insert of that record, but were not able to find a delete, drop or truncate on that table.
Now my question is
If the table was altered to be in nologging mode and then the delete statement was run then will I get that delete statement from mining the log?


Your response is greatly appreciated.
I have learnt a lot from your website.

Thanks
Rosalyn



Tom Kyte
April 15, 2010 - 2:11 pm UTC

did you look for an update?

the delete would ALWAYS be logged, delete and update are always logged.
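
A minimal sketch of that check, assuming the logs are already loaded and the table in question is SCOTT.EMP (owner and table name are placeholders):

select scn, timestamp, operation, session_info, sql_redo
  from v$logmnr_contents
 where seg_owner = 'SCOTT'
   and seg_name  = 'EMP'
   and operation in ('UPDATE', 'DELETE', 'DDL')   -- anything other than the insert that touched the table
 order by scn;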

Mine a delete using logminer

Rosalyn, April 15, 2010 - 4:36 pm UTC

Thank you for your response.
That eliminates one doubt.

I do not see a commit on that table after that time.
In fact I see many commits, but the table_name column is null, so I cannot connect the commit to the insert statement.
The session_info is null and the table_name is also null in the rows returned for the query

select * from v$LOGMNR_CONTENTS where operation like '%COMMIT%'.

I get no rows returned for the query
select * from v$LOGMNR_CONTENTS where lower(sql_redo) like '%COMMIT%'

I do not have any evidence to show that the insert of the record was actually committed.
I don't know in what situation the session_info and the table_name can be null.
I am not sure how I can find out how the record went missing.

Would you be able to give me some insight in what situation the session_info and table_name can be null for a commit?

I forgot to mention that this is Oracle 10g.

Thanks
Rosalyn



LogMiner and Performance

Eduardo Márquez, April 26, 2010 - 11:02 pm UTC

Hello Tom,
About the potential impact of LogMiner on database performance: I have a situation in which we are to monitor changes and replicate information using LogMiner (via a 3rd party tool), so we are reading from the logs quite often (every 30 secs or so). What could be the wait events related to it? Does the I/O against v$logmnr_contents - which by nature goes physically to the online or offline logs - count at all towards consistent gets, thus affecting hit ratios? I've been comparing AWR reports without finding any meaningful difference between the time the tool wasn't there and now, when I already have a few tables with supplemental logging activated. My thought is that this tool works at such a low level that its impact on database performance is not measured in a traditional way.

Thanks in advance for your feedback.
Tom Kyte
April 27, 2010 - 6:50 am UTC

You would perhaps see increased log file parallel writes - by themselves they are not bad - but if you see correspondingly increased log file syncs, that would be affecting your application code.

... My thoughts are that this tool works in
such a low level that it's impact on database performance is not measured in a
traditional way.
...

everything that impacts the database performance would manifest itself in some tangible way.
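
One way to put numbers on it, as a minimal sketch: compare the system-wide totals for the two events mentioned above with and without the capture tool running (the view is cumulative since instance startup, so difference the values yourself):

select event, total_waits, time_waited_micro
  from v$system_event
 where event in ('log file parallel write', 'log file sync');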

Daniel Stolf, May 03, 2010 - 4:29 pm UTC

Tom,

I'm trying to learn about LogMiner on Oracle 11gR2 (32-bit CentOS Linux), but it looks like it's not showing me all DML on my test table. Am I doing something wrong?

Here's my test case:

====================================================
- Session #1, create table and insert first row:

SQL> create table stolf.test_table (
  2    col1 number,
  3    col2 varchar(10),
  4    col3 varchar(10),
  5    col4 varchar(10));

Table created.

SQL> insert into stolf.test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);

1 row created.

SQL> commit;

SQL> select t.ora_rowscn, t.* from stolf.test_table t;

ORA_ROWSCN COL1 COL2 COL3 COL4

1363624 0 20100305 0 0

====================================================
- Execute shell script to insert a thousand lines into table:

for i in `seq 1 1000`; do
sqlplus -S stolf/<passwd><<-EOF
insert into stolf.test_table (col1, col2, col3, col4) values ( ${i} , 20100429, ${i}, ${i} );
commit;
EOF
done

====================================================
- Session #1, switch logfiles:

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

====================================================
- Session #2, start logminer with continuous_mine on, startscn = first row ora_rowscn, endscn=right now. The select on v$logmnr_contents should return at least a thousand rows, but it returns only three rows instead :

BEGIN
SYS.DBMS_LOGMNR.START_LOGMNR(STARTSCN=>1363624, ENDSCN=>timestamp_to_scn(sysdate), OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + sys.DBMS_LOGMNR.COMMITTED_DATA_ONLY + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

SQL> select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL AND seg_owner = 'STOLF';

SCN
-
SQL_REDO
-
SQL_UNDO
-
1365941
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('378','20100429','378','378');
delete from "STOLF"."TEST_TABLE" where "COL1" = '378' and "COL2" = '20100429' and "COL3" = '378' and "COL4" = '378' and ROWID = 'AAASOHAAEAAAATfAAB';

1367335
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('608','20100429','608','608');
delete from "STOLF"."TEST_TABLE" where "COL1" = '608' and "COL2" = '20100429' and "COL3" = '608' and "COL4" = '608' and ROWID = 'AAASOHAAEAAAATfAAm';

1368832
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('849','20100429','849','849');
delete from "STOLF"."TEST_TABLE" where "COL1" = '849' and "COL2" = '20100429' and "COL3" = '849' and "COL4" = '849' and ROWID = 'AAASOHAAEAAAATbAAA';

3 rows selected.


SQL> BEGIN
  2    sys.DBMS_LOGMNR.END_LOGMNR();
  3  END;
  4  /

PL/SQL procedure successfully completed.

====================================================

Tom Kyte
May 06, 2010 - 12:42 pm UTC

timestamp_to_scn(sysdate)

things like that are not accurate, they are only accurate to within +- a couple of seconds.

use dbms_flashback.get_system_change_number


and you don't tell us about your redo log configuration at all - how many logs, archivelog mode, etc.
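
A minimal sketch of that suggestion, with the start SCN taken from the ORA_ROWSCN noted earlier and everything else as in the original test:

declare
  l_end_scn number;
begin
  l_end_scn := dbms_flashback.get_system_change_number;   -- exact current SCN, no rounding
  dbms_logmnr.start_logmnr(
    startscn => 1363624,
    endscn   => l_end_scn,
    options  => dbms_logmnr.dict_from_online_catalog
              + dbms_logmnr.committed_data_only
              + dbms_logmnr.continuous_mine);
end;
/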

Daniel Stolf, May 04, 2010 - 9:02 am UTC

Never mind my last question about LogMiner - I had forgotten to turn SUPPLEMENTAL LOG on ;)
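
For anyone reproducing the test, the supplemental logging statements look like this (the table-level one is optional; the table name is just the one from the test case above):

alter database add supplemental log data;                               -- minimal, database-wide
alter table stolf.test_table add supplemental log data (all) columns;   -- per table, logs all columns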

extra redo due to supplemental log,

A reader, December 16, 2010 - 1:53 pm UTC

Hello,

Using the historical information in v$archived_log I can find out how much redo was generated in the past; is there a way to find out how much more redo is going to be generated after I turn on supplemental logging on all of my tables?

In one of our live production databases we found that, after turning on supplemental logging (on most of the tables), the amount of redo doubled. We turned it on after Nov 29th 2010.

Here is the statistics:
(SL=>Supplemental Logging)

   INST_ID FLAG               AVG_REDO_SIZE AVG_LOGSWITCH
---------- ------------------ ------------- -------------
         1 After SL turned on        682.32       1739.56
         2 After SL turned on        682.32       1739.56
         3 After SL turned on        682.32       1739.56
         1 Before Oct 10             270.21        769.51
         2 Before Oct 10             270.21        769.51
         3 Before Oct 10             270.21        769.51
         1 Nov 10                    337.34        924.71
         2 Nov 10                    337.34        924.71
         3 Nov 10                    337.34        924.71
         1 Oct 10                     394.2        971.48
         2 Oct 10                     394.2        971.48
         3 Oct 10                     394.2        971.48


We need to plan ahead to accommodate more storage for the archivelog destination.


Tom Kyte
December 16, 2010 - 9:17 pm UTC

there is nothing beyond testing and measuring that I am aware of. A feature like Real Application Testing in 11g would be able to discern this pretty quickly. There is no formula that I've heard of.
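
A minimal sketch of the measuring side, using the v$archived_log history already mentioned above - daily archived-redo volume, so the before/after ratio can be read straight off the trend:

select trunc(completion_time)                        as day,
       round(sum(blocks * block_size) / 1024 / 1024) as redo_mb,
       count(*)                                      as log_switches
  from v$archived_log
 group by trunc(completion_time)
 order by 1;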

supplemental logging

David, March 08, 2011 - 3:56 pm UTC

Hi

Regarding supplemental logging, this is what the 10gR2 documentation states:

By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable.

How true is that? Because from your answers LogMiner seems so useful.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm

Thanks
Tom Kyte
March 09, 2011 - 7:14 am UTC

I would disagree with their statement "is not usable". I would agree with them if they said "is not 100% functional". Log Miner in general is still usable - you will find certain functionality to not be available however if it relies on the extra log information.
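
A quick way to see what level of supplemental logging a database currently has:

select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
  from v$database;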

archivelog filled up fast, logminer used, but can't find out why

A reader, August 27, 2012 - 2:16 am UTC

hi tom,

i am on 11g and this is a Dev DB.
my logfiles are 128mb each and they fill up pretty fast (every 5 min).

Since this is a dev db it should not be filling up so fast, so I went to use logmnr to open up the redo logs.

/rdsdbdata/db/SOCIALSG_A/arch/redolog-35898-1-781327165.arc 35898 27-AUG-12 07:11:56
/rdsdbdata/db/SOCIALSG_A/arch/redolog-35897-1-781327165.arc 35897 27-AUG-12 07:06:49
/rdsdbdata/db/SOCIALSG_A/arch/redolog-35896-1-781327165.arc 35896 27-AUG-12 07:01:45
/rdsdbdata/db/SOCIALSG_A/arch/redolog-35895-1-781327165.arc 35895 27-AUG-12 06:56:39


Inside the view v$logmnr_contents I do not see many transactions (only about 250 rows).

So how do I know which rows are actually filling up this 128mb? With only 250 rows of data, it can't be that big, can it?

Regards,
Noob

A reader, August 28, 2012 - 12:33 am UTC

hi tom,

is there any other reason why the log switches? I have looked at the blocks and block size in v$archived_log for the individual archive logs, and they are not even 128mb yet.

Regards,
Noob
Tom Kyte
August 29, 2012 - 1:27 pm UTC

you could have a parameter set to force log switches

http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams009.htm#CHDHFDGI

someone could be forcing a log switch

alter system switch logfile;
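
For the first possibility, the parameter that forces time-based log switches is ARCHIVE_LAG_TARGET; a quick check (a value of 0 means it is not forcing switches):

select name, value
  from v$parameter
 where name = 'archive_lag_target';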
