

Question and Answer

Connor McDonald

Thanks for the question, Fred.

Asked: February 24, 2017 - 3:14 pm UTC

Last updated: March 04, 2017 - 1:54 am UTC

Version: 11.2.0.4

Viewed 1000+ times

You Asked

The v$logmnr_contents view shows the transactions from some tables as "unsupported". It seems to not like some of the column data types... I need to know how it is that, in a normal RMAN recovery, these transactions are successfully applied to the database. How can I bring in these transactions?

Thank you

and Connor said...

"Unsupported" is logminer saying that it cannot resurrect the *command* that created the redo stream.

An RMAN recovery does not need those commands, it just needs the redo stream. A recovery is just a sequence of "make a change X to block Y", whereas Logminer is trying to re-interpret that into a sql statement.
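
Something along these lines shows what that looks like from the LogMiner side. Treat it as a rough sketch only: the archived log file name is a placeholder, and DICT_FROM_ONLINE_CATALOG assumes you are mining on the source database.

begin
  dbms_logmnr.add_logfile(
    logfilename => '/u01/arch/1_123_934567890.arc',   -- placeholder path
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);
end;
/

-- redo that could not be re-interpreted into SQL shows up as UNSUPPORTED
select operation, seg_owner, seg_name
from   v$logmnr_contents
where  operation = 'UNSUPPORTED';

exec dbms_logmnr.end_logmnr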

There are various reasons why LogMiner might show "unsupported", including:

- insufficient level of supplemental logging enabled (a quick check for this is sketched below)
- complex datatypes
- in-memory undo
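
For the first of those, a quick check (and fix, if you decide you need it) might look like this. Which level you actually need depends on what you plan to do with the mined SQL, so treat this as a sketch rather than a recommendation.

-- what supplemental logging is currently enabled?
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
from   v$database;

-- minimal supplemental logging, plus identification-key columns if you
-- intend to replay the mined SQL against another database
alter database add supplemental log data;
alter database add supplemental log data (primary key, unique) columns;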

Check the restrictions in the docs here

http://docs.oracle.com/database/121/SUTIL/GUID-D11CC6EF-D94C-426F-B244-96CE2403924A.htm#SUTIL1619


Comments

where

Feraidoon, February 27, 2017 - 2:31 pm UTC

Connor,

Thank you for your excellent explanation. I am wondering, then, where RMAN finds this "redo stream", since I only present to it these same redo logs that I also present to LogMiner. The reason I cannot use RMAN is that at the end of applying the logs, it forces RESETLOGS, and thus a new incarnation of the database, so I will not be able to apply further logs coming in from my source database. I need to continuously bring in and apply logs from source DB to target DB.
Connor McDonald
February 28, 2017 - 1:44 am UTC

"I need to continuously bring in and apply logs from source DB to target DB."

Then you don't need LogMiner. You want GoldenGate or Data Guard.

http://docs.oracle.com/database/121/SBYDB/toc.htm

http://docs.oracle.com/goldengate/1212/gg-winux/index.html

Thx - challenge

Feraidoon, February 28, 2017 - 3:10 pm UTC

Thanks again, Connor. Here's a challenge:

Data Guard - not an option, since the replication has to be across domains; the source and target DB can only see a piece of storage between them. There is no Oracle Net or IPC communication, i.e. the FAL, RFS, etc. processes of Data Guard will not cross over the domains to communicate and collaborate.

GoldenGate has similar issues and is not available.

Hence, I am looking into a 'customized' method of replicating using LogMiner... The only challenge is with a few tables that have 'unsupported' columns.

Any suggestions?

Thx
Connor McDonald
March 02, 2017 - 3:00 am UTC

If they can access shared storage, you could mimic DG with some scripting.

Node 1:
- archives logs in normal way

Node 2:
At regular intervals
- copies archived log to an alternate destination
- performs 'recover auto until cancel'
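
A rough sketch of the Node 2 apply step, assuming it is driven by cron and that the copied archived logs are visible where the recovery session expects them (the script name and paths are placeholders):

-- apply_logs.sql : run via "sqlplus / as sysdba @apply_logs" after each
-- batch of archived logs has been copied into place. Re-running it applies
-- whatever new logs have arrived since the last run.

-- startup mount             -- only needed the first time, or after a restart

set autorecovery on          -- the "auto" part: apply suggested logs without prompting
recover database using backup controlfile until cancel
exit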


mode ...

A reader, March 02, 2017 - 2:47 pm UTC

Thanks again Connor.

I'm wondering what you mean by "mimicking" DG. Are you suggesting not running the Data Guard services over Oracle Net, which coordinate sending, receiving, and applying logs, as well as resolving any gaps from delayed/missed logs?

In that case, which exact mode is the target opened in, and which command is it using to apply recent logs? Will it be in open mode?

Otherwise, my method is to:
1) Clone the target from source
2) Conditionally, mine transactions from source using LogMiner, and run those transactions on the target DB (a rough sketch of this step is shown below).
3) For "unsupported" tables, export them and import them

Thank you
-Feraidoon
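
A rough sketch of what step 2 above might look like with DBMS_LOGMNR. The schema name and log file path are placeholders, and COMMITTED_DATA_ONLY keeps only committed work; "unsupported" rows would still need the export/import route from step 3.

begin
  dbms_logmnr.add_logfile('/shared/arch_copy/1_124_934567890.arc', dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog +
               dbms_logmnr.committed_data_only);
end;
/

-- statements that can be replayed against the target
select sql_redo
from   v$logmnr_contents
where  seg_owner = 'APP_SCHEMA'            -- placeholder schema
and    operation in ('INSERT','UPDATE','DELETE');

exec dbms_logmnr.end_logmnr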
Connor McDonald
March 03, 2017 - 2:18 am UTC

You said there was shared storage between the two nodes.

"source and target DB can only see a piece of storage between them"

but no network connectivity between them.

So your primary database works as per normal, i.e.,

- writes redo
- when redo logs fill, they get archived to archived redo logs

You can "home grow" a DataGuard copy of that database by

- taking a hot imagecopy backup of the primary to a new set of files on the shared storage
- copying the archived redo logs to a new location in shared storage at regular intervals
- creating a new database based on this imagecopy
- putting it in recovery mode and applying the copied archive redo logs.

Then you get an exact replica of the primary.
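
In SQL*Plus terms, those steps might look roughly like this. The paths are placeholders, the datafile and archived log copies are done at the OS level (or by your existing file-transfer job), and an RMAN image copy backup would be an equally valid way to take the initial copy.

-- 1. on the primary: hot backup to the shared storage
alter database begin backup;
--    ... copy all datafiles to /shared/standby_copy/ at the OS level ...
alter database end backup;
alter database backup controlfile to '/shared/standby_copy/control01.ctl';

-- 2. archived redo logs copied to /shared/arch_copy/ at regular intervals

-- 3/4. on the other node: start an instance against the copied files,
--      mount it, and roll it forward as each batch of logs arrives
-- startup mount
-- recover database using backup controlfile until cancel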

Recovery

Feraidoon, March 03, 2017 - 3:10 pm UTC

Connor, thanks again for your input.
Yes, the only thing visible to the two instances across the domains is the storage. Actually, a CDSSF job will copy the files (archive logs) from source to target.

If there is any possibility of a "quasi" Data Guard setup, how will the standby know of the arrival of an archive log file? Note that there cannot be any Oracle Net connection whereby FAL or other processes can notify or poll the other instance, and the server redo logs are not useful.

Also, would there be any possibility of having the Standby in read-only mode?

Thanks
-Feraidoon
Connor McDonald
March 04, 2017 - 1:54 am UTC

"how will the Standby know of the arrival of the archivelog file"

It doesn't (by default). You'd need to check at regular intervals (cron etc). It's basically how people used to implement standbys before Data Guard existed. Any mechanism to allow that can be used (e.g. the primary writes a flat file of archive sequence numbers).

"Also, would there be any possibility of having the Standby in read-only mode? "

Yes, but you need to pause recovery to do so.
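
For instance, a minimal sketch of that flat-file idea (the file name, destination and scheduling are all placeholders) could be a cron-driven SQL*Plus job on the primary:

-- run on the primary at regular intervals; the standby-side job reads this
-- file and compares it with the last sequence it has applied
set heading off feedback off pagesize 0
spool /shared/status/last_archived_seq.txt
select max(sequence#) from v$archived_log where dest_id = 1;
spool off
exit

On the read-only point: the usual pattern with a manual standby is to stop the log apply, open the copy read only (which generally means it was built with a standby controlfile rather than a backup controlfile), run your queries, then shut it down, re-mount it and resume applying logs.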
