
Question and Answer

Tom Kyte

Thanks for the question, shanmugam.

Asked: October 07, 2002 - 10:41 pm UTC

Last updated: January 31, 2013 - 10:04 am UTC

Version: 9.2.0.1.0

Viewed 10K+ times

You Asked


Hi Tom,

1) What is advantage and disadvantage of Logical Standby database?

2) undo_management = Auto or undo_management = manual
Which is best method?

3) What is the use of following users in 9.2 database
ODM ,ODM_MTR,OE,OLAPSYS,PM,QS,QS_ADM,QS_CB,QS_CBADM,QS_CS,QS_ES,QS_OS,QS_WS,SH,WKPROXY,WKSYS,WMSYS,XDB


With best regards
shanmugam


and Tom said...

1)

key advantages:

o database is open READ WRITE while in "apply mode" (e.g. it can be used to generate reports, host another application if you like, whatever -- it is a fully functional database)

o the tables you are protecting from production can have ADDITIONAL structures added to them (extra indexes, materialized views, etc). These structures are ones you would not have in "production" since they would slow your transactional applications down -- but the reports you run against this failover copy would benefit from them.

key disadvantage:

o not all datatypes are supported. See the docs for a list of supported vs. unsupported datatypes.
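One way to check in advance which tables SQL apply could not maintain is the DBA_LOGSTDBY_UNSUPPORTED view (available in 9.2); a minimal sketch:

```sql
-- Run on the primary: list tables logical standby (SQL apply)
-- cannot maintain, typically because of unsupported datatypes.
SELECT DISTINCT owner, table_name
  FROM dba_logstdby_unsupported
 ORDER BY owner, table_name;
```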


2) I like auto.
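For reference, a minimal sketch of switching to automatic undo management (the tablespace name and datafile path are just examples; UNDO_MANAGEMENT is a static parameter, so it needs an spfile change and an instance restart):

```sql
-- create an undo tablespace (illustrative name/path/size)
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/undotbs1_01.dbf' SIZE 500M;

ALTER SYSTEM SET undo_management = AUTO       SCOPE = SPFILE;  -- static
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1' SCOPE = SPFILE;
ALTER SYSTEM SET undo_retention  = 900        SCOPE = BOTH;    -- seconds
-- restart the instance for UNDO_MANAGEMENT to take effect
```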

3)

ODM schemas -> Oracle Data Mining support

OE, PM, QS*, SH -> demo accounts for various features see:
http://docs.oracle.com/cd/B10501_01/server.920/a96539/toc.htm

OLAPSYS -> OLAP support (owns olap metadata structures)

WKPROXY, WKSYS -> Ultra Search support

WMSYS -> Workspace Manager support

XDB -> XML db support


see also
http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/win.920/a95493/startrdb.htm#1006823




Comments

Logical Standby Database

shanmugam, October 08, 2002 - 8:36 am UTC

The information is very useful to me.
Thank you very much Mr.Tom.




9.2 Database sample schemas

shanmugam, October 09, 2002 - 3:34 am UTC

Right now we are not using any of the sample schemas. Shall I drop the users, or is there an uninstall script?

Tom Kyte
October 09, 2002 - 7:03 am UTC

the accounts are locked upon install. they are harmless. you can leave them or not, it is your call
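If you want to verify their state, something like this (account names taken from the question above) would show they are locked:

```sql
-- check the status of a few of the demo accounts
SELECT username, account_status
  FROM dba_users
 WHERE username IN ('OE', 'PM', 'SH', 'OLAPSYS');

-- and to be explicit about one of them:
ALTER USER oe ACCOUNT LOCK PASSWORD EXPIRE;
```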

logical standby -- unsupported tables with functional indexes

Baqir Hussain, January 27, 2003 - 11:52 am UTC

Tom,
The Oracle9i R2 documentation says, in chapter 4 of "Data Guard Concepts and Administration" under "Unsupported Tables and Sequences":
* Tables used to support functional indexes.
What I understand from this is that Oracle 9.2 will not support tables with function-based indexes if we implement a logical standby database. If that is correct, please tell me what the workaround is, because we have a few tables that use function-based indexes heavily.
I would appreciate your help in this regard. Thanks

Tom Kyte
January 27, 2003 - 12:28 pm UTC

currently function based indexes are not supported with LOGICAL (sql apply) data guard.

PHYSICAL (redo apply) they are.

XMLTypes and rowids are also not supported.

Bala, January 27, 2003 - 6:35 pm UTC

Hi Tom,

As per the documentation, the CLOB datatype is supported, but XMLType based on CLOB is not supported in logical standby, along with function-based indexes and rowids.

Am I correct?


Tom Kyte
January 27, 2003 - 7:10 pm UTC

because objects are not....

we can save some time here:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96653/createls.htm#1035815

has the restrictions all listed..

Logical standby question

SR, January 23, 2004 - 10:24 pm UTC

Tom,

here is the situation I hope you can help me understand

We have an EMC process which cuts a business continuity volume (BCV) for us. The production database is synced, broken and detached. It's like an instance recovery process on the BCV. Using the online logs the instance is recovered on the BCV and mounted. At this point this database is as good as production (except it is slightly behind).

Now this is what I want to do

1) keep all (or most) production tablespaces in sync with production (meaning the ability to apply archive logs, not necessarily at pace with real time) -- thus I want to keep them for disaster recovery purposes. 1 day behind is fine, they just should be able to roll forward.

2) I have report users who would like to build some indexes and create some tables in this database. I was hoping I could provide them a brand new tablespace and let them do whatever in it without touching any bit of the rest of the tablespaces (system, rbs -- I am not sure how this will work)

3) while the system is operational, use the RMAN COPY command, or better just an O/S copy, to copy data files (as #1 means the files are really not written to, since the files are recoverable using prod archive logs)

4) the BCV will be cut once a day and I really don't plan on applying redo logs for the rest of the day -- user reports/indexes are all this instance will be used for (of course for disaster this will be a fallback -- we have a separate standby which is in sync with prod with a .25 delay)

I was hoping Logical Standby would work for me, because the traditional 'alter database open read only' won't allow table/index creation. In short, Logical Standby would serve two purposes: (1) allow creation of indexes and tables, (2) allow backups. If this works, it's a pot of gold! What do you think? I won't be applying any archive logs even though it is a Logical Standby, so the performance hit won't be an issue.

I really really appreciate your comments.

Again, thanks in advance and hope to hear from you at your convenience.

Regards

Tom Kyte
January 24, 2004 - 8:43 am UTC

why would you use bcv's for this?

You just described a standby database - we do this out of the box, this is supplied functionality. There is a logical standby which allows the redo to be applied via sql to the standby and users can in fact create additional structures like MV's and indexes.

bcv won't have anything to do with this situation.

Also, remember, a DR site should be miles away from production -- not "bcv" distances away.

you WILL be applying archive logs.

my original question

SR, January 24, 2004 - 10:02 am UTC

Tom,

I understand your concerns on bcv and why do it etc.
The situation is that way for now. I am looking for a step in correct direction by using the Logical Standby (LS).

Let's even forget about the fact it's a BCV. It's a recovered database -- ready to be opened (and then it will not be the same as production) OR kept at mount, applying redo logs, OR simply used to take a backup.

My goal is to use it such that 1) users can do indexes, tables 2) I can still take backup (knowing its still useful for production recovery).

And from your answer it seems like Logical Standby will do this -- correct? I will not fork over any more redo logs, but that's OK. Because these features of Logical Standby are what I am most interested in: (1) the ability to open the database, (2) you can still use datafiles of the LS over at prod. That's all.

No, its not really all things are done right here. But I am just looking for this thing to work. Then I will look at doing it "right" as you said by thinking through it.

Again, thanks for your help



Tom Kyte
January 24, 2004 - 1:18 pm UTC

You need to read the data guard docs. they tell you step by step what you need to do in order to set this up.

You CANNOT use datafiles of logical (it is logical, not a physical copy) on the production box. You will be backing up production in the same old fashion.

Tom, thanks

SR, January 24, 2004 - 7:23 pm UTC

That turned out to be a bit of a disappointment for me. I read the document and, like you mentioned, it's a new database, just with the same data integrity as the original.

I think it would be so much more interesting if I could make my data tablespaces read-only and open the database (I know it doesn't work that way now) -- however I don't want to lose the production connection of these tablespaces. Is there any option?

Again, thanks for the information

Tom Kyte
January 25, 2004 - 9:59 am UTC

You would have to use a physical standby, which can be opened read only -- but it will be ACCUMULATING redo logs, not applying them (so it is point in time), and you cannot create any auxiliary structures like indexes and MVs since -- well -- it is a PHYSICAL copy.

You'll need to figure out which is more important to you

a) restore to production from standby
b) add additional structures to standby

A and B are mutually exclusive by their very implementations.
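The read-only compromise on a physical standby looks roughly like this in 9i (a sketch; you alternate between applying redo and reporting, never both at once):

```sql
-- on the physical standby: stop redo apply, open for reports
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- ... run reports; meanwhile archived logs accumulate, unapplied ...

-- back to catching up: remount as standby and resume apply
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```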

Logical Standby Database

Earl Coombs, March 09, 2004 - 3:47 pm UTC

Hi Tom,
None of your links work anymore. When I click on any of your links like :

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96653/createls.htm#1035815

I get forwarded to the index page:

http://otn.oracle.com/documentation/index.html

This is not good!

Earl


Different Hardware

daniel, June 17, 2004 - 4:55 pm UTC

Tom,
for a logical standby database you still need to be on the same hardware/OS on both primary and standby, is that correct? If you had 2 different hardware/OS combinations, would you recommend Streams?

Tom Kyte
June 17, 2004 - 6:40 pm UTC

yes, you need to be the same.


if you want a DR site, you had better use the same hardware. think about it -- you are in failure mode, you want things to go "smoothly", the disaster has already happened, no need to compound the problems with "totally different stuff".

i would not recommend anyone use heterogeneous hardware for failover/DR

Standby

daniel, June 18, 2004 - 10:09 am UTC

This isn't for a DR. This will be used for reporting.

Tom Kyte
June 18, 2004 - 11:03 am UTC

streams would be appropriate and allow for heterogeneous environments.

Data guard - Physical and logical standby combo : switchover effect

A reader, June 25, 2004 - 7:49 pm UTC

Tom,
I haven't worked with Data guard logical standbys. A quick question though.

Say Production database P1 replicates to physical Standby P2. P1 also replicates to logical standby L1. Say I perform a SWITCHOVER from P1 to P2; does L1 need to be rebuilt (to enable P2->L1)? How about if I perform a Failover from P1 to P2?

Tom Kyte
June 28, 2004 - 3:05 pm UTC

That was a good question, one I didn't know the answer to right off (and I had guessed wrong too, good thing I asked first :)

the guys who wrote it say:

<quote>

If P1 is shipping redo to P2 and L1 then after switchover from P1 to P2, P2 will ship redo to P1 and L1 if you configure the log_archive_dest_n parameters correctly. (LAD_n means log_archive_dest_n)

P1
LAD_1 Local archiving
LAD_2 Service=P2
LAD_3 Service=L1
FAL_SERVER=P2
FAL_CLIENT=P1

P2
LAD_1 Local Archiving
LAD_2 Service=P1 (deferred until switchover or failover)
LAD_3 Service=L1 (deferred until switchover or failover)
FAL_SERVER=P1
FAL_CLIENT=P2

L1
LAD_1 Local archiving
STANDBY_ARCHIVE_DEST Incoming Redo local archiving.
No fal server mechanism.
</quote>


In 10g, this literally takes two mouse clicks (i saw it) in Enterprise Manager to perform if you like. In 9i, you do it more or less manually (but it does work)


see
http://otn.oracle.com/deploy/availability/pdf/40056_Ray_ppt.pdf

by Ashish Ray, in particular slides 26-32.
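As init parameters, the quoted P1 configuration might be sketched like this (P2 and L1 are assumed to be tnsnames aliases; the archive location and LGWR transport are illustrative choices, not from the quote):

```sql
-- primary P1, per the quote above (illustrative values)
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/arch/p1';
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=P2 LGWR';
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=L1 LGWR';
ALTER SYSTEM SET fal_server = 'P2';
ALTER SYSTEM SET fal_client = 'P1';
```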

Physical/Logical combo - switchovers and failovers

A reader, June 27, 2004 - 1:58 pm UTC

Tom,
I don't mean to be a pest. But I'm just wondering if my question somehow got missed from your daily sequential list? Or was it not really a 'quick question' and requires an involved effort (or a test case) to address it?

Let me take this opportunity to add one more variable to the initial question. What happens when you introduce a LAG on L1 into the mix? Say L1 was well behind P1. Does that make a difference now -- about L1 having to be rebuilt during a failover/switchover?

Thanks for your time again.

Tom Kyte
June 28, 2004 - 7:39 am UTC

Technically -- the only place to ask a question would be the ask question page. this place is for following up on the original question... but.

I've book marked this, I'm waiting to hear back from someone. I want to get confirmation that my answer would be accurate.

Thank you - look forward to your response

A reader, June 28, 2004 - 1:33 pm UTC

Thank you. I look forward to hearing from you about the effects of physical standby switchover/failover on the logical standby. Sorry if I didn't post on an appropriate thread. I thought it was related to Logical standby's and therefore took the liberty.

I'm guessing the best way to avoid a possible mess is by cascading logs. By using a P1->P2->L1 architecture, rather than P1->P2; P1->L1. Note : P1, P2 and L1 are all on the LAN. A cascade architecture will reduce the 'load' on P1 slightly, and minimize (or eliminate altogether??) effects on L1 during a physical standby switchover/failover.

The question stems from a real life business case. I need to create a reporting environment (with minimal predefined lag, to no data loss) and maintain a physical standby on my production environment.

Thanks again.

What about failover?

A reader, June 28, 2004 - 6:14 pm UTC

Tom -
Thank you for checking on this.

A few followup questions :
1) Does any of this change in case of a failover? I would guess one MIGHT have to rebuild L1 in case of a failover from P1 to P2. Say standby redologs weren't used and failover to P2 caused online redolog data loss. These transactions have already been propagated to L1 (prior to failover). You are exposing yourself to potential data integrity issues if you don't rebuild L1 in such a scenario. Correct?

2) The way I interpret your answer - new primary controlfiles and DBIDs after the physical standby switchover/failover have no effect on the logical standby. Everything will function normally. Even if there are data integrity issues on L1, simply issue a "ALTER DATABASE STOP LOGICAL STANDBY APPLY;". Set the DG bypass to on. Fix the integrity issues manually if possible. Set bypass to off and continue applying logs. Correct?

3) In my case since everything is on the LAN, I'm not even going to bother cascading. I'm just gonna go with P1->P2, P1->L1. Do you agree/disagree or "it depends"?

You said
>> I had guessed wrong too, good thing I asked first :)

Curious - but why did you 'guess wrong'? Because of new controlfiles and DBIDs? Obviously, you weren't simply thinking "no data loss" during P1->P2 switchover - therefore, no need to rebuild L1. Just trying to understand your rationale behind the "wrong guess".

>> In 10g, this literally takes two mouse clicks (i saw it) in enterprise manager do perform if you like. In 9i, you do it more or less manually (but it does it)


As for 10g mouse clicks, 9i EM if I recall doesn't take all that many either. May be 4 or 5 :-). Most of them are simply confirmation clicks.

9i EM switchover - manually?! Not really. EM works quite well for switchovers. I didn't *have* to do it manually at all.

Tom Kyte
June 28, 2004 - 7:24 pm UTC

1) in failover, unless you were in zero data loss mode, you lost data and the two systems are out of sync. I would envision a rebuild of L1 in that case.

2) there would be no data integrity issues in a switchover, there would/could be with a failover as noted in 1

3) it would be less work for p1 if you went p1->p2->l1


I guessed that P2 would feed P1 and P1 would feed L1.
I guessed wrong and wasn't willing to put my guess out there for public consumption :)


I've not used EM in the past ;) can it handle the triple site thingy that 10g can?

Thanks

A reader, June 28, 2004 - 7:57 pm UTC

Many thanks for addressing this again.

> 3) it would be less work for p1 if you went p1->p2->l1

On second thoughts, I think I'll go the cascaded route.

> I guessed that P2 would feed P1 and P1 would feed L1. I guessed wrong and wasn't willing to put my guess out there for public consumption :)

Aah. I see. Yes, public wouldn't have consumed it ;)).

> can it handle the triple site thingy that 10g can?

I'm not sure. I haven't configured it yet. But yes, it allows you to add additional sites, so I would think it'll handle it.

InterMedia Text and Logical Standby

Anton, June 28, 2004 - 9:11 pm UTC

Hi Tom.

We are looking at using Oracle 9i logical standby to provide a DR and reporting solution for our current production database. The production database uses interMedia Text and has CLOBs containing XMLType data, so it is unsuitable for logical standby. Physical standby does not allow concurrent read-only access and log apply, so this is not suitable either, as the standby database must be up to date AND available for queries.

Apart from looking at replication (which can be more involved than setting up a standby database), are there any other options? Does Oracle 10g logical standby support those datatypes that are not supported in 9i?

Thanks Tom

Tom Kyte
June 29, 2004 - 7:33 am UTC

XMLType is not supported for logical standby in 10g.

You could perhaps use a logical standby for 99% of the objects -- using materialized views on prebuilt tables for the unsupported objects. In that fashion, if you ever fail over -- you can drop the MV, but the table will stay behind "as a table", in a read/write fashion.
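A sketch of that MV-on-prebuilt-table idea (all names here -- xml_docs, prod_db -- are invented for illustration; you would skip the table in SQL apply and refresh it over a database link instead):

```sql
-- on the standby: a table with the same shape as the primary's
CREATE TABLE xml_docs AS
  SELECT * FROM xml_docs@prod_db WHERE 1 = 0;

-- layer an MV over it; ON PREBUILT TABLE reuses the existing segment
CREATE MATERIALIZED VIEW xml_docs
  ON PREBUILT TABLE
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM xml_docs@prod_db;

-- after failover: drop only the MV metadata -- the underlying
-- prebuilt table (and its data) stays behind, read/write
DROP MATERIALIZED VIEW xml_docs;
```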



XMLTypes and logical standby

Anton, June 29, 2004 - 10:57 pm UTC

Hi Tom.

Thanks for the reply on logical standby database and unsupported XMLTypes.

Can you explain more about your solution of using MVs? Do you mean using replication (I found out Oracle Streams doesn't support XMLTypes either) to replicate the XMLType tables (which are the main tables of this database) to a logical standby database?

PS - what is the reason behind logical standby and Oracle streams not supporting XMLType ?

Thanks

Tom Kyte
June 29, 2004 - 11:51 pm UTC

yes, replication....

i can only hypothesize as to the reason...

XMLType Replication

Anton, June 30, 2004 - 12:36 am UTC

Hi Tom.

Sorry to beat this one to death.

So the replicated tables would exist in a separate schema in the logical standby (i.e. in a schema that is not part of the logical standby maintained schemas)?

Thanks

Tom Kyte
June 30, 2004 - 10:02 am UTC

i don't think that is a requirement, no. the schema can be exactly the same. in fact for DR, you would want it to be the same.

in a logical standby you are allowed to add tables/indexes whatever.

Logical Standby Config

Praful Dhonge, June 30, 2004 - 2:42 am UTC

Hi Tom ,
I read your answers, but can you tell me the steps through which I can configure a Logical Standby Database on another machine in Oracle 9.2.0.1.0, which I will use as my Reports Server


Tom Kyte
June 30, 2004 - 10:05 am UTC

read the docs, we document stuff like that.


https://docs.oracle.com#index-DAT

XMLTypes

Anton, June 30, 2004 - 5:46 pm UTC

Hi Tom.

Thanks again.

So, just to clarify, I would use the DBMS_LOGSTDBY.SKIP API to skip all the tables that hold XMLType (so the logical standby does not replicate them) and replicate them using MVs ?

Also, when I run the query (found on Metalink) that reports all the unsupported tables for logical standby, tables belonging to CTXSYS, MDSYS and WKSYS are reported. Do I need to replicate these tables or will logical standby do it ?

Thanks Tom

PS - when are you coming to New Zealand to do a tour ?

Tom Kyte
June 30, 2004 - 8:13 pm UTC

yes.... that was the idea, skip it but use an MV (on prebuilt table)

they need not be logical "standby'ed"

ctxsys for example -- any "create index .... indextype is ctxsys.context" will have that ddl replayed at the remote site -- causing the ctxsys tables to be populated.
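Putting the skip idea into SQL, a sketch (APP.XML_DOCS is a hypothetical table; apply should be stopped while you change skip rules):

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

BEGIN
  -- tell SQL apply to ignore DML against this one table;
  -- it will be maintained by the MV/replication instead
  DBMS_LOGSTDBY.SKIP(stmt        => 'DML',
                     schema_name => 'APP',
                     object_name => 'XML_DOCS');
END;
/

ALTER DATABASE START LOGICAL STANDBY APPLY;
```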

SQL query performance

Praful, July 01, 2004 - 3:23 am UTC

Hi Tom,

Currently I am facing a performance problem in one of my queries. When I join the ADDRESS table in this query, performance goes down drastically; after that the query takes more than 40 minutes. The ADDRESS table has 40k rows.

select
NVL(L.LCO_ZONE,0) ZONE_NO,
NVL(L.LCO_STATE,0) STATE_NO,
NVL(L.LCO_CITY,0) CITY_NO,
NVL(L.LCO_AREA_JV,0) AREA_NO,
NVL(L.LCO_DISTRIBUTOR,0) DIS_NO,
NVL(L.LCO_SUBDISTRIBUTOR,0) SUBD_NO,
NVL(L.LCO_LCO,0) LCO_NO,
ZONENAME, STATENAME, CITYNAME, AREANAME, DISNAME, SUBDNAME, MSEGNAME,
M.MSUSERKEY ,
NVL( SUBSTR(MT.MSTUSERKEY,3,1),' ') LCOTYPE,
CH_LCONO ,
CH_MONFROMDATE ,
CH_MONTODATE ,
CH_CUSTNR ,
CST.STATUSDESCRIPTION CUSTSTATUS,
AST.DESCRIPTION ACCSTATUS,
SA.ACCACCOUNTTYPE,
CH_ACCNR ,
CH_PRODUCTNR ,
CH_PRODSORTKEY ,
PR.DESCRIPTION PRODNAME,
CH_SNO ,
ch.COMM_BALBASE CHARGESBASEAMT ,
COMM_BALTAX CHARGETAXAMT ,
pay.COMM_CHSNO ,
pay.COMM_PAYALLOBASE ,
pay.COMM_PAYALLOTAX ,
pay.COMM_BALBASE ,
pay.COMM_PAYDATE ,
pay.COMM_PAYFTNR
from (
(SELECT ch_lcono,cH_monfromdate, cH_montodate, cH_custnr,
cH_accnr, cH_productnr,ch_prodsortkey,CH_SNO, a.COMM_balbase,a.COMM_baltax
FROM h_comm_allocation a, h_comm_charges ch
WHERE a.comm_sno IN (SELECT MAX(aa.comm_sno)
FROM h_comm_allocation aa
WHERE aa.comm_paydate < 731550
AND aa.comm_accnr = a.comm_accnr
)
AND a.comm_balbase > 0
AND a.comm_chsno = ch.ch_sno
and CH_ACCNR=15769
AND ch.ch_monfromdate < 731550)
UNION ALL
(SELECT ch_lcono, cH_monfromdate, cH_montodate, cH_custnr,
cH_accnr, cH_productnr,ch_prodsortkey, CH_SNO,ch.CH_CHARGESBASEAMT,ch.CH_CHAREGTAXAMT
FROM h_comm_charges ch
WHERE CH_ACCNR=15769 AND
CH_MONFROMDATE < 731580 and
ch.ch_sno not IN
(SELECT nvl(comm_chsno,0)
FROM h_comm_allocation c
WHERE comm_paydate < 731550
AND c.comm_accnr = ch.ch_accnr)
)
) ch ,LTRAIN.SUINVOIC SA ,LTRAIN.ACCSTAT AST,LTRAIN.SUSTATUS CST,LTRAIN.SUBSCRIB CS,
LTRAIN.PRODUCTS PR
,
(select comm_chsno, COMM_PAYALLOBASE,COMM_PAYALLOtax,a.COMM_BALBASE , comm_paydate, comm_payftnr from h_comm_allocation a where
a.comm_accnr=15769 and
((a.comm_monfromdate between 731550 and 731580 and a.COMM_PAYDATE <731550)
or(a.comm_monfromdate < 731580 and a.COMM_PAYDATE between 731550 and 731580)))
pay, H_LCOHIERARCHY L ,LTRAIN.MSEGMENT M,LTRAIN.MSTYPES MT, ltrain.address ad
where pay.comm_chsno (+) = ch.ch_sno
AND CH.CH_LCONO = L.LCO_LCO (+)
AND CH.CH_LCONO= M.MSMSEGNRCUSTNR (+) AND
M.MSMARKETSEGTYPE = MT.MSTMSTYPENR (+) AND
CH_ACCNR= SA.ACCOUNTNR AND
SA.ACCACCOUNTSTATUS = AST.ACCOUNTSTATUS AND
CH.CH_CUSTNR= CS.CUCUSTNR AND
CS.CUSTATUS = CST.STATUS AND
CH.CH_PRODUCTNR= PR.PRODUCTNR
and ch.ch_custnr= ad.addrcustnr
and ad.addreventnr=100


Tom Kyte
July 01, 2004 - 10:33 am UTC

nothing to do with a logical standby....

not that anyone on earth could really tune a "single query" like that without understanding what the question was and what the data is. Are those outer joins really needed? What was the original question? (heck, the query might not even answer the original question -- I find that often)

I do see lots of "where in" with MAXes -- I generally use analytics for that

see
https://asktom.oracle.com/Misc/oramag/on-html-db-bulking-up-and-speeding.html
playing with aggregation for pointers.
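The analytic rewrite Tom alludes to, sketched against one of the subqueries in the query above (untested against this schema; the idea is to compute the MAX per account once, in a single pass, instead of re-running a correlated subquery):

```sql
-- instead of:
--   WHERE a.comm_sno IN (SELECT MAX(aa.comm_sno)
--                          FROM h_comm_allocation aa
--                         WHERE aa.comm_accnr = a.comm_accnr)
SELECT *
  FROM (SELECT a.*,
               MAX(comm_sno) OVER (PARTITION BY comm_accnr) AS max_sno
          FROM h_comm_allocation a) x
 WHERE x.comm_sno = x.max_sno;
```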

DBA_LOGSTDBY_UNSUPPORTED

Sami, July 12, 2004 - 2:43 pm UTC

Dear Tom,

Why is the HR.COUNTRIES table listed in the DBA_LOGSTDBY_UNSUPPORTED view even though it has 3 columns and all of them are supported datatypes?


SELECT table_name,column_name,data_type FROM DBA_LOGSTDBY_UNSUPPORTED WHERE owner='HR';
TABLE_NAME COLUMN_NAME DATA_TYPE
------------------------------ ------------------------------ ------------------
COUNTRIES COUNTRY_ID CHAR
COUNTRIES COUNTRY_NAME VARCHAR2
COUNTRIES REGION_ID NUMBER
3 rows selected

Thanks
Sami

Tom Kyte
July 12, 2004 - 11:23 pm UTC

well, unsupported datatypes is just one reason:

Unsupported Tables and Sequences

Tables and sequences in the SYS schema
Tables with unsupported datatypes
Tables used to support functional indexes
Tables used to support materialized views
Global temporary tables

perhaps it is one of those..
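One way to narrow it down is to look at the table's own attributes. For what it's worth, HR.COUNTRIES in the sample schema is created as an index-organized table (another category 9.2 SQL apply does not handle), which a query like this would reveal:

```sql
-- a non-null IOT_TYPE or TEMPORARY = 'Y' points at the category
SELECT table_name, iot_type, temporary
  FROM dba_tables
 WHERE owner = 'HR'
   AND table_name = 'COUNTRIES';
```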

"TNS:lost contact" error while configuring logical standby DB

A reader, July 13, 2004 - 12:19 pm UTC

Dear Tom,

Thanks for your response.

I am testing logical standby db on the same host.
Version 9.2.0.4
OS: Windows NT


I couldn't figure out the reason for the following error in the primary database alert log file.

I have tested BOTH primary and secondary database tnsnames entries and they work fine.

-----------------------------------
Tue Jul 13 11:58:28 2004
ARC0: Begin FAL archive (thread 1 sequence 11 destination LGSTBY)
Creating archive destination LOG_ARCHIVE_DEST_2: 'LGSTBY'
ARC0: Complete FAL archive (thread 1 sequence 11 destination LGSTBY)
Tue Jul 13 12:00:01 2004
Errors in file c:\oracle\admin\prmy\bdump\prmy_reco_401.trc:
ORA-12547: TNS:lost contact
ORA-12547: TNS:lost contact
ORA-12547: TNS:lost contact

-----------------------------------

REM connecting to primary database

SQL> connect /@prmy as sysdba
Connected.
SQL> select name from v$database;

NAME
---------
prmy

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
prmy

SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------
log_archive_dest_1                   string      LOCATION=C:\oracle\admin\prmy\arch

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------
log_archive_dest_2                   string      SERVICE=LGSTBY lgwr


********************************************************
connecting to logical standby db using tns entry lgstby
********************************************************


SQL> connect /@lgstby as sysdba
Connected.
SQL>

SQL> select name from v$database;

NAME
---------
LGSTBY

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
lgstby

SQL>

Could you please shed some light on this? I looked at Metalink and other resources but nothing helped.

Tom Kyte
July 13, 2004 - 12:33 pm UTC

please file an iTAR. (if reading the head of the trace file doesn't lead anywhere useful, of course)

Logical Standby Database as a Reporting Database - Function-based Index Workaround?

Lisa, January 05, 2005 - 2:11 pm UTC

Tom,

We need to VERY QUICKLY move our reports off of the primary OLTP database to a secondary server. A logical standby database looks like a perfect, quick solution that will avoid time-consuming application-driven approaches. Unfortunately, we make use of function-based indexes against heavily-reported tables in our OLTP system, and we want to continue using function-based indexes in our OLTP system in the future.

Can you please elaborate on a workaround that will allow us to use a logical standby database and keep our function-based indexes in the primary production database? Is it possible to build a standby database by selecting only supported tables and use other means (such as Streams) for the tables that have function-based indexes? Does Data Guard support such a configuration?

Also, the 9.2 Data Guard Concepts Manual speaks of ALTER DATABASE { STOP | ABORT } LOGICAL STANDBY APPLY;

The description for STOP is:

"Stops log apply services in an orderly fashion so that you can make changes to logical standby settings or perform planned maintenance. This clause is useful for refreshing materialized views or function-based indexes. Log transport services will continue to send archived redo logs to the logical standby database."

Does this hint to a workaround, or just indicate that while the source database CANNOT have function-based indexes the standby database can?

Metalink Bug 2717383 "DOES LOGICAL STANDBY SUPPORT FUNCTION-BASED INDEX? " is fixed in 10g, but this bug complains about both the lack of functionality and the lack of documentation regarding this issue. Does this mean that 10g supports function-based indexes with a logical standby database?

Tom Kyte
January 05, 2005 - 7:38 pm UTC

it indicates the standby could have them, but the OLTP would not -- not on tables that would be propagated to the standby, in any case. It is that the capture process in 9.2 cannot deal with the virtual columns FBIs create; that redo is not processed by it.


but if you were going to use streams for some of the objects, you could use them for all -- did you need the other features of standby, or did you just want another instance?


and can you say "why" you need to move the reports off? perhaps there are other feasible ways to address this issue.

very useful information

A reader, January 05, 2005 - 4:29 pm UTC

Tom, Very interesting thread. I have played with physical and logical standby database. This is the situation though..

If I build physical first and then create a logical standby, upon the switchover (to physical), logical doesn't know where the primary is now. But if you build logical first and then physical, you can keep switching back and forth without any interruption. I am still testing these scenarios.

Thanks

logical stdby

Nishant Choudhary, February 19, 2005 - 6:25 am UTC

sir,
i have configured a logical standby on the test server following the steps mentioned in the 9i doc. everything went fine except the error below. am i missing some steps, or ...?


SQL> alter database register logical logfile 'D:\oracle\stdby\archive\ARC00007.001';

Database altered.

SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';

no rows selected

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY
*
ERROR at line 1:
ORA-16100: not a valid Logical Standby database
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 95
ORA-06512: at line 1


SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Database altered.

SQL>  ALTER DATABASE START LOGICAL STANDBY APPLY;
 ALTER DATABASE START LOGICAL STANDBY APPLY
*
ERROR at line 1:
ORA-16100: not a valid Logical Standby database
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 95
ORA-06512: at line 1

please help

thanx in advance
 

Tom Kyte
February 19, 2005 - 8:21 am UTC

sorry, but the mistake made would have been much further up in your process.

What should be my approach?

Parag Jayant Patankar, February 21, 2005 - 11:06 am UTC

Hi Tom,

I am working for an MNC bank, which has 100 gig of data ( Oracle database in archive mode [ around 80 gig ] + COBOL flat / sequential files [ around 20 gig ] ). The flat/sequential files and the Oracle database together form my database consistency requirement ( meaning if there is a serious problem in my end-of-day processing then I have to restore the Oracle database + flat/sequential files; I cannot restore a single component ). Every day we have 5 hours of end-of-day processing for various important reasons. During this time the database is completely restarted twice and is not available to users for any operation. ( I am not a deciding factor for the current database structure and end-of-day processing ).

Now I want some kind of process to be build such that before EOD processing I will make this database available to users for accepting a request ( for e.g. credit card or debit card transactions ) and apply same changes after end of day processing is over. I have a mechanism already in a place, from where I can re-apply changes to a database within few minutes.

My questions regarding this

1/ What are the various options available to me in Oracle for making another Oracle database available in the fastest and cheapest way? ( of course, other than duplicating this database every time or exporting and importing )

2/ Would using a standby database be a good option for this scenario? If yes, then which one -- physical/logical, etc.? ( I do not have special datatypes in the Oracle database )

regards & thanks
pjp

Tom Kyte
February 21, 2005 - 12:50 pm UTC

seems the COBOL flat sequential files could be "tables", removing that bit of fragility, doesn't it?


if you already have such a mechanism in place? aren't you done?

Normally we would call this "replication". Why don't you reverse the roles here -- do your EOD processing against a replica of the real system; during your "5 hours of down time", don't replicate

or have this EOD system be a physical standby that you open read only for 5 hours of read only processing. Then put it back into managed recovery mode (no code for you to write, we catch it back up for you)


Or, alter your session to be read only in the production system and basically "freeze" your view of the database (ensure that you have sufficient UNDO configured)
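
The physical-standby option amounts to only a few commands on the standby site. A rough 9i-era sketch (exact syntax may differ slightly by release):

```sql
-- On the standby: stop managed recovery and open read only
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- ... the 5 hours of read-only EOD processing run here ...

-- Put the standby back into managed recovery; it catches itself up
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```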

Logical stdby

nishant Choudhary, March 12, 2005 - 5:19 pm UTC

Sir,

Problem 1
---------
Test environment.
On the primary database I created a table with NOLOGGING even though I have force_logging set.

1. How can it allow me to create a table with NOLOGGING?
2. At the logical standby site I got the table, but when I inserted records and switched the log file it showed me nothing; instead, my logical apply service stopped automatically.

Please have a look and guide me.

============================================================
primary db

SQL> create table scott.nolog nologging as select * from scott.emp;

Table created.

SQL> insert into scott.nolog select * from scott.nolog;

14 rows created.

SQL> /

28 rows created.

SQL> /

56 rows created.

SQL> /

112 rows created.

SQL> /

224 rows created.

SQL> /

448 rows created.

SQL> /

896 rows created.

SQL> /

1792 rows created.

SQL> /

3584 rows created.

SQL> /

7168 rows created.

SQL> /

14336 rows created.

SQL> /

28672 rows created.

SQL> /

57344 rows created.

SQL> /

114688 rows created.

SQL> /

229376 rows created.

SQL> commit;

Commit complete.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            d:\oracle\oradata\orcl\archive
Oldest online log sequence     9
Next log sequence to archive   11
Current log sequence           11
SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

=======================================================================

logical db

SQL> select sequence#,first_time,next_time,dict_begin,dict_end,first_change# from dba_logstdby_log order by sequence#;

 SEQUENCE# FIRST_TIME        NEXT_TIME         DIC DIC FIRST_CHANGE#
---------- ----------------- ----------------- --- --- -------------
         3 02-03-05 18:11:25 02-03-05 18:11:34 YES YES        199534
         4 02-03-05 18:11:34 02-03-05 18:11:39 NO  NO         199841
         5 02-03-05 18:11:39 02-03-05 20:08:55 NO  NO         199954
         6 02-03-05 20:08:55 02-03-05 20:11:02 NO  NO         226871
         7 02-03-05 20:11:02 02-03-05 20:11:04 NO  NO         227361
         8 02-03-05 20:11:04 02-03-05 20:11:10 NO  NO         227365
         9 02-03-05 20:11:10 02-03-05 20:11:11 NO  NO         227370
        10 02-03-05 20:11:11 12-03-05 23:21:29 NO  NO         227373
        11 12-03-05 23:21:29 12-03-05 23:35:51 NO  NO         260133
        12 12-03-05 23:35:51 12-03-05 23:39:24 NO  NO         263661
        13 12-03-05 23:39:24 12-03-05 23:45:42 NO  NO         264518

11 rows selected.

SQL> select applied_scn,newest_scn from dba_logstdby_progress;

APPLIED_SCN NEWEST_SCN
----------- ----------
     263388     266030

SQL> select type,high_scn,status from V$logstdby;

no rows selected

SQL> select type,high_scn,status from V$logstdby;

no rows selected

SQL> alter database start logical standby apply;

Database altered.

SQL> select name,value from v$logstdby_stats;

NAME                                                             VALUE
---------------------------------------------------------------- ----------------------------------------
number of preparers                                              1
number of appliers                                               1
maximum SGA for LCR cache                                        7
parallel servers in use                                          5
maximum events recorded                                          100
transaction consistency                                          FULL
record skip errors                                               Y
record skip DDL                                                  Y
record applied DDL                                               Y
coordinator state                                                APPLYING
transactions ready                                               1
transactions applied                                             0
coordinator uptime                                               0
preparer memory alloc waits                                      3
builder memory alloc waits                                       13
attempts to handle low memory                                    12
successful low memory recovery                                   2
pageout avoided                                                  0
pageout avoided by rollback                                      0
pageouts                                                         1
memory low watermark reached                                     0
recovery checkpoints not taken                                   0
recovery checkpoints taken                                       0
available work units                                             72
prepared work units                                              0
committed txns ready                                             0
un-committed txns ready                                          0
committed txns being applied                                     0
un-committed txns being applied                                  1

29 rows selected.

SQL> select type,high_scn,status from V$logstdby;

TYPE                             HIGH_SCN STATUS
------------------------------ ---------- -----------------------------------------------------------------------------------
COORDINATOR                               ORA-16117: processing
READER                                    ORA-16127: stalled waiting for additional transactions to be applied
BUILDER                            263324 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER                           263324 ORA-16116: no work available
ANALYZER                                  ORA-16116: no work available
APPLIER                                   ORA-16116: no work available

6 rows selected.

SQL> select applied_scn,newest_scn from dba_logstdby_progress;

APPLIED_SCN NEWEST_SCN
----------- ----------
     263388     266030

SQL> select name,value from v$logstdby_stats;

NAME                                                             VALUE
---------------------------------------------------------------- -------------------------------
number of preparers                                              1
number of appliers                                               1
maximum SGA for LCR cache                                        7
parallel servers in use                                          5
maximum events recorded                                          100
transaction consistency                                          FULL
record skip errors                                               Y
record skip DDL                                                  Y
record applied DDL                                               Y
coordinator state
transactions ready
transactions applied
coordinator uptime

13 rows selected.

SQL> select type,high_scn,status from V$logstdby;

no rows selected

SQL> select * from dba_logstdby_skip;

no rows selected
===========================================================

Problem 2
----------

switchover operation --doc followed

http://www.lc.leidenuniv.nl/awcourse/oracle/server.920/a96653/standbyscen.htm#1008843


SQL> select name,value from v$logstdby_stats where name='coordinator state';

NAME                                                             VALUE
---------------------------------------------------------------- -------------------------------------------------
coordinator state                                                INITIALIZING

SQL> select * from v$logstdby;

   SERIAL# LOGSTDBY_ID PID          TYPE                           STATUS_CODE STATUS
---------- ----------- ------------ ------------------------------ ----------- -----------------------------------
         1          -1 1272         COORDINATOR                          16111 ORA-16111: log mining and apply set

SQL> select * from dba_logstdby_progress;

APPLIED_SCN APPLIED_T   READ_SCN READ_TIME NEWEST_SCN NEWEST_TI
----------- --------- ---------- --------- ---------- ---------
          0                    0

SQL> select * from dba_logstdby_log;

no rows selected

SQL> select name,value from v$logstdby_stats;

NAME                                                             VALUE
---------------------------------------------------------------- -------------------------------------------------
number of preparers                                              1
number of appliers                                               1
maximum SGA for LCR cache                                        7
parallel servers in use                                          5
transaction consistency                                          FULL
coordinator state                                                INITIALIZING
transactions ready                                               0
transactions applied                                             0
coordinator uptime                                               44
preparer memory alloc waits                                      0
builder memory alloc waits                                       0

NAME                                                             VALUE
---------------------------------------------------------------- -------------------------------------------------
attempts to handle low memory                                    0
successful low memory recovery                                   0
pageout avoided                                                  0
pageout avoided by rollback                                      0
pageouts                                                         0
memory low watermark reached                                     0
recovery checkpoints not taken                                   0
recovery checkpoints taken                                       0
available work units                                             0
prepared work units                                              0
committed txns ready                                             0

NAME                                                             VALUE
---------------------------------------------------------------- -------------------------------------------------
un-committed txns ready                                          0
committed txns being applied                                     0
un-committed txns being applied                                  0

25 rows selected.

--------------------------- Am I missing some steps? It has been at the INITIALIZING stage for the last 2 hours, even though I have a very small test environment.


thanks in advance
 

Tom Kyte
March 13, 2005 - 9:35 am UTC

nologging is a "request", a request that need not be satisfied for the command to succeed. Just like /*+ append */ in an insert is a request, not a directive.

I see nothing wrong here really -- what do you think is wrong?


(and please, this is not support, if you are having installation and configuration issues.......)
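
A quick way to see the "request, not directive" behavior on the primary (a sketch, assuming 9.2, where V$DATABASE has a FORCE_LOGGING column):

```sql
-- Is the database overriding NOLOGGING requests?
SELECT force_logging FROM v$database;

-- With FORCE LOGGING enabled this still succeeds: the NOLOGGING
-- attribute is recorded on the table, but redo is generated anyway
CREATE TABLE scott.nolog NOLOGGING AS SELECT * FROM scott.emp;

-- the attribute is recorded even though redo was forced
SELECT logging FROM dba_tables
 WHERE owner = 'SCOTT' AND table_name = 'NOLOG';
```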

logical

Nishant, March 14, 2005 - 5:33 am UTC

Sir,

case 1:
primary db --
1. Table created with NOLOGGING while the db is in force logging mode, with 14 rows (e.g. create table scott.a nologging as select * from scott.emp;)

2. Rows inserted.

Logical db --
alter database start logical standby apply;
The table exists with 14 rows,

but the rows for that particular table are not being propagated from primary to logical.

The log apply services automatically get stopped (i.e. no coordinator: neither initializing nor applying; no rows selected from v$logstdby).

Why does the logical standby automatically drop out of apply mode without even raising any error?
..... is it due to that table?

please suggest

Case 2
Not a problem .... thanks

Tom Kyte
March 14, 2005 - 8:07 am UTC

please contact support for installation/configuration assistance.

Logical Standby Database - Bulk Operations

Lisa, April 27, 2005 - 9:50 am UTC

Tom,

We have recently set up a logical standby database to support reports against the OLTP system, and have noticed performance problems with SQL Apply during bulk operations (the most recent involved archiving DBA_AUDIT_TRAIL). Such incidents have caused us to lag by as much as 19 hours. Our DBAs are looking into tuning the apply process as a separate issue (any suggestions are welcome).

Occasionally (well, more frequently as of late following a large data migration and the fallout from that), we have to perform large DML operations as part of a code drop. For example, imagine we have to add a new NOT NULL column to a 25 million row table.

What do you suggest we do to work around large operations against replicated tables? For example, is it a good idea to temporarily skip the table, apply the DML to primary and standby manually, and unskip the table (assuming downtime)?
I would like to figure out a best practice for large, bulk transactions and build it into our build process (and scripts if possible).

Regards,

Lisa

Tom Kyte
April 27, 2005 - 9:54 am UTC

need more info -- are you using the standby "as a standby" or just as a reporting instance?

are you doing the entire database or just part of it?

More Information

Lisa, April 27, 2005 - 10:06 am UTC

Tom,

We are not replicating the entire database, but the vast majority of tables within a single primary application schema. Other schemas that hold data such as our application audit trail are skipped. Within the primary application schema we skip just a handful of unsupported tables (IOTs, tables with FBIs, etc.) and tables used for temporary purposes (data loads, etc.).

The logical standby is only devoted to supporting reports. We have a second data center synchronized at the hardware level for disaster recovery purposes.

Regards,

Lisa

Tom Kyte
April 27, 2005 - 10:19 am UTC

you might consider using streams directly (this is 100% concept here, I didn't set up a test for this)....

streams with default apply...
get to the point where you have that big batch update...
- sync the systems (have the last apply record get out of the production system and quiesce that table)
- stop queueing messages for that table (or tables)
- stop apply on the copy
- do the big operation on production
- start queueing messages again (prod is back for modifications)
- go to the copy and do the big operation in bulk
- start apply again

similar to procedural replication (replicating a process, not the changes made by a process) from advanced replication.
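
In Streams terms, the stop/start steps above map onto a couple of supplied packages. A conceptual sketch only -- the capture and apply names 'CAPTURE_PROD' and 'APPLY_COPY' are hypothetical:

```sql
BEGIN
  -- stop queueing changes on the production side
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'CAPTURE_PROD');
END;
/
BEGIN
  -- stop apply on the copy (run at the copy site)
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'APPLY_COPY');
END;
/

-- ... do the big operation on production, then repeat it in bulk on the copy ...

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'CAPTURE_PROD');
END;
/
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_COPY');
END;
/
```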

Approach with Logical Standby

Lisa, April 27, 2005 - 10:28 am UTC

Tom,

Assuming that I'm stuck with the logical standby for the short term at least (another bulk data fix is in the development queue as I type), how do you suggest we handle it? Is there any option other than grin-and-bear-it?

Regards,

Lisa

Tom Kyte
April 27, 2005 - 10:34 am UTC

short of rebuilding the standby after something really big -- I don't have any other ideas. the standby is "automated" in this fashion: it takes the redo and redoes it. the standby is built on the streams technology, but is an application on top of it, so you cannot really do what I described without confusing the heck out of it.

Logical Standby & RMAN

Rich, May 12, 2005 - 11:37 am UTC


I found the following text on p.417 of the Oracle 9i
RMAN Backup & Recovery book by Robert Freeman.
<quote>
RMAN backups cannot be used to create a logical Standby
database, because the Primary database must be quiesced
prior to taking a backup to be used for a logical standby.
</quote>

Then p.13-2 of the 9i RMAN User's Guide says:
<quote>
You can use either manual methods or the Recovery Manager
DUPLICATE command to create a standby database from
backups of your primary database.
</quote>

The above text does not specifically say whether physical or logical, but it
seems to imply that we can use RMAN for both.

I hope you can provide some clarification to this.

Many thanks again.

Logical standby created

Narayana Pocham, July 05, 2005 - 5:49 pm UTC

Tom, I believe I have created a logical standby, but logs are not being applied. I followed the same procedures as in the document.
* Primary and standby environments are the SAME: two different Unix boxes
* Using the same file locations as the primary
* Same SID as the primary


The error below is from the trace file. Can you please advise?
error 12801 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-12801: error signaled in parallel query server P001
ORA-06550: line 1, column 250:
PLS-00306: wrong number or types of arguments in call to 'LOGMNR_KRVRDA_TEST_APPLY'
ORA-06550: line 1, column 250:
PL/SQL: Statement ignored

Tom Kyte
July 05, 2005 - 6:19 pm UTC

sorry, insufficient data (and this would be a support issue really, configuration and installation).

besides, looks like you've added something, 'test_apply'

Logical standby created

Narayana Pocham, July 05, 2005 - 6:02 pm UTC

Hi Tom, I believe I have created a logical standby, but logs are not being applied. I followed the step-by-step procedures as in the document, except for a few things:

* Primary and standby environments are the SAME: two different Unix boxes
* Using the same file locations as the primary; I didn't rename them
* Same SID as the primary; I didn't change db_name

Can I have exactly the primary's environment on the logical standby (same SID, db_name, file locations, etc.)?

The error below is from the trace file. Can you please suggest what is causing it?

error 12801 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-12801: error signaled in parallel query server P001
ORA-06550: line 1, column 250:
PLS-00306: wrong number or types of arguments in call to 'LOGMNR_KRVRDA_TEST_APPLY'
ORA-06550: line 1, column 250:
PL/SQL: Statement ignored

Below error from Alert Log file:
Errors in file /u01/app/oracle/admin/earth/bdump/earth_lsp0_6663.trc:
ORA-12801: error signaled in parallel query server P001
ORA-06550: line 1, column 250:
PLS-00306: wrong number or types of arguments in call to 'LOGMNR_KRVRDA_TEST_APPLY'
ORA-06550: line 1, column 250:
PL/SQL: Statement ignored


Logical Standby Database

Parag Jayant Patankar, July 06, 2005 - 8:49 am UTC

Hi Tom,

I am preparing a logical standby database in 9iR2. Suppose for this I have defined "Maximum Availability"; afterwards, can I change the data protection mode to "Maximum Availability" or "Maximum Performance"?

regards & thanks
pjp

Tom Kyte
July 06, 2005 - 9:02 am UTC

yes, you can change the primary database's mode.

Same names and locations

A reader, July 06, 2005 - 9:15 am UTC

Hello Tom,
Scenario: two identical Unix boxes, same version/release, Oracle 9.2.0.6.
Can I use the primary database's SID, database name, and data file locations for the logical standby database?
Thanks in advance.

Tom Kyte
July 06, 2005 - 9:28 am UTC

sure, on two different machines.

logical standby database

reader, July 07, 2005 - 5:58 am UTC

Tom,

In a logical standby, since it is open read/write, will it allow a table to be dropped/renamed? If so, this could cause transaction failures in the standby database, defeating the very purpose of having the standby. Isn't it?


Also, how would we avoid a primary key violation in the logical database if the current value and next value of a sequence differ between the primary and the logical database?

Thanks..

Tom Kyte
July 07, 2005 - 9:34 am UTC

you would not want to write to the tables you are copying from production, obviously (you would use grants to limit what can be done).

it is NOT bi-directional replication; you would want to treat those tables as "read only".

You can write to other tables -- you can write, for example, to the AUDIT TRAIL.

You can create new structures (materialized views, for example, and indexes).

But you would not mess with updating the data you are copying; that would defeat the purpose.
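
On the logical standby itself, the database guard can enforce exactly this. A 9i-era sketch:

```sql
-- Run on the logical standby:
ALTER DATABASE GUARD STANDBY;  -- protect only the tables maintained by SQL apply
-- Other settings: GUARD ALL  (no user changes anywhere)
--                 GUARD NONE (no protection at all)
```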

LGWR for Logical Standby DB

parag jayant patankar, August 30, 2005 - 6:34 am UTC

Hi Tom,

I have set up a logical standby database (LSB) on the same server, i.e. AIX 5, on Oracle 9.2. My important parameters regarding the LSB on the primary db are:

  LOG_ARCHIVE_START        = TRUE
  LOG_ARCHIVE_DEST_1       = 'LOCATION=/apps/oradbf/infod/archive MANDATORY'
  LOG_ARCHIVE_DEST_STATE_1 = ENABLE
  LOG_ARCHIVE_DEST_2       = 'SERVICE=standby LGWR SYNC'
  LOG_ARCHIVE_DEST_STATE_2 = ENABLE
  LOG_ARCHIVE_FORMAT       = infod_log%s_%t.arc

15:41:10 SQL> l
  1* select dataguard_broker, protection_mode from v$database
15:41:13 SQL> /

DATAGUAR PROTECTION_MODE
-------- --------------------
DISABLED MAXIMUM AVAILABILITY

The last few lines of the primary db's alert log (which may be useful):
LGWR: Completed archiving  log 2 thread 1 sequence 87
Creating archive destination LOG_ARCHIVE_DEST_2: 'standby'
LGWR: No standby redo logfiles exist to archive thread 1 sequence 88
LGWR: Beginning to archive log 3 thread 1 sequence 88
Thread 1 advanced to log sequence 88
  Current log# 3 seq# 88 mem# 0: /apps/oradbf/infod/log3/log03infod.log
Tue Aug 30 13:23:35 2005
ARC0: Evaluating archive   log 2 thread 1 sequence 87
ARC0: Archive destination LOG_ARCHIVE_DEST_2: Previously completed
ARC0: Beginning to archive log 2 thread 1 sequence 87
Creating archive destination LOG_ARCHIVE_DEST_1: '/apps/oradbf/infod/archive/infod_log87_1.arc'
ARC0: Completed archiving  log 2 thread 1 sequence 87

But it always creates an xxxx_lgwr_xxxx.trc file.

(TESTINFO-oracle)/apps/oracle/admin/infod/bdump > ls -ltr |tail -3
-rw-r-----   1 oracle   dba             676 Aug 30 13:14 infod_arc1_67424.trc
-rw-r-----   1 oracle   dba            3018 Aug 30 13:23 infod_lgwr_76102.trc
-rw-r--r--   1 oracle   dba           60977 Aug 30 13:23 alert_infod.log

cat infod_lgwr_76102.trc

Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
ORACLE_HOME = /apps/oracle/product/920
System name:    AIX
Node name:      mumsa128200
Release:        2
Version:        5
Machine:        00CCFF1E4C00
Instance name: infod
Redo thread mounted by this instance: 1
Oracle process number: 4
Unix process pid: 76102, image: oracle@mumsa128200 (LGWR)

*** SESSION ID:(3.1) 2005-08-30 13:14:04.936
Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR
*** 2005-08-30 13:14:04.943 43184 kcrr.c
Initializing NetServer for dest=standby
Initializing PGA storage for Netserver communication
Allocating a brand new NetServer
Allocated NetServer 0
         Starting NetServer
NetServer 0 has been started.Subscribing to KSR Channel 0
success!
Indicating recv buffer for KSR Channel 0
success
Waiting for Netserver 0 to initialize itself
*** 2005-08-30 13:14:08.000 43464 kcrr.c
    Netserver 0 has been initialized
LGWR performing a channel reset to ignore previous responses
LGWR connecting as publisher to KSR Channel 0
LGWR-NS 0 initialized for destination=standby
*** 2005-08-30 13:14:08.000 43918 kcrr.c
Making upiahm request to NetServer 0
Waiting for NetServer to respond to upiahm
*** 2005-08-30 13:14:08.047 44104 kcrr.c
   upiahm connect done status is 0 
Receiving message from NetServer 0
Receiving message from NetServer 0
*** 2005-08-30 13:14:08.064 45009 kcrr.c
Making upidhs request to NetServer 0 (hst 0xfffffffffffc728)
NetServer pid:60632
*** 2005-08-30 13:14:08.064 45233 kcrr.c
  upidhs done status 0
*** 2005-08-30 13:14:08.071 43184 kcrr.c
Initializing NetServer for dest=standby
Allocating a brand new NetServer
Allocated NetServer 0
         Starting NetServer
Shutting down previously started NetServer 0 pid 60632..
NetServer 0 has been started.Waiting for Netserver 0 to initialize itself
*** 2005-08-30 13:14:11.138 43464 kcrr.c
    Netserver 0 has been initialized
LGWR performing a channel reset to ignore previous responses
LGWR-NS 0 initialized for destination=standby
*** 2005-08-30 13:14:11.138 43918 kcrr.c
Making upiahm request to NetServer 0
Waiting for NetServer to respond to upiahm
*** 2005-08-30 13:14:11.190 44104 kcrr.c
   upiahm connect done status is 0 
Receiving message from NetServer 0
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM AVAILABILITY mode
Receiving message from NetServer 0
*** 2005-08-30 13:14:11.210 45434 kcrr.c
Making upinbls request to NetServer 0
NetServer pid:61436
*** 2005-08-30 13:23:35.047 45264 kcrr.c
Making upinblc request to NetServer 0
NetServer pid:61436
Receiving message from NetServer 0
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM AVAILABILITY mode
Receiving message from NetServer 0
*** 2005-08-30 13:23:35.069 45434 kcrr.c
Making upinbls request to NetServer 0
NetServer pid:61436
(TESTINFO-oracle)/apps/oracle/admin/infod/bdump >

My problem is that although redo is shipped by LGWR in SYNC mode, changes are not getting applied to the standby db immediately.

Only when I do alter system switch logfile; on the primary do the changes get applied to the standby db.

Kindly tell me why LGWR is not able to apply changes on the standby, and how to correct this problem.

regards & thanks
pjp

Note: please do not refer me to Oracle support.

Tom Kyte
August 30, 2005 - 7:22 am UTC

why not? That is what support does. I'll tell you how things can be done, how to do certain things, but I'm not here to configure and set up the system.

Logical Standby Database

parag jayant patankar, August 30, 2005 - 9:19 am UTC

Hi Tom,

No Oracle user, including me, can deny your great contribution to the Oracle family. We are very thankful for the valuable time you give us. With that in mind, I do not want to waste your time on anything I can get answered somewhere else or can answer myself by trying test cases.

I requested you "not to advise me to go to Oracle support" for the logical standby DB question because:

1. I raised the same kind of question with Oracle support (Metalink) two days back and have had no reply yet (I do not know when I will get an answer).

2. Secondly, I agree with you that we should not take your valuable time for configuration and setup, but I do not think my question falls entirely into that category. My problem is that I am able to apply changes on the standby database, but via the ARCH process rather than LGWR. I do not fully agree that this problem comes completely under configuration and setup.

By that standard we could put every question into configuration and setup, and then I would be in big trouble, because I am not sure where else I could get an answer.

Considering all this, I request you to help me here, even though the question edges slightly into configuration and setup.

regards & thanks
pjp


Tom Kyte
August 30, 2005 - 12:14 pm UTC

sure it is - this is a configuration and setup problem. I'm traveling, this is a "new question", it is NOT a followup to the original question. I'm *not support*.

sorry. I just don't have time to diagnose this issue. I can't answer everything.

Not applied?

Albert Nelson A, August 31, 2005 - 6:26 am UTC

Hi,

As I see in the documentation, LGWR does not apply the redo logs to the standby; it only transports them. The LSP process on the standby coordinates applying them.

Redo logs are read and applied to a logical standby database when a log switch occurs, not as they arrive at the standby site.

See: </code> http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96653/log_apply.htm#1015095 <code>

Regards,

Albert Nelson A


Tom Kyte
August 31, 2005 - 1:46 pm UTC

yes, that is correct. Is there a mistake elsewhere on the page regarding this?

Not mistake

Albert Nelson A, September 01, 2005 - 7:10 am UTC

Hi Tom,

As parag jayant patankar from India was asking you

<quote>
My problem is though redo shipping by LGWR in SYNC manner, changes are not
getting applied to standby db immediately.

On Primary when I do alter system switch logfile; then only changes are getting
applied to standby db.

Kindly tell me why LGWR not able to apply changes on standby and how to correct
this problem.
</quote>

and I pointed out the documentation. I didn't see any mistake on the page.

Regards,

Albert Nelson A

Tom Kyte
September 01, 2005 - 9:12 am UTC

ahh, thanks! appreciate the followup.

Hi Albert Nelson A/Tom

Parag Jayant Patankar, September 01, 2005 - 10:38 am UTC

Hi Albert Nelson A / Tom,

Thanks for your precise answer. Now my understanding is very clear.

Sorry, I had read this point before, but had not fully understood it at the time.

regards & thanks
pjp

Real Time Redo apply in 10g

Logan Palanisamy, September 02, 2005 - 8:44 pm UTC

Parag Jayant Patankar,

What you need is "Real-Time Redo Apply" in Oracle 10g, where you don't have to wait for a redo log switch / a new archive log.

For Logical Standby:
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

For Physical Standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

See the words "apply immediate" and "using current logfile". It uses the active redo log file, not the archived log, to apply changes to the standby.

For more info, please read page 13 of the following doc.

</code> http://www.oracle.com/technology/deploy/availability/pdf/40056_Ray_ppt.pdf <code>


Unfortunately you are on 9i.

A reader, September 22, 2005 - 6:28 pm UTC

Before creating the capture process, set the session event 26700.
ALTER SESSION SET EVENT="26700 trace name context forever, level 256";

This will allow hot mining of the redo log stream in 9iR2+

See:
</code> http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=304268.1 <code>

Standby Logical/Physical/MV's

jameel, October 03, 2005 - 11:42 am UTC

Hi Tom,

Thanks for all your contributions. Please suggest the best solution for my scenario.

I have a database D1 with 20 schemas; one schema, GK, has 10,000 tables. The database is 9.2 on Solaris. I want to replicate this schema and its tables to a database D2 in a different location. D2 is 9.2 on Linux. DML activity will be very high on D1. D2 will be read only. There are no schemas in D2 other than those from D1; it is like a "standby" for this schema, or for a few schemas from D1. The number of tables may increase in the future due to business requirements, and those new tables should also be replicated to the D2 database.

Which method given below will best suit me and why

1. Standby Logical

2. Standby Physical

3. Materialized view

4. any other method

Please give me the link that has the steps to configure the method chosen.


Thanks in advance.

Tom Kyte
October 03, 2005 - 8:32 pm UTC

my first question is totally

"why"

10,000 tables
very high transaction rates

"why"?

Performance improvement

Jameel, October 06, 2005 - 3:02 am UTC

Hi Tom,

"Why" because, Many applications rely on this schema. The application users are from US and UK. It is very slow. Hence decided to replicate the schema to US server.

Any other alternative or suggestions please...

Thanks,
Jameel

Tom Kyte
October 06, 2005 - 7:46 am UTC

yeah, fix the single server to have the performance characteristics you need.

replication will add complexity, overhead, maintenance issues, all kinds of bad stuff.

what does LOG_ARCHIVE_DEST_STATE_1 mean

A reader, November 29, 2005 - 10:36 am UTC

Hi

What is the LOG_ARCHIVE_DEST_STATE_n parameter used for? What's its purpose?

function based index workaround

A reader, December 07, 2005 - 11:15 am UTC

Hi

Since FBIs (function-based indexes) are not supported in a logical standby database, can we trick Oracle by dropping the FBI at the standby site?

I have another question. A physical standby database does not generate redo logs, but it seems to me from the docs that a logical standby does generate redo. Is that correct?

Tom Kyte
December 08, 2005 - 1:20 am UTC

it is the original capture process on the source database that has the problem with the FBI, not the target database.


a physical standby sure does generate redo in its own way. It "generates" the same redo stream as production. The logical standby just generates its own unique redo stream.
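As a side note, a minimal way to see up front what SQL Apply cannot maintain is to query DBA_LOGSTDBY_UNSUPPORTED on the primary (this is the documented view for listing unsupported tables/columns; the schema and table names in the second query are placeholders):

```sql
-- On the PRIMARY: tables SQL Apply cannot maintain on a logical standby
SELECT DISTINCT owner, table_name
  FROM dba_logstdby_unsupported
 ORDER BY owner, table_name;

-- Drill into the offending columns for one table
SELECT column_name, data_type
  FROM dba_logstdby_unsupported
 WHERE owner = 'SCOTT'          -- hypothetical schema
   AND table_name = 'T';        -- hypothetical table
```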

logical standby generating redo

A reader, December 08, 2005 - 5:57 pm UTC

Hi

From

http://oraclesvca2.oracle.com/docs/cd/B10501_01/server.920/a96653/log_transport.htm#1037345

5.8.2.4 Logical Standby Database Considerations

It says

"The parameter values shown in Example 5-11 ensure that the redo data stream coming from the primary database will be archived to a different location than where the archive logs generated by the database are located when it is in the logical standby role."

Does this not happen with a physical standby -- it does not generate redo, and therefore no archive logs?

A logical standby seems like a normal database: it generates redo because of the SQL apply, and so generates archive logs :-?



Tom Kyte
December 09, 2005 - 1:30 am UTC

I would say "it is all redo, so what"

Yes, logical and physical do different things (obviously) but both end up with a stream of archive redo logs to be managed by you.

Logical standby database - Option?

Albert Nelson A., December 09, 2005 - 10:28 am UTC

Hi Tom,

Suppose we want to have 3 months of data in the production database and have 12 months of data in the reporting database what Oracle technologies can we consider?

Regards,

Albert Nelson A.

Tom Kyte
December 09, 2005 - 1:01 pm UTC

streams to replicate

partitioning to purge.
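A sketch of the "partitioning to purge" idea on the production side (the table and partition names are made up for illustration): keep the table range-partitioned by month so purging old data is a quick DDL operation rather than a massive DELETE.

```sql
-- Range-partition by month so old data can be purged as DDL
CREATE TABLE orders (
    order_id    NUMBER PRIMARY KEY,
    order_date  DATE NOT NULL,
    amount      NUMBER
)
PARTITION BY RANGE (order_date) (
    PARTITION p_2005_10 VALUES LESS THAN (TO_DATE('2005-11-01','YYYY-MM-DD')),
    PARTITION p_2005_11 VALUES LESS THAN (TO_DATE('2005-12-01','YYYY-MM-DD')),
    PARTITION p_2005_12 VALUES LESS THAN (TO_DATE('2006-01-01','YYYY-MM-DD'))
);

-- Keeping 3 months in production: dropping the oldest month is fast DDL
ALTER TABLE orders DROP PARTITION p_2005_10;
```

The reporting database, fed by streams, simply does not perform the purge and so accumulates the 12 months.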

Thanks

Albert Nelson A, December 10, 2005 - 3:04 am UTC


Workaround for unsupported datatypes?

Albert Nelson A, December 28, 2005 - 12:53 pm UTC

Hi Tom,

You have suggested streams for having 3 months of data in production and 12 months of data in the reporting database. But there are a few tables with function-based indexes, spatial indexes, etc., which fall under 'unsupported datatypes'. What can be done to replicate them?

Thanks always.

Regards,

Albert Nelson A.




Tom Kyte
December 28, 2005 - 5:46 pm UTC

You would be writing your own custom apply routines, pulling from a queue and applying the changes from the logical change record (LCR). You would not be using streams replication, you would be "using streams" to capture the changes.
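A rough sketch of what such a custom apply routine could look like. The handler procedure, audit table, and table names here are hypothetical; DBMS_APPLY_ADM.SET_DML_HANDLER and the SYS.LCR$_ROW_RECORD type are the documented Streams interfaces:

```sql
-- Hypothetical custom DML handler: receives each change as an LCR
CREATE OR REPLACE PROCEDURE my_dml_handler(in_any IN ANYDATA)
IS
    lcr SYS.LCR$_ROW_RECORD;
    rc  PLS_INTEGER;
BEGIN
    rc := in_any.GETOBJECT(lcr);
    -- apply the change however you see fit; here we just record it
    -- (change_audit is an invented audit table)
    INSERT INTO change_audit (obj_owner, obj_name, operation, logged_at)
    VALUES (lcr.GET_OBJECT_OWNER, lcr.GET_OBJECT_NAME,
            lcr.GET_COMMAND_TYPE, SYSDATE);
END;
/

-- Register the handler so the apply process routes DML through it
BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'SCOTT.EMP',       -- hypothetical table
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        user_procedure => 'MY_DML_HANDLER');
END;
/
```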

Will Capture process be able to capture unsupported datatype?

Albert Nelson A, December 29, 2005 - 5:03 am UTC

Hi Tom,

The documentation

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96571/capture.htm#45693

says:

<quote>
The capture process raises an error when it finds a change that satisfies one of its rules to a table containing a column with a datatype that is not listed. The capture process does not capture DML changes in columns of the following datatypes: NCLOB, LONG, LONG RAW, BFILE, ROWID, and UROWID, and user-defined types
</quote>

If so, how would we be "using streams" to capture the changes for unsupported datatypes?

Sorry if I am mistaken.

Regards,

Albert Nelson A

Tom Kyte
December 29, 2005 - 10:25 am UTC

You can capture whatever you want, you have full access to the source database, you can capture for example just the primary keys and pull the current row using whatever programming language you are currently using (java, C, ...)

More explanation

Albert Nelson A, December 29, 2005 - 10:59 pm UTC

Hi Tom,

Sorry for pestering you.

I created single database capture and apply process as described in
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14229/capappdemo.htm

I ensured that it is working. After deleting a record in employees table I saw the record getting populated in emp_del table.

Now I added a column called point, of type mdsys.sdo_geometry, to the employees table and inserted a new row with an empty value in point.

When I deleted the record, I found the capture process stopped with the following error stack in the trace file:

knlcfFastEvalWithOptions: this LCR is not supported
(obj,bobj,vsn,prop,flags,tflags)=(49798,49798,2,2099218,1073742353,0)
error 902 in STREAMS process
ORA-00902: invalid datatype
OPIRIP: Uncaught error 21522. Error stack:
ORA-21522: attempted to use an invalid connection in OCI (object mode only)
ORA-00902: invalid datatype

Can you please explain how I can capture the primary key alone from the employees table, as I do not see any set_dml_handler procedure in dbms_capture_adm similar to the one in dbms_apply_adm?

Thanks.

Regards,

Albert Nelson A


Tom Kyte
December 30, 2005 - 9:58 am UTC

Sorry, I stand corrected on that :( Turns out what people have done in the past is vertical de-normalization (two tables) - OR a trigger to touch another table with the primary key values (not very elegant).
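The trigger workaround could look something like this (table and column names invented for illustration): maintain a small shadow table holding only the primary keys, which Streams can capture, while the unsupported geometry column stays in the original table.

```sql
-- Shadow table holding only supported columns (the PK)
CREATE TABLE employees_pk_log (
    employee_id NUMBER,
    dml_type    VARCHAR2(1),     -- I / U / D
    changed_at  DATE
);

-- Trigger on the table with the unsupported column:
-- it records just the key, and Streams captures THIS table instead
CREATE OR REPLACE TRIGGER employees_pk_trg
AFTER INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW
BEGIN
    IF DELETING THEN
        INSERT INTO employees_pk_log VALUES (:old.employee_id, 'D', SYSDATE);
    ELSIF UPDATING THEN
        INSERT INTO employees_pk_log VALUES (:new.employee_id, 'U', SYSDATE);
    ELSE
        INSERT INTO employees_pk_log VALUES (:new.employee_id, 'I', SYSDATE);
    END IF;
END;
/
```

The apply side can then pull the current row by key, for example over a database link, as Tom describes above.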

Thanks

Albert Nelson A, December 30, 2005 - 12:37 pm UTC

Thanks Tom.

Regards,

Albert Nelson A

Cloning Using Rman

prasad, January 10, 2006 - 5:57 pm UTC

Sir,
I am working as a DBA. I got your mail id while browsing the net. I have been given a task to implement cloning using RMAN (production to development). Actually, I have no idea about this. Please provide me the steps to implement CLONING using RMAN.

I am giving you the details of the databases, servers and backup strategies:

OS: IBM AIX
Production database: pd1
Server: prodser1
Backups using RMAN (hot backups daily)

For this database the RMAN catalog is RMANpd1 on server srpdrman1


OS: IBM AIX
Development database: dev1
Server: devser1
Backups using RMAN (hot backups daily)

For this database the RMAN catalog is RMANdev1 on server srdevman1



We will be very grateful if you provide the necessary steps for this cloning as early as possible.

Regards

prasad.N
DBA




Tom Kyte
January 10, 2006 - 7:42 pm UTC

did you check out the documentation?

http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc2004.htm#sthref427

it has this information.


Regarding job submission

Vikas, March 07, 2006 - 8:57 am UTC

Hi Tom,

We have established a logical Data Guard setup, and I have a question which needs an answer.

The standby database is in Read-Write mode.

If we schedule a dbms job on the primary at 10:00 AM, the metadata of the job definition will be replicated to the logical standby database through SQL Apply (via LGWR).

Now at execution time the database job will fire on the primary host, and the same changes will be posted to the logical data guard.

What will happen at the logical site since the job submit time has arrived?

I know the job won't get executed, but I need to know how Oracle knows whether to execute the job or not!

Thanks

dictionary

Samuel, November 10, 2007 - 8:46 pm UTC

One of the steps in creating a logical standby is to build a dictionary. Anyway, we are copying datafiles from the primary to create the logical standby, and the SYSTEM tablespace already contains the dictionary. Why does it need a dictionary created in the redo logs, and why do we register the logs containing the dictionary with the logical standby database? You are probably one of the very few who knows the internal workings of it, so I wanted to ask you. Thanks.
Tom Kyte
November 11, 2007 - 7:35 pm UTC

it puts a marker in the redo stream - so we know "stop being physical, start being logical" right at that point:

.... SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY new-db_name;


The statement waits, applying redo data until the LogMiner dictionary is found in the log files. This may take several minutes, depending on how long it takes redo generated in Section 4.2.3.2, "Build a Dictionary in the Redo Data" to be transmitted to the standby database, and how much redo data need to be applied. If a dictionary build is not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement from another SQL session. ....

and from then on, the logical standby has to look at its own little dictionary to manage the redo input stream - which it "boots up" from the dictionary in the redo stream - since supplemental logging is what will be used from now on to maintain the dictionary (not binary redo applied to the dictionary).
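For reference, the sequence being described is roughly this pair of commands (from the Data Guard guide; the new database name is a placeholder):

```sql
-- On the PRIMARY: write the LogMiner dictionary into the redo stream
EXECUTE DBMS_LOGSTDBY.BUILD;

-- On the STANDBY: apply redo until that dictionary marker is found,
-- then "stop being physical, start being logical"
ALTER DATABASE RECOVER TO LOGICAL STANDBY mylogical;
```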

Thank You

Samuel, November 11, 2007 - 8:21 pm UTC

Great explanation. Thanks!

logical standby Question

Samuel, January 14, 2008 - 8:06 pm UTC

Primary Database is A that has two standby databases. One is physical standby B and other is logical standby C.

Scenario: a switchover makes B the primary and A a physical standby. Now, after the switchover, what is the status of logical standby C? Does C have to be rebuilt, as its primary is now B? Thanks.
Tom Kyte
January 15, 2008 - 7:16 am UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14239/role_management.htm#i1030646

the switchover process makes the new "production" instance "the production instance"; it'll start feeding the logical standby. You do not rebuild.
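The switchover between A and B itself is the standard pair of role-transition commands (10g syntax; depending on the release and whether the standby was ever opened read-only, the new primary may need a restart rather than a simple OPEN):

```sql
-- On the old PRIMARY (A): convert it to a physical standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the old physical STANDBY (B), after it receives the end-of-redo marker:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- B then opens and starts shipping its redo to the logical standby C
ALTER DATABASE OPEN;
```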

logical standby

Samuel, January 18, 2008 - 8:49 pm UTC

Can logical standby database be in NOARCHIVELOG mode? Thanks.
Tom Kyte
January 19, 2008 - 10:50 pm UTC

that would not even begin to make a tiny bit of sense.

failovers are to be the replacement for production.

dataguard failover and the role of flashback

khalid, January 20, 2008 - 12:36 pm UTC

Tom,

I have a question based on this scenario:
At time t1 the primary db (A) suffers an outage; shortly thereafter, the secondary db (B) becomes the new primary. Then, after several hours, at time t2, the outage on A has been resolved and we want to make A the new standby. One of the options Oracle provides is to "flashback to a previous SCN".

I am specifically referring to the example in section 12.4 of the doc. Data Guard Concepts and administration, Part Number B14239-04:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1049997
It has the following SQL, to flashback to a prior SCN:

************* From the doc. ****************
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;
************* From the doc. ****************


My Question:

Why not just shutdown immediate, startup mount, and not do the flashback db? Because database A will already be at the point in time t1 where it failed. Won't it?

Thanks,
Khalid





Tom Kyte
January 20, 2008 - 1:39 pm UTC

because in general, when you fail over - you lose some transactions - the standby will lag behind the production instance, the production instance would be "in front of" the standby - there would be some transactions committed on the production instance that never see the light of day on the failover site.

we have to erase them, get rid of them, put the production site in the same state the standby was in when it started as "production".
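Putting the pieces together, the reinstatement of A as a standby of B is roughly the following (from the scenario chapter being quoted; substitute the actual SCN value obtained on B):

```sql
-- On the NEW primary (B): find where it became primary
SELECT TO_CHAR(standby_became_primary_scn) FROM v$database;

-- On the OLD primary (A): erase the orphaned transactions...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN standby_became_primary_scn;  -- use the value from B

-- ...and convert A into a standby of B
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```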

After your reply, ... it makes perfect sense

Khalid, January 20, 2008 - 4:38 pm UTC

After your reply, I went back and read the example one more time, and it makes perfect sense this time. I could never have figured that one out for myself; it kept bothering me ...

Which is why we went to the new primary and got the SCN from there, like this:
************ from doc. **************
SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
************ from doc. **************

Thanks



A reader, July 16, 2008 - 12:47 pm UTC

Can I do a transportable tablespace backup from a logical standby database ? The scenario is in production we have a primary database, a physical standby and a logical standby. If we want to refresh test environments with production data then instead of going to the production is it possible to create a transportable tablespace backup from the logical standby in order to create the test environment ?
Tom Kyte
July 17, 2008 - 11:34 am UTC

if you are doing a transportable tablespace backup, you wouldn't be "going to production", you'd be going to production's BACKUPS.

bigger difference between oldest online log sequence# and Next online log sequence#

Shailesh Patel, September 07, 2008 - 5:39 am UTC

Hi, Tom,

I found a big difference on the DR side:

Oldest online log sequence     24955
Next log sequence to archive   0
Current log sequence           24982

but my production database is OK. Will you please let me know why there is a difference between the oldest and current log sequence?

Regards,

Shailesh

Tom Kyte
September 08, 2008 - 3:51 pm UTC

I don't even know where you got these numbers from or why you believe there is a problem with them.....

A query reg. logical standby

Anurag, October 09, 2008 - 12:59 am UTC

Hi Tom,
In note 738643.1, the below line is mentioned:

2 : Ensure Table Rows in the Primary Database Can Be Uniquely Identified.


What does it mean exactly, and what implications will it have if we ignore it?

Thanks in advance.

Tom Kyte
October 09, 2008 - 10:45 am UTC

you have a primary key in place - that is what it means.

You cannot ignore it.
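A quick way to find the offenders is the documented DBA_LOGSTDBY_NOT_UNIQUE view (the ALTER TABLE below uses a hypothetical table). Without a key, SQL Apply has to identify rows by comparing every column, which is slow and ambiguous if rows are not distinct:

```sql
-- Tables whose rows cannot be uniquely identified
-- (no primary key and no non-null unique constraint)
SELECT owner, table_name
  FROM dba_logstdby_not_unique
 WHERE (owner, table_name) NOT IN
       (SELECT DISTINCT owner, table_name
          FROM dba_logstdby_unsupported);

-- Fix: give each such table a primary key
ALTER TABLE app.t ADD CONSTRAINT t_pk PRIMARY KEY (id);
```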

Logical Standby Recovery

Dick Goulet, January 30, 2013 - 8:56 am UTC

Question that is perplexing us at the moment. We have a logical standby of a primary database. There are quite a few new schemas in this database that have views and materialized views over the data in the replicated schemas, for a portal application. The question is: what do you do if you have to recover the primary database? In the case of a complete recovery that does not appear to be an issue, but what about an incomplete recovery? I'm assuming that we'd have to do an incomplete recovery of both databases to the same point in time. Also, what about an incomplete recovery of the logical standby? I'm assuming here that the logical standby would simply ask the primary for the needed redo logs. This is something that the documentation doesn't seem to answer.
Tom Kyte
January 31, 2013 - 10:04 am UTC

but what about an incomplete recovery?

this is always an issue in a replication environment - in replication, you can never do a timed based recovery of one database without doing them all..

You'd have to do tablespace point in time recoveries at the very least of the "standby" data.

http://docs.oracle.com/cd/B19306_01/backup.102/b14191/recoscen.htm#CACICDFC


In short, you'd have to synchronize those two data sources - you would never want to do an incomplete recovery (physical standby - that would be OK, just flashback the standby) in a replicated environment.


Logical standby recovery

Dick Goulet, February 06, 2013 - 10:57 am UTC

MANY thanks. That is the same answer that I believed to be true. You guys are just too logical.

Unsupported tables

coredump, September 20, 2013 - 1:46 am UTC

Hello Tom

There was a recent issue with the logical standby setup in our organisation where SQL Apply on the logical standby could not keep up with some very heavy transactions on the primary side. Now, per the vendor's recommendation, the application team is suggesting that some of those tables be put in the SKIP/exclude list, as they are not required to be replicated on the logical side. However, I am confused because some of these tables have dependencies on replicated tables (for example, some tables which we intend to SKIP have child tables, and some others have parent tables, in replication). Is it possible/feasible to build an architecture this way? Will it cause any problems, or will Oracle throw these tables into the unsupported list?

The DB runs in GUARD NONE mode and we also plan to move it to GUARD STANDBY mode now (for another reason).
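For reference, the skip list being discussed is configured with DBMS_LOGSTDBY.SKIP while SQL Apply is stopped (the schema and table names below are placeholders). Note that skipping a parent table while still replicating its child will break foreign-key integrity on the standby unless the constraints there are disabled or marked RELY:

```sql
-- On the logical standby: stop SQL Apply before changing skip rules
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

BEGIN
    DBMS_LOGSTDBY.SKIP(stmt        => 'DML',
                       schema_name => 'APP',        -- placeholder
                       object_name => 'BIG_TABLE'); -- placeholder
END;
/

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```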