

Question and Answer

Tom Kyte

Thanks for the question, San.

Asked: January 30, 2001 - 11:29 am UTC

Last updated: November 01, 2013 - 6:42 pm UTC

Version: 8.1.5

Viewed 10K+ times

You Asked

Hi Tom,

I am in the process of setting up the standby database for my production database. I have gone through a couple of documents. Most of the things are clear to me, but there is still some confusion:
1. Do I need to have totally separate hardware (hard disk, controller, etc.), or can I create the standby database on the same hardware?
2. Do I need to run my primary database in archivelog mode only, or can I run it in noarchivelog mode and still use the standby database facility? The same question for the standby database: in what mode (archive/noarchive) do I need to run the standby database so I can switch over whenever it's required?

3. In one of the standby database documents I have seen "The standby database is kept constantly in recovery mode." My question is: what do you mean by recovery mode?

4. Do I have to set the following initialization parameters at the standby database site:
-db_file_standby_name_convert = ("/dbs/","/dbs/s")
-log_file_standby_name_convert = ("/dbs/","/dbs/s")

If yes, why don't I have to set those parameters at the primary database site? What exactly are those parameters used for, and what are the different and default values for them?

5. While performing Switchover from primary to standby database:
1. I have to apply all archived redo logs to the standby database.
2. Issue the CANCEL command to halt recovery.

and my question is: when do I issue the CANCEL command? How do I know when to do it?

Thanks
San

and Tom said...

1) you can but it is POINTLESS. A standby database is for a catastrophic failure -- e.g. the computer with the database on it got burned up in a fire. The machine is gone. The disks all exploded.

That would be like keeping your backup on the same disks as the original copies -- it would not do very much for you.

We support running a standby on the same machine for testing purposes only. In production, the standby should have its own machine, own processors, own disks, and would most likely be on a different network altogether.

2) A production database by definition will run in ARCHIVELOG mode -- regardless of whether you are using standby or not. If you do not run in this mode, you must be willing to LOSE DATA at some point in time.

That aside, a standby database requires archivelog mode.

All production machines must run in archive log mode else they will lose committed transactions the instant a disk fails (and that should happen about once a year in many systems)

3) it is constantly applying archived redo logs from the primary database as they are generated -- it is recovering the transactions from the redo logs.

4) no, not if the file system setup is the same. They allow you to have different mount points on the standby site. They simply translate file names from the primary site into file names on the standby. You might have Oracle stuff in /u01/oradata on the primary but in /u25/oradata on the standby.
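For example, a hypothetical init.ora fragment for the standby site, using the 8i parameter names and made-up mount points to illustrate the translation Tom describes:

```
# Hypothetical standby init.ora fragment (paths are made up).
# Translates primary file names under /u01/oradata into the
# corresponding standby names under /u25/oradata.
db_file_name_convert  = ("/u01/oradata", "/u25/oradata")
log_file_name_convert = ("/u01/oradata", "/u25/oradata")
```

If the directory layout is identical on both machines, these parameters can simply be left unset.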

5) after you've had the last log applied from the primary, issue the cancel recovery command. We've no more logs to apply, so it's time to tell us to stop waiting for more: cancel recovery.
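A minimal sketch of that last step on the standby, assuming managed recovery with 8i/9i syntax:

```
-- On the standby, once the last archived log from the primary
-- has been applied, stop waiting for more:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- (With manual recovery you would instead answer CANCEL at the
--  "Specify log:" prompt of RECOVER STANDBY DATABASE.)
```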

(157 ratings)

Comments

Feedback Requested

Saminathan Seerangan, April 18, 2003 - 7:12 am UTC

Dear Tom,

1. Why is the DB link not working in the standby database?

I have created a DB LINK in the primary db and the link does exist in the standby db's DBA_DB_LINKS view. But when I issue a select statement like

SQL> select * from employees@tamizh_dblk;
select * from employees@tamizh_dblk
              *
ERROR at line 1:
ORA-16000: database open for read-only access

I'm just selecting, I'm not updating anything. Could someone explain to me why DB link access is not allowed?

Could you please provide me list of skipped SQL Statements on a Physical Standby Database?

2. I set up the physical standby DB without a temporary tablespace. Do I have to create a TEMPORARY tablespace? Is it mandatory?

My understanding is that if I go for large SORT operation(in read only mode), standby DB should have temporary tablespace. Correct?

3. The ROWIDs between the PRIMARY and the physical standby DB will always be the same. Right?

Expecting your valuable response as always.

 

Tom Kyte
April 18, 2003 - 12:19 pm UTC

1)
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:8908307196113

(searched for ora-16000)


The documentation is awesome in this regard -- you haven't given a version, but it is all on OTN, and the considerations you need to take into account about statements against a physical standby are all there. Assuming 817:

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/toc.htm

and check out chapter 4!



2) see
http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/standbyr.htm#27494

mandatory -- no, almost always necessary -- yes.


3) correct. they are PHYSICAL standbys -- bit for bit copies.

additional info for question # 2

Saminathan Seerangan, April 18, 2003 - 9:07 am UTC

Tom, how come the TEMP tablespace appears without a datafile in the physical STANDBY DB?

SQL> connect sys as sysdba( connected to STANDBY DB)
SQL> select tablespace_name,status from dba_tablespaces;

SYSTEM                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
DRSYS                          ONLINE
INDX                           ONLINE
TOOLS                          ONLINE
USERS                          ONLINE
XDB                            ONLINE

8 rows selected.

SQL> select tablespace_name,file_name from dba_data_files where tablespace_name like '%TEMP%';

no rows selected

SQL> 

Tom Kyte
April 18, 2003 - 12:25 pm UTC

temp files are in dba_temp_files
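For example, a quick check against the standard dictionary views (dba_data_files shows nothing for TEMP because temp files live in their own view):

```
-- Temp files are tracked separately from ordinary datafiles:
SELECT tablespace_name, file_name
  FROM dba_temp_files;
```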

Cross Platform Standby Database

Randy, April 18, 2003 - 12:34 pm UTC

Tom,

I am taking this opportunity to ask you if it is possible to have a standby database on a Windows box for a production database on a Unix box. I know the archive logs are not platform independent, at least they were not in 8i. Just wondering if it's possible in 9i or the coming 10i? Do you have any insight about that?

Thanks


Tom Kyte
April 18, 2003 - 12:44 pm UTC

standbys are totally -- 100% -- "homogeneous".

In 9iR2 you can use Streams to have a reporting instance as above, which loosely could be considered as such -- but it wouldn't really be a standby in the normal sense.

Cross Platform Standby Db

A reader, April 18, 2003 - 12:55 pm UTC

Can you explain a little more what Streams is? It's also not very clear what you meant by a reporting instance. What similarities does it have with a standby db?



Tom Kyte
April 18, 2003 - 3:47 pm UTC

all about streams:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96571/toc.htm


A standby database (logical or physical) can be used as a reporting instance (run reports).

You can build a reporting instance as well using streams (a copy of the data to be used to run reports).

A standby can be a reporting instance.

In a pinch, sometimes, depending on your needs, a reporting instance could be considered a "last ditch failover site".

Logical Standby and Streams

Logan Palanisamy, April 18, 2003 - 5:11 pm UTC

Tom,

You said

"In 9iR2 you can use Streams to have a reporting instance as above, which loosely could be considered as such -- but it wouldn't really be a standby in the normal sense"

Isn't that called "Logical Standby" database in 9iR2?

I guess Logical Standby uses the Streams technology. The Streams technology could be used for other things as well, besides Logical Standby. Am I correct?

Tom Kyte
April 18, 2003 - 6:48 pm UTC

No, there is a logical standby -- automated, out of the box -- but HOMOGENEOUS. It is a COMBO of streams + data guard (lgwr shipping redo -> another machine, that's data guard; then streams converting that into logical change records -> sql apply)

streams by itself would be very different. You would lose the data guard component and be set up for potential data loss in the event of a production box failure. But it can be heterogeneous.

hidden archive logs

Alvin, April 30, 2003 - 4:59 am UTC

One of our standby db's had to be discontinued because of network problems.

I ran the following command :

alter system set log_archive_dest_2=defer

When the network was OK I re-enabled it, copied the missing gap-sequence logs, and re-applied them.

But the funny thing is the archive logs become hidden or invisible after they have been applied, and can only be seen by the unix command:

du -a

but if you list (ls -al arch_1_xxxx.arc) a specific archive log it looks fine and regular.

(???) i always get the weirdest error! karma ? <-- pls. ignore ranting.

Tom Kyte
April 30, 2003 - 7:22 am UTC

umm, what command is "missing them" then - seems that ls sees them, du sees them?

(knowing what Unix you're on would be useful as well)

Alvin, May 02, 2003 - 1:44 am UTC

I ran the list (ls) command...

/export/home/oracle/arch-> ls
/export/home/oracle/arch-> ls
/export/home/oracle/arch-> ls -a
/export/home/oracle/arch-> ls -al
total 4568816

i don't see any files.
but if i ran 'du -a' i get...

1024 ./arch_1_64452.arc
1024 ./arch_1_64453.arc
1024 ./arch_1_64454.arc
1024 ./arch_1_64455.arc

/export/home/oracle/arch-> uname -a
SunOS <server_name> 5.8 Generic_108528-11 sun4u sparc SUNW,UltraSPARC-IIi-cEngine

When I transferred the gap sequence the files seemed to be visible to the 'ls' command, but I think after applying the logs they became invisible? Do you think Oracle has something to do with it? I did check if the oracle user owns the subdirectory, and oracle does.

My Unix admin guy is as boggled as I am.

Tom Kyte
May 02, 2003 - 8:04 am UTC

this would be a question for Sun really -- I am not able to reproduce through any concocted test of my own on solaris

To refresh or not to refresh the CF

Alvin, May 11, 2003 - 10:53 pm UTC

After adding 5x2mb worth of logs I'm still getting the "cannot allocate new log" warnings... the last time I did it I refreshed the controlfile of both standbys.

I'm planning to add more but after browsing through the standby docs of 8i i get...

<cut from docs>
Chapter 4 - Performing Maintenance on a Standby Database

Adding or Dropping Redo Logs on the Primary Database
You can add redo log file groups or members to the primary database without affecting the standby database. Similarly, you can drop log file groups or members from the primary database without affecting your standby database. Enabling and disabling of threads at the primary database has no effect on the standby database.

Consider whether to keep the online redo log configuration the same at the primary and standby databases. Although differences in the online redo log configuration between the primary and standby databases do not affect the standby database functionality, they do affect the performance of the standby database after activation. For example, if the primary database has 10 redo logs and the standby database has 2, and you then activate the standby database so that it functions as the new primary database, the new primary database is forced to archive more frequently than the old primary database.

To prevent problems after standby activations, Oracle Corporation recommends keeping the online redo log configuration the same at the primary and standby databases. Note that when you enable a log file thread with the ALTER DATABASE ENABLE THREAD statement at the primary database, you must create a new control file for your standby database before activating it. See Refreshing the Standby Database Control File for procedures.
<end cut>

Can I add extra log files to the primary and not refresh the standby control file? Reading the text above, it seems like Oracle "recommends" (not requires) refreshing the standby controlfile so the standby would perform like the primary. But I feel that I can add redo logs without refreshing the standby CF.




Tom Kyte
May 12, 2003 - 7:32 am UTC

you can add them without refreshing the standby, the caveat they are talking about is "hey, if you activate -- fail over -- to the standby, the tuning you did for the redo logs won't be over there, they will not be sufficient perhaps to handle your load, so maybe you want to keep them the same"

You can always add more logs to the standby if/when you failover.
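Adding logs after a failover is a one-liner; a sketch, with a made-up file name and size:

```
-- On the activated standby, add online redo log groups as needed
-- (group number, path and size here are hypothetical):
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/prod/redo04.log') SIZE 10M;
```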

init ora compatible parameter

Alvin, May 13, 2003 - 6:09 am UTC

Greetings Sir,

After reading the concurrency part of your book... I tried increasing the freelists of my table and i got this message.

ORA-00406: COMPATIBLE parameter needs to be 8.1.6.0.0 or greater

besides standby databases what other module/s or technology would be affected ? (our frontend is developed in java)

Tom Kyte
May 13, 2003 - 7:37 am UTC

have a backup just in case (good idea regardless) but changing compatible upwards is generally "safe". I'm not aware of anything other than standbys and RAC.

In general, once it is set "up", it cannot be set "back down"

accidentally hit the submit..

Alvin, May 13, 2003 - 6:18 am UTC

what other modules would be affected when I change the compatible setting in the init.ora files?

SID's

Alvin, June 16, 2003 - 11:16 pm UTC

Can the primary DB's SID be the same as the standby's SID, or is it required that I change it?

I get this error from the v$archive_dest view.

rtbe.standby3
17-JUN-03 71422 0 ORA-16032: parameter destination string cannot be translated

I rechecked the init.ora file, and the db_file_name_convert and log_file_name_convert seem to be correct.

Would it be better for me to use the (Service_name=....) in the primary's tnsnames.ora or (SID=....)


Standby's Parameter
====================
OS configs ORACLE_SID=rtbe (same with the primary's)

init params
===========
db_name = "rtbe"
db_domain =standby3
instance_name = rtbe
service_names = rtbe.standby3




Tom Kyte
June 17, 2003 - 7:20 am UTC

[tkyte@localhost tkyte]$ oerr ora 16032
16032, 00000, "parameter %s destination string cannot be translated"
// *Cause: The value for the specified parameter contained a destination
// string that could not be translated.
// *Action: Use a valid destination string in the specified parameter.

you have a simple configuration issue here. the sid is OK. your net configuration is not.

Pls don't answer the above post !!!!

Alvin, June 17, 2003 - 12:19 am UTC

It was the standby_dest_archive param...

I had put in standby_dest_archive='Location=....'; I removed it and the standby is now receiving archive files.

Sorry !

stand by db

A reader, June 30, 2003 - 11:22 am UTC

Tom,

We are looking at copying data from a 24x7 production box (box 1) to a different box (box 2). Box 2 will have the exact same copy of data in real time (almost real time). De-normalized views will be built on box 2, and box 2 will then be used for data warehousing purposes.

To transfer data from box 1 to box 2, we are looking at using Shareplex, a tool from Quest. Shareplex copies redo logs from box 1 to box 2.

My questions
1) Is there a way in Oracle to move all the data from box 1 to box 2 in almost real time? (standby db!)
2) For keeping the standby db in synch with the primary db, is there any advantage using the archive redo logs?
Why can't we use the redo logs instead of the archive redo logs?
3) Do Oracle Streams in 9i come into picture here anyway?


Thank you


Tom Kyte
June 30, 2003 - 1:20 pm UTC

if you have 9iR2, you already own something that does all of that and more:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96653/standbycon.htm#1013849



stand by DB

A reader, June 30, 2003 - 4:06 pm UTC

Tom,
Do we have the concept of Logical Standby Database (or equivalent) on Oracle 8i?

Thank you

Tom Kyte
June 30, 2003 - 4:15 pm UTC

No. New in 9iR2

Reader

A reader, June 30, 2003 - 11:02 pm UTC

Is creating a database on a heterogeneous platform using Shareplex from Quest supported by Oracle?

Tom Kyte
July 01, 2003 - 8:02 am UTC

shareplex is supported by quest, not by Oracle.

Patch Standby DB first...

Alvin, July 01, 2003 - 12:18 am UTC

I have an OLTP DB version 8.1.7.0.0 and we wanted to check the duration or downtime we would incur if we patched it to 8.1.7.4.0

So my boss asked me if we could patch up the standby db first to see how long it would take before patching up production. The Oracle docs state that the OS, DB version, and directory structures should be the same in a standby db environment.

Is this safe ? patching up standby's before the production ?


Tom Kyte
July 01, 2003 - 8:06 am UTC

this is what "test" is for. Do it on your "test" or "development" machine - since you of course need to TEST your application after the upgrade.

It seems you want HA (high availability). That is why you are running a standby server. Performing an upgrade without testing it would sort of defeat your goals.

clarification..

Alvin, July 01, 2003 - 9:44 pm UTC

Does that mean I can go ahead and patch up my standby DB to 8.1.7.4.0 and open it read only to see if it's OK?
Then, if it is, can I go ahead and patch the production server?

How do I check if it's OK? Via trace logs?



i meant development server this time

Alvin, July 02, 2003 - 12:34 am UTC

I get your point. Standby for HA.

How would I know, after patching the "DEVELOPMENT" server, that everything is OK?

Via trace logs? If it starts up OK? Is there a table that I should query?

Tom Kyte
July 02, 2003 - 7:25 am UTC

you would test your application? you would have an environment that looks like production, you would upgrade it, you would test your feature/function set -- make sure everything works.

You know, what you do before inflicting a new release of your applications on the unsuspecting end users.

Doubt

jignesh, August 08, 2003 - 12:40 pm UTC

In order to keep a standby database up-to-date, archived redo log files need to be transferred from the primary to the standby server. Two ways to do this:

1 Use operating system commands to transfer the files.
2 Set the STANDBY_ARCHIVE_DEST parameter on the standby database

I am not clear on the 2nd point. Help please.


Tom Kyte
August 10, 2003 - 11:56 am UTC

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/toc.htm

read about the standby database.

A use for Oracle Streams?

Matt, August 11, 2003 - 3:49 am UTC

I just saw your reference to Oracle Streams. This looks good!

I was wondering whether it would be possible to use this technology to identify the data changes in a DB, and then automatically generate the SQL that produced these changes.

With this SQL it would be possible to accurately mimic the data changes in a production environment to examine database performance. All that would be needed to reflect real usage would be to codify pauses between statement execution. The result would be a production data load simulator - but without having to write something to interpret raw trace files.

Thanks in advance.

Tom Kyte
August 11, 2003 - 7:37 am UTC

You can use this to automatically generate row by row, equivalent sql.

that is, in production you issued:

update t
set x = 5
where y = 10;


that updated 100 rows. with streams, you could generate 100 updates, a row at a time, that did the same thing.


You would not be able to gauge performance in this fashion as the "query" portion of the update is effectively gone. Your complex selects included in the modification DML are "gone".

Streams problem?

Sasa, August 29, 2003 - 4:07 am UTC

Hi Tom,

We created streams between two DBs (through the Oracle Enterprise Manager console, which creates setup scripts), and while running the setup scripts (the part for adding table rules) we got an error that the capture process does not exist, although the doc claims that the capture process (if it does not exist) will be created when adding table rules.
I will copy/paste this exception.

BEGIN
dbms_streams_adm.add_table_rules(
table_name => '"MMFIXEDODDS"."FOSPORTTYPE"',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => '"STRMADMIN"."STREAMS_QUEUE"',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'SJAWU');
END;
/


error AT line 1:
ora-26701: streams process strmadmin_capture does NOT exist
ora-06512: AT "SYS.DBMS_STREAMS_ADM", line 75
ora-06512: AT "SYS.DBMS_STREAMS_ADM", line 21
ora-06512: AT line 2

Source DB
----------
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
PL/SQL Release 9.2.0.3.0 - Production
CORE 9.2.0.3.0 Production
TNS for 32-bit Windows: Version 9.2.0.3.0 - Production
NLSRTL Version 9.2.0.3.0 - Production

Destination DB
--------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
PL/SQL Release 9.2.0.4.0 - Production
CORE 9.2.0.3.0 Production
TNS for 32-bit Windows: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production

Thanks in advance.


Tom Kyte
August 29, 2003 - 9:14 am UTC

did you use a drop user cascade during this setup -- to clean up schemas and such?





Streams problem

Sasa, August 29, 2003 - 9:36 am UTC

Yes we did, as we haven't found in the doc any way of "cleaning" such things.

Tom Kyte
August 29, 2003 - 10:42 am UTC

contact support

i believe it is the same issue as

2378852

Abstract: Cannot drop APPLY process after DROP USER CASCADE of APPLY user
Tags: STREAMS
Details:
Cannot drop APPLY after DROP USER CASCADE on the APPLY user.
Eg: - Connect as a user u1 and create an apply process a1
using an API in the dbms_apply_adm or dbms_streams_adm packages.
- Start the apply process by calling dbms_apply_adm.start_apply('a1').
- Connect as user u2 and drop user u1 cascade.
- Recreate user u1 and grant the same privileges as before.
- Connect again as u1 and try to drop or create the apply process a1,
Both fail.
Workaround:
Use a new apply process name.


and there does seem to be a patch for 9204


standby and nologging

David, June 16, 2004 - 11:04 am UTC

If I created an index with NOLOGGING clause on the primary database, my understanding is that index creation is logged but not the index entries. Is it correct? If so, data dictionary will have info of the definition of index but the index entries are not logged. If this is the case, would the index creation be propagated to standby database? In other words, will the index be created on the standby database without any index entries? Thanks.

Tom Kyte
June 16, 2004 - 1:13 pm UTC

if you create an index with nologging, the initial create and subsequent BULK operations (rebuilds) are not logged.

all inserts, updates, deletes -- ALL inserts, updates, deletes are fully and thoroughly logged.

the index definition would be sent to the standby, the index data itself would not be.

In a standby environment, using "non logged" operations is contrary to your goal (of having a DR site).

Apply lag

Jeff Hunter, August 27, 2004 - 1:21 pm UTC

I have setup a simple physical standby database using Oracle 9.2.0.5. My primary database spits out archived redo logs at the rate of about 128M/minute. Because I don't want to affect the performance of the primary db, I am not using lgwr to write to both destinations.

The logs get generated on the primary. They get transferred to the standby fine. However, the standby takes a while to apply them during the managed recovery process. For example, my primary db has generated logs 100-150 and they are sitting on disk on my primary box. Also, logs 100-150 are sitting on my standby box as they have been transferred correctly. However, my managed recovery on the standby is only on "Media Recovery Log ...1_135.log".

How do I monitor the lag between what's been transferred and what's been applied? I looked at v$archived_log where dest_id = 1 and dest_id=2, but they were in sync because the logs had already been shipped to the standby successfully. There was nothing in v$archive_gap as the primary doesn't think anything else needs to be transferred. I am not using the Data Guard tools at the moment.
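One way to sketch this check on the standby itself, using the 9i dynamic views (the exact columns available can vary by release, so treat this as an assumption to verify against your version's docs):

```
-- Highest log sequence received on the standby:
SELECT MAX(sequence#) AS last_received
  FROM v$archived_log;

-- Highest log sequence actually applied by managed recovery:
SELECT MAX(sequence#) AS last_applied
  FROM v$archived_log
 WHERE applied = 'YES';

-- Per-process view of managed recovery progress:
SELECT process, status, sequence#
  FROM v$managed_standby;
```

The gap between the two MAX values is the apply lag in log sequences.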

Right on the mark

A reader, August 27, 2004 - 5:36 pm UTC


Easy way to identify nologging operation

L. Palanisamy, September 02, 2004 - 1:55 pm UTC

Tom,

How can a DBA find out if a nologging operation has been performed on the database since the last backup?

Only very recently, I read on asktom that you can query a DBA_ or V$ table to get this information. But I can't seem to find it. Can you help?

Tom Kyte
September 02, 2004 - 2:05 pm UTC

v$datafile has an unrecoverable_change#/time in it.
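For example, using the v$datafile columns Tom names:

```
-- Datafiles touched by unlogged (nologging) operations show a
-- non-zero unrecoverable change number and a timestamp:
SELECT name, unrecoverable_change#, unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_change# > 0;
```

Compare unrecoverable_time against the time of your last backup of each file.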

Operations on tables with more than 255 columns not getting logged in Archived Redo Logs

Logan Palanisamy, September 07, 2004 - 12:06 pm UTC

Tom,

Looks like operations on tables with more than 255 columns don't seem to get logged in the archived redo logs, even with force_logging. This would have serious implications for our standby database.

The sql_redo column of V$logmnr_contents says, "unsupported" for these wide tables. Is this a bug? Is there a way to get around this?

Here is the demo:

$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.4.0 - Production on Tue Sep 7 08:34:11 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production

SQL> connect / as sysdba
Connected.
SQL> 
SQL> drop user a cascade;

User dropped.

SQL> create user a identified by a;

User created.

SQL> grant dba to a;

Grant succeeded.

SQL> 
SQL> connect a/a
Connected.
SQL> 
SQL> declare
  2  l_stmt long;
  3  begin
  4  l_stmt := 'create table t10 ( x int primary key';
  5  for i in 1 .. 10
  6  loop
  7  l_stmt := l_stmt || ', c' || i || ' number default 1';
  8  end loop;
  9  execute immediate l_stmt || ')';
 10  
 11  l_stmt := 'create table t260 ( x int primary key';
 12  for i in 1 .. 260
 13  loop
 14  l_stmt := l_stmt || ', c' || i || ' number default 1';
 15  end loop;
 16  execute immediate l_stmt || ')';
 17  
 18  end;
 19  /

PL/SQL procedure successfully completed.

SQL> 
SQL> create table t10_log (creation_date date);

Table created.

SQL> create table t260_log (creation_date date);

Table created.

SQL> 
SQL> create or replace trigger t10_trg
  2  after insert on t10
  3  for each row
  4  begin
  5  insert into t10_log values (sysdate);
  6  end;
  7  /

Trigger created.

SQL> 
SQL> create or replace trigger t260_trg
  2  after insert on t260
  3  for each row
  4  begin
  5  insert into t260_log values (sysdate);
  6  end;
  7  /

Trigger created.

SQL> 
SQL> alter system switch logfile;

System altered.

SQL> 
SQL> insert into t10 (x) values (1);

1 row created.

SQL> insert into t260 (x) values (1);

1 row created.

SQL> 
SQL> alter system switch logfile;

System altered.

SQL> connect / as sysdba
Connected.
SQL> 
SQL> select force_logging from v$database;

FOR
---
YES

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /export/home/oraarch/ERPSUPP
Oldest online log sequence     886
Next log sequence to archive   888
Current log sequence           888
SQL> 
SQL> alter session set nls_date_format='DD-Mon HH24:MI:SS';

Session altered.

SQL> 
SQL> exec dbms_logmnr.add_logfile(logfilename => '/export/home/oraarch/ERPSUPP/arch_0000000887.dbf', options => dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

PL/SQL procedure successfully completed.

SQL> 
SQL> select min(timestamp), max(timestamp) from v$logmnr_contents;

MIN(TIMESTAMP)  MAX(TIMESTAMP)
--------------- ---------------
07-Sep 08:34:48 07-Sep 08:34:48

SQL> 
SQL> set pagesize 49999
SQL> set linesize 120 trimspool on
SQL> col sql_redo format a66
SQL> 
SQL> select timestamp, sql_redo
  2  from v$logmnr_contents
  3  where sql_redo is not null
  4  order by timestamp;

TIMESTAMP       SQL_REDO
--------------- ------------------------------------------------------------------
07-Sep 08:34:48 set transaction read write;
07-Sep 08:34:48 insert into "A"."T10"("X","C1","C2","C3","C4","C5","C6","C7","C8",
                "C9","C10") values ('1','1','1','1','1','1','1','1','1','1','1');

07-Sep 08:34:48 insert into "A"."T10_LOG"("CREATION_DATE") values (TO_DATE('07-Sep
                 08:34:48', 'DD-Mon HH24:MI:SS'));

07-Sep 08:34:48 Unsupported
07-Sep 08:34:48 Unsupported
07-Sep 08:34:48 insert into "A"."T260_LOG"("CREATION_DATE") values (TO_DATE('07-Se
                p 08:34:48', 'DD-Mon HH24:MI:SS'));


6 rows selected.

SQL> 
SQL> exit;
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production 

Tom Kyte
September 07, 2004 - 1:00 pm UTC

it gets logged? what makes you think otherwise?

logminer didn't support chained/migrated rows -- but the data guard physical standby won't be affected.



It is a log_miner issue. Not archive log issue

Logan Palanisamy, September 07, 2004 - 12:27 pm UTC

Tom,

Just to be sure, I activated my standby and checked the record count for the tables with more than 255 columns. All the records are there. What a relief!

So, it must be a log miner issue rather than the archive log one.



Tom Kyte
September 07, 2004 - 1:05 pm UTC

correct, chained rows.

A reader, September 21, 2004 - 3:20 pm UTC

Tom,

Is it legal/ok to sqlplus a standby database?

Tom Kyte
September 21, 2004 - 3:31 pm UTC

sure, that is the tool you use to interact as an administrator with the standby database.

also, physical standby's can be open READ ONLY -- sqlplus is fine

logical standby's are always open read write, so sqlplus is again fine.

A reader, September 21, 2004 - 3:39 pm UTC

Thanks Tom,

Somebody was trying to tell me that this is the message I would see when I try to connect to a standby database. I was asked not to sqlplus the standby. I tried connecting as a normal "user". Could that be true?

ERROR:
ORA-01033: ORACLE initialization or shutdown in progress


Tom Kyte
September 21, 2004 - 3:47 pm UTC

sure, the database is in managed standby mode, only SYSDBA could connect right now. normal users cannot connect because the database is doing recovery right now.

Q: can you connect to a standby with sqlplus?
A: yes, as sysdba in managed recovery mode, as "joe-user" in read only mode

A reader, September 21, 2004 - 3:55 pm UTC

Thanks Tom,

Any query to check what mode it is in right now?

Tom Kyte
September 21, 2004 - 4:09 pm UTC

sysdba can, yes. are you a sysdba?

To last reviewer (A Reader)

Bill, September 21, 2004 - 4:11 pm UTC

Seems that in order to query it to find out its status... you'd have to be logged on? By attempting to log on and failing (with the above mentioned error msg), you would immediately know the status. ;-)

A reader, September 21, 2004 - 4:41 pm UTC

Tom,

I am a sub-junior sysdba. I only can work under supervision.

What does this mean?

SQL> select * from V$MANAGED_STANDBY;
select * from V$MANAGED_STANDBY
              *
ERROR at line 1:
ORA-01219: database not open: queries allowed on fixed tables/views only
 

Tom Kyte
September 21, 2004 - 7:19 pm UTC

[tkyte@localhost tkyte]$ oerr ora 1219
01219, 00000, "database not open: queries allowed on fixed tables/views only"
// *Cause: A query was issued against an object not recognized as a fixed
// table or fixed view before the database has been opened.
// *Action: Re-phrase the query to include only fixed objects, or open the
// database.


just means that view cannot be queried while the database is in its current state.

A reader, September 21, 2004 - 9:27 pm UTC

Tom,

Thanks very much! Though I am following the documentation, I am compelled to ask this one last question on this topic.

If I could log in as sysdba, why was I unable to query that view? The bottom line is that I want to find the status of the db at a given instant. Is it read-only or is it in managed recovery?

By getting an error to the above query, does that mean it was standby?

Finally, how did it switch out of read-only mode (back to standby?), and how is that done?



Tom Kyte
September 22, 2004 - 7:36 am UTC

can you work with your DBA in charge to get the lay of the land?

A reader, September 21, 2004 - 9:30 pm UTC

A few more to add to the above please ...

By turning data guard on (start), will the standby switch from read only to managed recovery?

And must the standby necessarily be offline while data guard applies archives? (I think it's not necessary)

Thank you



Tom Kyte
September 22, 2004 - 7:37 am UTC

see above, your DBA that set this up should be teaching you this.

the database is either PUT INTO managed recovery or PUT INTO read only - it is like a fork in the road. Pick a path and go down it...
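The "fork in the road" corresponds to two commands on a mounted standby (a sketch; pick one path or the other):

```sql
-- path 1: managed recovery -- redo is applied, normal users cannot connect
alter database recover managed standby database disconnect from session;

-- path 2: read only -- queries allowed, redo accumulates but is not applied
alter database recover managed standby database cancel;
alter database open read only;
```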

Standby database in standard edition?

Jelena, October 21, 2004 - 1:00 pm UTC

Hi Tom,
we are using replication now and it creates a lot of maintenance and overhead, so we are thinking of switching to a standby database. We are using version 9.2.0.4 SE.
I found that Data Guard is an option in Enterprise Edition only, but for a standby database I didn't find any notes -- so is it possible to use a standby database with 9i Standard Edition?
One more question: are all DDLs applied on the standby database, or are there exceptions? (Or, my question is, could it be configured so that 'drop table' cannot be done on a standby db?)
Thanks!


Tom Kyte
October 21, 2004 - 3:38 pm UTC

there is no MANAGED standby in SE, only "do it yourself" (you ship the logs, you apply them).

using a logical standby (only in 9i EE), you could do this (the drop table thing)

Effect of IMP and Bulk operation in primary db on standby db

Laxman S Kondal, October 22, 2004 - 12:50 pm UTC

Hi Tom

I am working on creating a DG setup which will have bulk operations and imports of user schemas -- and found this in one of your comments above, where you said:
'In a standby environment, using "non logged" operations is contrary to your goal (of having a DR site)'

What about import and bulk operations in the primary db? Will the standby db be a real DR site?

Thanks

Tom Kyte
October 23, 2004 - 9:05 am UTC

bulk operations and import both LOG by default, and import cannot really help but log in any case.

bulk operations (direct path load, insert /*+ append */, etc) generate REDO by default in archivelog mode -- they are DR safe.

it is when you set nologging on an object and perform an operation that can skip logging when nologging is set that you have to take care.

Laxman S Kondal, October 25, 2004 - 8:12 am UTC

Hi Tom

So I need to make sure that we do not set nologging on an object in primary database as long as we want standby as DR.

An important tip for a successful DG for DR.

Thanks

Tom Kyte
October 25, 2004 - 8:21 am UTC

or, you can set force logging at the database level in 9ir2 (so even if someone says "nologging", the database just ignores them and logs it anyway)
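In 9iR2 that is one statement on the primary (a sketch):

```sql
-- log everything, even if a segment is marked NOLOGGING
alter database force logging;

-- verify
select force_logging from v$database;
```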

Standby DB as real DR

Laxman S Kondal, October 25, 2004 - 9:12 am UTC

Right, that will be the ultimate solution for "NOLOGGING" in 9iR2 for ODG to be a real DR.
Thanks, your suggestions are always worthwhile.

Thanks

A reader, October 27, 2004 - 9:11 am UTC

Hi Tom,

I am following the documentation... However, please clarify these:

1. How to know if it is a physical or logical standby
2. I am sure the activate command performs failover; is that any different for a physical vs. logical standby db?
3. I think when activate is performed, the current standby db becomes prod. Say in 10 mins, if instantiate -v is given, a new standby will be created. What happens to the old prod db? All the datafiles, space -- where will it go?


Tom Kyte
October 27, 2004 - 9:25 am UTC

1) umm, you set it up didn't you? a logical standby instance would be up, open, read write. a physical standby will be either open read ONLY or in managed recovery mode.

so, if you can log into the standby and say create a table, it is a logical one.

2) chapter 7 covers this (9ir2 doc set chapter 7, might be different in different releases over time) It goes through all of the failovers

3) nowhere, it is not touched, it is your problem to deal with. you decide what happens to it.
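A hedged way to tell the two apart without creating anything (DATABASE_ROLE is in V$DATABASE from 9i on):

```sql
-- PRIMARY, PHYSICAL STANDBY or LOGICAL STANDBY
select database_role from v$database;

-- and for a physical standby: MOUNTED (managed recovery) or READ ONLY
select open_mode from v$database;
```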

A reader, October 27, 2004 - 11:40 am UTC

Tom,

After activate (standby@location B becoming production) could you please clarify these

1. If a sql is run on primary@location A, does it actually run on standby@location B?
2. The former primary stays idle. Nothing is performed on it. But then how are its datafiles updated (per timestamps) even after activate is done??
3. The former primary can never be made primary again. It is the old standby (standby@location B) that is the new primary. To create a new standby, instantiate -v has to be run.


Tom Kyte
October 27, 2004 - 12:31 pm UTC

the two databases are independent entities. there is no "primary", there is no "standby" anymore if you activate the standby (while still having the primary runnable). you have two databases up and running.


the former primary-- if it is still running -- is JUST A DATABASE. Think about it. they are separate entities. once you break the standby off and fire it up -- it is just another database. You'd have TWO independent databases running.



A reader, October 27, 2004 - 1:34 pm UTC

Thanks Tom,

That helped a lot. Lastly,

Prim A --- Standby B
Activate standby
Users will still be using the former primary (Prim A)


Following things happen now .....

a. Users are making some changes to (Prim A)
b. Changes are being made in standby B.
c. After the changes are complete, instantiate -v is used to create a new standby. (makes the primary-standby environment)

1. When instantiate -v makes the standby a new primary, what about the changes made by users to old primary?
2. How is the new standby made?

Please clarify these. Thanks for your patience!



Tom Kyte
October 27, 2004 - 1:37 pm UTC

you don't activate a standby until the primary has blown up, burnt down, washed away...................

so not really following you here. don't know why you would activate b, make changes, and then erase it.

A reader, October 27, 2004 - 1:54 pm UTC

Tom,

The standby is used for testing. There is not test box. I am not the designer of this idea -- but when I found out activate is being used, testing is done, and instantiate -v is used later, I thought I should look into it. Is something wrong with the concept? I know when activate should be used. The documentation makes it clear.


Tom Kyte
October 27, 2004 - 4:05 pm UTC

if this is test, i would not bother the production system at all. i would just restore a backup of prod to test, recover it -- open it, test.

just like shampoo --

lather
rinse
repeat


I don't see why you would impact the production box with the noise of having to support a standby that isn't a standby when all you need to do for test is restore and recover?

A reader, October 27, 2004 - 4:37 pm UTC

Tom, sorry for the typo. "There is no test box".
The standby is occasionally used for testing (there is a test instance on it that gets refreshed whenever some testing needs to be performed because there is no test box). In that case, it is activated, testing performed and instantiated back using instantiate -v. Otherwise in normal conditions it is a standby database. I am still confused what the old primary database will do (and current changes) after all this setup. Because I think the activated standby will be primary (after activate) and a new standby has to be created.

Tom Kyte
October 27, 2004 - 5:55 pm UTC

then it sounds reasonable.

Problem with having physical Standby DB on the same box as Primary

A reader, December 07, 2004 - 2:34 pm UTC

Hi Tom,

I am using Oracle 9.2 on HP-UX. 

I did the following:
1- Set the primary DB in archivelog mode
2- Shutdown the primary DB
3- Connect to the primary database and create the standby control file: ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/db/pdb/control/stbycf.ctl'
4- Copy the backup datafiles and standby control file from the primary db directory to the standby db directory: /db/pdb/data to /db/sdb/data and /db/pdb/control/stbycf.ctl to /db/sdb/control/stbycf.ctl
5- Set up TNS and listener
6- Start the standby instance and mount the standby database

SQL> startup nomount pfile=/db/9.2/sdb/pfile/initsdb.ora
ORACLE instance started.

Total System Global Area   72313448 bytes
Fixed Size                   735848 bytes
Variable Size              58720256 bytes
Database Buffers           12582912 bytes
Redo Buffers                 274432 bytes
SQL> alter database mount standby database;
alter database mount standby database
*
ERROR at line 1:
ORA-01102: cannot mount database in EXCLUSIVE mode

Both databases have the same SID name. and remote_login_passwordfile=NONE for standby DB and Exclusive for primary DB.

And if I use different SID name for standby DB, I can not mount the database with the way I created standby control file.
Total System Global Area   72313448 bytes
Fixed Size                   735848 bytes
Variable Size              58720256 bytes
Database Buffers           12582912 bytes
Redo Buffers                 274432 bytes
ORA-01103: database name 'SDBO' in controlfile is not 'SDBD'

My questions are:
1- Is it doable to have the standby DB on the same box with a different SID name? If so, how should I create the control file for the standby database?
2- If I use the same SID, how can I fix the 'cannot mount database in EXCLUSIVE mode' error? Can I have two databases on HP-UX with the same name?

Thank you for your outstanding and valuable information.
- Arash



 

Tom Kyte
December 07, 2004 - 8:50 pm UTC

have you followed the step by steps in the docs?

A reader, December 08, 2004 - 9:58 am UTC

Yes sir! I have followed step by step. Could you please tell me this:

1- can I have stdb with the same sid as primary on the same HP-UX box?
2- can I have stdb with the different sid name of primary on the same HP-UX box?

Thanks,
- Arash



Tom Kyte
December 08, 2004 - 10:52 am UTC

1) yes, oracle_home+oracle_sid = unique key, as long as they have different homes....

2) yes, the sid isn't really relevant here -- it is used to find files in the os and generate a key for the SGA but that is about it.

A reader, December 08, 2004 - 1:30 pm UTC

if oracle_home is the same, what would be the answers to these questions now:

1- Is it doable to have the standby DB on the same box with a different SID name?
If so, how should I create the control file for the standby database?
2- If I use the same SID, how can I fix the 'cannot mount database in
EXCLUSIVE mode' error? Can I have two databases on HP-UX with the same name?

Thank you so much for your time.
- Arash

P.S. I know it's almost useless to have the stdb and primary db on the same box with the same dbname+OracleHome. However, I am trying to learn and understand the stdb concept in more detail.


Tom Kyte
December 09, 2004 - 12:46 pm UTC

not really -- the sid is used to find files, directories. both would want the same spfile name, both would want the same init.ora name, both would want the same alert file names etc etc etc etc etc.




re: physical standby in same HOME as primary

Jeff, December 08, 2004 - 5:07 pm UTC

You can have the primary and standby database share the same home (on the same box or in a cluster) by using the init.ora parameter LOCK_NAME_SPACE to specify an "alternate" name for the second database.

Search in metalink for LOCK_NAME_SPACE.
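Jeff's suggestion, as a minimal sketch of the standby's init.ora (file names and paths hypothetical):

```
# initsdb.ora -- standby instance sharing the box/home with the primary
db_name         = pdb                          # must match the primary's db_name
lock_name_space = sdb                          # alternate lock name space so both can mount
control_files   = /db/sdb/control/stbycf.ctl   # the standby controlfile
```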

Ora-16000

totu, February 22, 2005 - 8:37 am UTC

Dear Tom.

I have a table created as:

create table t1 as select OBJECT_ID f1, OBJECT_ID + 200 f2 from all_objects;
then shutdown database,
then startup mount.
then alter database open read only;
then
1. Query: select * from t1 where f1 < 1 order by f1, f2; returns error:
ORA-16000: database open for read-only access,
2. But query select * from t1 where rownum < 10000 order by f1, f2; returns requested results.

Thanks in advance.

Tom Kyte
February 22, 2005 - 8:55 am UTC

well, not really a question phrased here, so, let me extrapolate one

"why do I sometimes get an ora 16000 for some queries and not others"

I'll guess, you have an "old fashioned" temporary tablespace. The first query wants to use temp; the second did not.

but tell us, how is temp set up

About the ora-16000 error post

Arun Gupta, February 22, 2005 - 9:18 am UTC

Tom,

What is "old fashioned" temporary tablespace?

Thanks



Tom Kyte
February 22, 2005 - 9:52 am UTC

not locally managed with tempfiles.

</code> http://docs.oracle.com/docs/cd/B10501_01/server.920/a96653/manage_ps.htm#1014517 <code>
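For contrast, a "new fashioned" temporary tablespace -- locally managed, using tempfiles -- might be created like this (a sketch; the path is hypothetical):

```sql
create temporary tablespace temp
  tempfile '/db/oradata/temp01.dbf' size 512m
  extent management local uniform size 1m;

-- make it the database default
alter database default temporary tablespace temp;
```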

but actually.....

I think we didn't get the entire story from Totu above, I wonder - is there an index in play and is the index "monitoring"


Totu -- do this

alter session set sql_trace=true;
run your query that fails
exit sqlplus

and review the trace file, tell us the dml you see in there.

SQL Trace Results

totu, February 23, 2005 - 12:42 am UTC

1. Table Indexes:
select index_name, table_name from dba_indexes where table_name = 'T1' and
owner = 'SCOTT';
no rows selected

2. SQL_Trace

select *
from
t1 where f1 < 1 order by f1, f2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 4.67 14.30 19465 21630 1 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 4.67 14.30 19465 21630 1 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 59
********************************************************************************

declare
l_text ora_name_list_t;
l_n number;
begin
if ( is_servererror(2291)
)
then
dbms_output.put_line( '--------------------' );
dbms_output.put_line( 'statment causing error: ' );
l_n := ora_sql_txt(
l_text );
for i in 1 .. nvl(l_text.count,0)
loop
dbms_output.put_line(
l_text(i) );
end loop;
dbms_output.put_line( 'error text: ' );
for i in 1
.. ora_server_error_depth
loop
dbms_output.put_line( ora_server_error_msg(i)
);
end loop;
dbms_output.put_line( '--------------------' );
end if;
end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)



********************************************************************************



Tom Kyte
February 23, 2005 - 2:05 am UTC

review the TRACE file, tell us what DML you see in there.

you want to read the TRACE file, you want to look for statements that modify the database in there.

Works

Totu, February 23, 2005 - 6:47 am UTC

Dear Tom. Thanks for help.
Really, I couldn't find any other lines that might be useful for you. But I again did shutdown + alter db open read only. Now it works and the result is correct:
no rows selected.

Any idea?

Tom Kyte
February 23, 2005 - 9:19 am UTC

nope, something changed - perhaps you dropped some other system trigger.

if it happens again, you know what to do.


althaf, February 23, 2005 - 7:57 am UTC

can you briefly explain the SOLUTION for the block corruption error,
i.e. ORA-01578

Tom Kyte
February 24, 2005 - 4:22 am UTC

go to metalink.oracle.com, the support website.

SALEEM, February 23, 2005 - 8:02 am UTC

Can we load data from another database (SQL Server, MS Access) into Oracle 9i?
If yes, then
please explain the steps.

Tom Kyte
February 24, 2005 - 4:23 am UTC

here are at least 3 ways..


</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4406709207206#18830681837358 <code>
would be one way, insert into oracle_table select * from table@sqlserver;

you can use the migration workbench, freely available on otn.oracle.com.

you could bcp the data out, sqlldr it back in

Different Database name

Tim, March 06, 2005 - 9:46 am UTC

Tom,
I have a question about standby databases.

Is it possible to have a different database name (not SID or service name) in the standby database than the primary database name? I know I can have a different SID but am not sure about the db name.

Thanks in advance.

Tim



Tom Kyte
March 06, 2005 - 3:02 pm UTC

physical standbys are bit by bit, byte by byte copies -- and the db name is in the datafiles. logical standbys could be different if you like as they are not bit by bit, byte by byte copies.

Logical VS Physical

Tim, March 07, 2005 - 10:53 am UTC

Well, my only option is a physical standby database as we are still on 8i. So can I take your answer to mean it is okay to have a different SID, but the database name must be the same, for a physical standby database? Could you please confirm?

Thanks very much !

Tim

Tom Kyte
March 07, 2005 - 7:47 pm UTC

by definition it will be, yes.

Recovery of standby database, if i have backup of primary only

Edgar, May 10, 2005 - 8:59 am UTC

Hi,

Right now i am working on a migration plan for one production system from the old server and disks to new ones. Also, a new release of a 3rd party application must be applied during migration.
One of the goals is to minimize downtime during migration.

The biggest problem for that goal is low network throughput (only 100Mbs between servers, but the database is 300GB big, with 10GB of redo daily).

My initial idea for that plan was:
"A"
----- Start day before downtime window ----
1.) set log_archive_dest2 to shared network disk on new server
2.) make disk-to-disk online backup of production (estimated time ~12hours) to shared network disk on new server
3.) get controlfile
SQL> alter database backup controlfile
4.) startup mount on new server, rename files
5.) start recovery 
SQL> recover database automatic using backup controlfile until cancel

---- Start of downtime at time X ----
6.) Shutdown production, cancel recovery on new server, shutdown. 
7.) Copy online redo, controlfile from old to new server
8.) startup mount, rename files again, recover, open with NORESETLOGS
9.) Apply new release of application, make sure all is O.K.
10.) If O.K. then open application interfaces; otherwise cancel migration, start production on the old server.
---- End of downtime, estimated downtime 1 hour ----

The alternative "B" i see, is to setup standby database on the new server, following Oracle documentation, with no downtime instead of steps 4,5. Then activate standby instead of step 8.

One problem with "B" i see is: just after entire migration there is no backup of the activated standby.

So, i must take a backup of the standby db before activation and before the start of downtime --
 or is there a possibility of using a backup of the primary to recover the standby?

Regards,
 


 

Tom Kyte
May 10, 2005 - 9:23 am UTC

yes, you can build standbys from backups of primary.

and the backups in a physical standby solution can come from either one -- they are mirror images of each other.

P.S.

Edgar, May 10, 2005 - 9:04 am UTC

Oracle version is 8.1.7.4, OS on old server WinNT, OS on new server Win2K

My question was, actually, about "Graceful failover"

Edgar, May 13, 2005 - 7:28 am UTC

Hi,

I found the answer to my question on the Metalink forum.

In order to use a backup of the primary to recover the standby after failover (activation), i should open the standby with noresetlogs.
It is called a "graceful failover".
It is impossible to do that with "alter database activate standby database".
Instead, i need to use the same technique as in case "A":
1. shutdown databases
2. transfer unarchived (online) redo, transfer/recreate controlfile
3. startup "standby" as a "normal" database, recover if needed, open with noresetlogs.

Anyway, i am about to set up a test of this scenario.

Sincerely,


Migration of database

Raj, May 18, 2005 - 1:52 am UTC

How do I migrate an Oracle 9i database from Windows to Unix?

Tom Kyte
May 18, 2005 - 8:50 am UTC

it'll involve a database rebuild -- exp, imp would be one method.

or, get to 10g and you can transport datafiles from operating system to operating system without having to rebuild.

not able to connect to standby database

reader, July 22, 2005 - 6:54 am UTC

Hi

We are trying to connect to the physical standby database (for reporting tasks) opened in READ ONLY mode. There is an UNDO tablespace present, and the undo segments are all online.
Can you please tell me the reason for this problem and how to solve it?



$ sqlplus dcmhl@ebsdbstd

SQL*Plus: Release 9.2.0.6.0 - Production on Fri Jul 22 16:35:17 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Enter password:
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01552: cannot use system rollback segment for non-system tablespace 'USERS'
ORA-06512: at line 2

Tom Kyte
July 22, 2005 - 9:10 am UTC

is auditing enabled?
any DDL triggers being fired off here?

it is read only, there shouldn't be any MODIFICATION dml being done that requires a rollback segment.
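One hedged way to check for both (the queries are generic dictionary lookups; adjust to your setup):

```sql
-- system event triggers (logon, DDL, etc.) that could fire DML on connect
select owner, trigger_name, triggering_event, status
  from dba_triggers
 where base_object_type in ('DATABASE', 'SCHEMA');

-- is auditing writing to the database?
show parameter audit_trail
```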

Thanks..

Puja, July 25, 2005 - 6:13 am UTC

We did have a database logon trigger - We have disabled it, and now are able to log in...

Thanks and regards,

Puja...

Problem in adding datafile

Ram, September 25, 2005 - 9:00 am UTC

I added a datafile to my indx tablespace on the primary, and it was created successfully. But the path which I had defined on the primary is not available on the standby; hence, the archive is not getting applied. I am getting the following error:
"ora 1274 - Cannot add datafile
some recovered datafile may be left media fuzzy
Background media recovery process shutdown"
The standby_file_management is set to auto. I am working on Oracle 9.2.0.6. Can I do the following:
1. Cancel the recovery mode by
alter database recover managed standby database cancel;
2. Create a datafile in index tablespace in standby database with different path
3. Shutdown the standby database
4. Set db_file_name_convert = ('primary file name', 'secondary file name') in the init.ora file
5. create new spfile
6. Start the secondary database in nomount
7. alter database mount standby database
8. Finally, alter database recover managed standby database disconnect from session;

Is this ok? Am I missing out on anything?

Regards,
Ram

Tom Kyte
September 25, 2005 - 9:47 am UTC

well, you would not really create a datafile in as much as copy over the existing datafile (need something to get started with, something that redo can be applied to)

You can modify the spfile directly, no need for the init.ora


I'd walk this through with support unless you have a test system to test with just to verify the step by steps - they are pretty good at situations like this. I don't have a data guard set up to play with.
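If the convert-parameter route is taken, DB_FILE_NAME_CONVERT is a static parameter, so on the standby it would be set in the spfile and picked up at the next startup -- something like this sketch (paths hypothetical; as noted above, verify the exact steps with support first):

```sql
alter system set db_file_name_convert = '/u01/oradata/', '/u02/oradata/'
  scope = spfile;
```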

DB Standby setup

jp, September 25, 2005 - 10:07 am UTC

Dear tom,

Can you point me a link where I can learn how to setup a standby DB in 9.2

Thanks



Tom Kyte
September 25, 2005 - 10:46 am UTC

Re:Problem in adding datafile

Ram, September 25, 2005 - 11:56 am UTC

Hi,
You had mentioned that

"you would not really create a datafile in as much as copy over the existing datafile"
Sorry, I did not really understand what you meant.
Since the drive on which I had created the datafile on the primary does not exist on the secondary db, I was getting the error. What do you suggest that I do now? How do I create the datafile?

Regards,
Ram

Tom Kyte
September 25, 2005 - 12:09 pm UTC

My point was you won't "create a new empty datafile on the standby", you would have to restore a datafile from the production machine (there are bits and bytes that would never find their way into the file otherwise).

I'm also suggesting, since I don't have a test environment set up right now for this, that you walk through this with support - unless you have a play/test environment to work through the precise steps - do not test/guess on the real thing.

Recovering standby database

Sanjaya Balasuriya, November 09, 2005 - 4:55 am UTC

Hi Tom,
I have created a standby database on the same host as the primary database for testing.

I'd like to know whether there are any ways to transfer archived logs to the standby site within RMAN.
And... how can we recover the standby database without user interaction? I'm using the "recover standby database" command, but it needs user interaction.

Thanks.

Tom Kyte
November 10, 2005 - 5:13 pm UTC

you don't use rman to transfer logs with data guard, arch and/or lgwr does it.


you put the database into managed recovery mode.

Have you read through the data guard docs?

</code> http://docs.oracle.com/docs/cd/B14117_01/server.101/b10823/toc.htm <code>
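Managed recovery mode is what removes the interactive prompting of plain "recover standby database" -- a sketch:

```sql
-- on the mounted standby: apply arriving redo automatically
alter database recover managed standby database disconnect from session;

-- and to stop the apply later
alter database recover managed standby database cancel;
```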


nologging and data guard 10gR2

A reader, November 10, 2005 - 1:47 pm UTC

Hi

I am going to implement data guard in a mission critical application. There are many batch processes which use nologging operations because this database is huge (2TB for an OLTP system) and the amount of data to be processed is huge as well; it extensively uses insert /*+ append */ into temporary tables. But if I use data guard these operations can't be used, correct? I am currently on 9iR2 but migrating to 10gR2 next month.

Thank you

Tom Kyte
November 11, 2005 - 11:58 am UTC

if you are planning on data guard, using non-logged operations isn't going to be advised, no.

The generation of redo on a system with sufficient, balanced IO will in general (especially with LARGE BULK OPERATIONS) not affect the performance of the individual statements. LGWR will write the generated redo out in the background whilst the large bulk statements are processing. It would only be if you have insufficient IO resources that this would be problematic (and network resources with data guard -- that'll be important too).


but you said "into temporary tables" - that'll be OK, they can be nonlogged, you don't care if they make it to the failover site do you?


nologging and data guard 10gR2

A reader, November 13, 2005 - 5:10 pm UTC

Hi

We use insert append to temporary tables (but not global temporary tables).

If I use nologging then I would see error messages in the alert log, correct? How can I tell Oracle to ignore those messages?

Tom Kyte
November 13, 2005 - 5:17 pm UTC

just put the permanent tables ( please don't call them temporary tables, they are not, they might be "work tables", "scratch tables" - but they are certainly permanent tables!) into their own tablespace(s), away from "real stuff"

and realize that when you failover - you'll need to basically truncate them all after starting up to "clean them out".

things in alert logs are already ignored by Oracle - whatever might go into the alert doesn't affect the running of the database, they are just messages to you from the database.


of course, test your failover a couple of dozen times to make sure you know everything you need to do and what the procedures will be.

checking primary and standby in sync 8.1.7.4

Susan, November 14, 2005 - 3:58 pm UTC

We use a script that checks v$log_history on the primary and standby and echoes out the max sequences. Does v$log_history get its information from the control file? Here's why I ask. I ran a hot backup on the primary a week ago to build a new standby on new hardware. I missed a couple of tablespaces, and I think I may have applied logs after mounting with missing tablespaces, because I received an error message that recovery will require a resetlogs. OK, I think I can fix this by just creating a new standby controlfile off the primary and applying the logs since the backup was run. I mount the standby and run my scripts to see which logs I need to bring over, and v$log_history is showing it's in sync with the primary -- but it really can't be, because the backup was from a week ago. So maybe I shouldn't use v$log_history because that pulls from the control file? Thanks.

Tom Kyte
November 15, 2005 - 7:20 am UTC

resetlogs is needed after an incomplete recovery.

You could try "patching" this - or doing it right from the get go.


I'm not sure of what your state is. recreating a controlfile doesn't sound like it will be part of your solution however.

A reader, November 17, 2005 - 12:19 pm UTC

Tom,

Can we run queries against the standby database (i.e., can we use the standby database for reporting purposes -- just queries)?

Thanks


Tom Kyte
November 18, 2005 - 9:37 am UTC

Yes, both a physical and logical standby support that.

A physical standby may be in one of two modes:

a) managed recovery mode - no queries, applying redo.
b) open read only, accumulates redo but does not apply it

You can be in mode (b) during some reporting window and mode (a) all other times.


A logical standby is always open read/write (you read the data from the production site, but you can write information into other new tables such as audit trails and the like)
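Cycling a physical standby between the two modes for a reporting window might look like this (a sketch; a conservative sequence -- a full bounce is not always strictly required):

```sql
-- start of reporting window: stop the apply, open read only
alter database recover managed standby database cancel;
alter database open read only;

-- end of window: back to managed recovery
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect from session;
```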

Error in trace file

Ram, November 28, 2005 - 6:35 am UTC

Hi,
I am seeing this message in my standby database bdump trace file:
*** 2005-11-27 03:39:22.906
*** SESSION ID:(14.4898) 2005-11-27 03:39:22.890
SGA initialization / DB open not complete even after 5 minutes, QMN0exiting
error 604 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database not open: queries allowed on fixed tables/views only
Dump file f:\database\bdump\bv_qmn0_1052.trc

There is no error message in the alert logs, as well as in the primary db. Standby is running fine and the logs are getting applied. What could be the problem ?

Tom Kyte
November 28, 2005 - 7:47 am UTC

looks like you have advanced queues set up (aq_tm_processes != 0) and that process finds that it cannot start and thus exits.

AQ and physical standby on 10.1

Charlie, November 28, 2005 - 2:07 pm UTC

alter system set aq_tm_processes = 0 scope=both;

From Oracle Support.

aq_tm_processes

Ram, November 28, 2005 - 11:55 pm UTC

Hi,
Thanks. I am working on Oracle 9.2.0.6 and aq_tm_processes is set to 1 on both primary and standby. Should I set it to 0?


Thanks and Regards,
Ram

Tom Kyte
November 29, 2005 - 12:08 am UTC

if you don't want this message, sure, do so on the standby instance - but remember, if you fail over..... you need to reset it back to 1.

guard_status

Matthias Rogel, November 29, 2005 - 9:18 am UTC

SYS> select guard_status from v$database;

GUARD_S
-------
ALL


TEST> select count(*) from all_objects;
select count(*) from all_objects
*
ERROR at line 1:
ORA-16224: Database Guard is enabled


TEST> alter session disable guard;
ERROR:
ORA-01031: insufficient privileges


1. why does a select on all_objects give ORA-16224
(select count(*) from all_tables; doesn't)

2. which privilege is needed for
alter session disable guard;

Version is 10.2.0.1.0.

Tom Kyte
November 30, 2005 - 10:33 am UTC

contact support and reference bug 4663793

privilege?

Matthias Rogel, November 30, 2005 - 10:53 am UTC

Hallo Tom,

thx very much for mentioning 4663793

However, I still don't know which privilege to grant in order to be able to disable guard during a session. grant dba works, however I don't want to grant dba to everyone.

doc states:
(</code> http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2012.htm#SQLRF00901 <code>
"You do not need any privileges to perform the other operations of this statement unless otherwise indicated."

Nothing else is indicated.
What am I missing ?


Tom Kyte
November 30, 2005 - 3:01 pm UTC

ops$tkyte@ORA10GR2> drop user a cascade;

User dropped.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> grant create session, alter database to a identified by a;

Grant succeeded.

ops$tkyte@ORA10GR2> connect a/a
Connected.
a@ORA10GR2>
a@ORA10GR2> alter session disable guard;

Session altered.


looks like a doc bug if it did not mention that. 

thank you

Matthias Rogel, November 30, 2005 - 3:59 pm UTC


Bulk data loading on Primary database

Charlie Zhu, November 30, 2005 - 6:19 pm UTC

Hi Tom,

What's your strategy for bulk data loading
using direct path insert from SQL*Loader or an external table?

I can load a 4GB flat file in 10 minutes on a 2-CPU, 2GB RH Linux 3 box,
Oracle 10.1.0.4,
with /*+ append nologging */; it may take 10 hours with table logging enabled - I have not tested that yet
(I know you ask for design, implement, test and deploy).

Based on the physical standby user guide:
1. Make the database/tablespace nologging
2. Load the data
3. Copy the tablespace/datafiles to the standby site

Is this the only way to do it?

Thanks, Charlie

Tom Kyte
November 30, 2005 - 9:07 pm UTC

nologging is not a hint, it is an attribute of the table, alter table t nologging and then use the /*+ append */ hint on it.


I doubt, with correctly implemented IO, that loading 4gb of data would take 10 hours.

I say "try it first" before you try to "tune something that doesn't need it"
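
Tom's distinction between the attribute and the hint, as a sketch (the table name t and external table ext_t are illustrative):

```sql
-- NOLOGGING is a table attribute, not a hint:
alter table t nologging;

-- APPEND is the hint; it requests a direct-path insert, which can skip
-- most redo generation when the table has the NOLOGGING attribute:
insert /*+ append */ into t
  select * from ext_t;   -- e.g. an external table over the flat file
commit;

-- restore normal logging afterwards if the table should be recoverable:
alter table t logging;
```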

JDBC row base data load took a few days

Charlie Zhu1, November 30, 2005 - 6:47 pm UTC

It took a few days to load the same 4GB flat file with a row-by-row Java process,
as tested by our Java developers.

Tom Kyte
November 30, 2005 - 9:08 pm UTC

then they did something HORRIBLY wrong. I think in a few days I could load 4gb of data using sqlplus and a keyboard - typing it in :)

Creating physical standby database

Olaf, December 19, 2005 - 10:48 am UTC

I am trying to create a standby database following the Oracle documentation. As far as I understand, I should NOT copy the online redo logs from the primary to the standby site during creation. How can I open the standby in read-only mode, or in read/write mode in a disaster (by activating the standby database), if I have NO online redo logs?

TID


Tom Kyte
December 19, 2005 - 12:15 pm UTC

when you activate (failover) you'll open resetlogs, which will create them.

The standby can either have just ARCHIVED logs or - with recent software - it can in fact have standby redo logs for real-time application of log data, but this is not necessary.
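
For illustration, the failover path Tom refers to - activating the standby does an implicit resetlogs, which creates the online redo log files if they are missing. A simplified sketch; the exact procedure varies by version:

```sql
-- On the standby, after applying all available redo:
alter database recover managed standby database cancel;
alter database activate standby database;   -- implicit RESETLOGS
alter database open;
```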

Data Guard supports Drop User?

Luong, January 14, 2006 - 6:37 am UTC

Hi Tom,

We have installed a physical standby database with 9.2.0.7 and it runs fine. It's great.

However, in some cases we might need to drop and re-create users in the primary database and import data for those users from dmp files. In that case does the standby still work? Are all changes imported from dmp files propagated to the standby database?

Thanks,
Luong.

Tom Kyte
January 15, 2006 - 3:28 pm UTC

yes, everything is. physical standby is bit for bit, byte for byte a replica of the original site. It is just like a restored backup. It is so much an exact replica that you can take the datafiles from the standby and use them to restore on the production site (like backups)

Can we create standby database from a hot backup

Amit, February 17, 2006 - 6:25 am UTC

Tom,
I wish to create a standby database. Is it possible to create it from a hot backup of last week plus the archived logs from that period till now?



Tom Kyte
February 17, 2006 - 2:06 pm UTC

see Note 278371.1 in metalink to get started...

Logging Clarification

Vikas, February 27, 2006 - 9:35 am UTC

Hi Tom,

Just to be safe, wanted to know if I have understood the concept of Logging under the DG environment, for MAXIMUM PERFORMANCE.

The database as itself has been running in the FORCED_LOGGING mode.

SQL> Select force_logging from v$database;
FOR
---
YES

So any operation on this database - a bulk operation, an index rebuild with the NOLOGGING clause, or a local index rebuild with UNUSABLE INDEXES - will not only log the dictionary effects but also log the index entries, to be propagated to the standby server, where they will be applied.

Is my understanding correct, or do I have to take extra care on the primary database that no user uses the NOLOGGING keyword in any of these statements, else the standby gets corrupted as it would not be in sync with its primary?

Thanks
 

Tom Kyte
February 27, 2006 - 10:58 am UTC

correct, when force logging is on - then everything is forced to log.
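
For reference, force logging is enabled and verified like this:

```sql
-- On the primary: force redo generation even for NOLOGGING operations
alter database force logging;

-- Verify:
select force_logging from v$database;
```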

Logging, truncation and Data Guard

Mark Gold, March 28, 2006 - 12:00 am UTC

So if I then truncate a table is this logged and propagated to the standby? My standby is a physical standby in maximum performance mode.

Tom Kyte
March 28, 2006 - 7:50 am UTC

the changes to the dictionary are - and hence the table will likewise be truncated on the remote standby site as well.

Export on Physical standby database

Amar, April 11, 2006 - 9:43 am UTC

Hi Tom,
Can you please help me know that is it possible to do a full/schema level export from a physical standby database when it is opened in read-only mode.

Tom Kyte
April 11, 2006 - 4:14 pm UTC

yes, you can use exp on a read only database.
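
A sketch of opening a physical standby read only before running the export (the exp command line shown in the comment is illustrative):

```sql
-- Stop managed recovery, then open read only:
alter database recover managed standby database cancel;
alter database open read only;

-- Then from the OS, something like:
--   exp system/password owner=scott file=scott.dmp
```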

Switchover Vs Failover

Vikas, April 20, 2006 - 12:29 pm UTC

Hi Tom,

Just curious question?

We are trying an exercise in which the datacenter housing the company's primary database will simply be switched off for an hour.

Just before this we will get paged to switch over to the standby database (say we are allowed 10 minutes to do this).

Many of my colleagues argue that in this time we need to copy the online redo log files to the standby machine and register them, but my point is: why copy? Instead, issue ALTER SYSTEM ARCHIVE LOG CURRENT and the log transport service will automatically ship everything to the standby database.

As soon as the log is applied on the standby database, the two are in sync as of that time, and the datacenter goes off.

Now if we had to act in a real situation, and the datacenter had burned down, then of course we would need to fail over and activate the standby to behave as primary.

But since we know that the datacenter has just gone off due to a power outage and will be back up in the next hour, can we do a switchover so the standby acts as primary and the standby database need not be created again?

My questions, again:

1. Can we do the switchover? If yes, what will happen when the primary comes up after 1 hour? Auto-start of the instance will bring the database back to the state where it was left, opened READ WRITE.

I think we can't do the switchover unless we have the primary converted to a standby; we can live with 2 standbys and no primary for a moment, but not 2 primaries in any case.

Is that justified?

Your thoughts on what needs to be done during this exercise, so that it looks to us like a real exercise, but without doing a failover and thus re-creating the standby again.

Thanks

Thanks





Tom Kyte
April 20, 2006 - 12:38 pm UTC

switchovers are "graceful", they sync up - switch the users over - and reverse the roles. Then you would do it in reverse to put the production back the way it was.

Funny, you should have already done this quite a few times before this, when testing. You do not want to find out how to switchover or failover the DAY YOU NEED TO. This is like not restoring your backups on a test machine (just to show you actually a) can, b) know how to).


Time to setup your test environment and start testing this till you can do it in your sleep.
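
The switchover itself boils down to a pair of statements - a simplified sketch (consult the Data Guard documentation for your version for the full procedure and prerequisites):

```sql
-- On the current primary:
alter database commit to switchover to physical standby;

-- On the current standby, once the final redo has been applied:
alter database commit to switchover to primary;
alter database open;

-- Then run the same procedure in reverse to switch back.
```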

Rebuilding Standby Database.

Vivek, April 26, 2006 - 4:56 pm UTC

Tom,
I have a standby database on a different server than the primary for my client. Recently we found that the standby DB is not getting updated from the primary and has been out of sync for the last 3 months. I have to rebuild the standby DB. I want to confirm with you that the following steps are correct:
1. Move the hot backup from the primary site to the standby.
2. Move the archived redo logs from the primary site for the period to the standby.
3. Mount the standby DB.
4. Recover automatic standby DB.
The following steps I don't think need to be performed (please confirm):
a) Make a standby control file, assuming that there are no changes on the primary.
b) Copy the redo log files from the primary.

Thanks

Vivek



Tom Kyte
April 27, 2006 - 2:47 pm UTC

well, answer this first:

how did you build it in the first place (and secondarily - 3 months!!! it took you 3 months to find this???)

Rebuilding Stand by DB

A reader, April 27, 2006 - 5:31 pm UTC

It was built a couple of years back, and from the documents available to me it was built from a cold backup of the primary and then moving the archive logs via ftp to the standby site. The redo logs are on raw devices; they are copied to the standby site as well. Oracle 9i Standard Edition is installed on this. We regained the contract for the site this year and that's how we came to know.



Tom Kyte
April 28, 2006 - 1:49 am UTC

Ah, so this isn't really data guard - this is a "do it yourself" standby. Then I cannot really advise, because I don't know what software you developed yourself to facilitate this (SE doesn't have data guard).

online redo logs would never be part of this situation - you would only be using archives. Copying the online redo logs would only be done in the event of a failure of the primary system - that is, if you can get access to them.



Rebuilding Standby DB

vivek, May 01, 2006 - 5:06 am UTC

Yes, this setup is on Standard Edition, so no "DATA GUARD" - it's all a manual setup. That's why I was confirming with you the steps required to bring the standby DB in sync with the primary, i.e.
1. Move the latest hot backup from the primary site to the standby.
2. Move the archived redo logs from the primary site for the period to the standby.
3. Mount the standby DB.
4. Recover automatic standby DB.
5. Check that all archive logs have been applied.

Thanks Tom.



Tom Kyte
May 02, 2006 - 2:37 am UTC

see
http://www.niall.litchfield.dial.pipex.com/
for "do you really need dataguard" for the step by steps on setting up SE this way.
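
For a manual (Standard Edition) standby, the refresh cycle typically runs along these lines - a sketch, assuming the backup and archived logs have already been copied over:

```sql
startup nomount;
alter database mount standby database;

-- Apply the copied archived logs; cancel once caught up:
recover automatic standby database;
```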

V$log for Standby Database

Vikas, May 15, 2006 - 4:18 am UTC

Hi Tom,

Can you please explain why V$LOG on the standby database shows CLEARING and CLEARING_CURRENT as the values of the STATUS column.

The status CLEARING means the log is being re-created after an ALTER DATABASE CLEAR LOGFILE statement is issued,

and CLEARING_CURRENT means the current log is being cleared.

I have seen that the log being applied by the recovery process MRP0 gets the status CLEARING_CURRENT and the rest of them have a status of CLEARING.

Can you please clarify this doubt!

Switch back to primary site after activating standby

A reader, May 16, 2006 - 11:27 am UTC

Tom,

We are using a physical standby database at a remote site connected via a 1 Mbit WAN line. The db is Oracle 10gR2 Standard Edition. The database is fairly big (30GB), but does not generate much archive log.

Is there any way to re-create the primary database after activating the standby and making the standby the prod database, without copying the whole standby database through the WAN?
I saw some scenarios to recover through resetlogs. Activating the standby does a resetlogs as well. So is it possible to recover the primary and apply the archive logs from the standby site?

Tom Kyte
May 16, 2006 - 11:41 am UTC

if you do a graceful switchover - yes, that is what switchovers allow for.

if you failover, in general "no".

10gR2 Failover

A reader, May 16, 2006 - 3:27 pm UTC

If the primary has "flashback database" enabled, then it could allow the option of flashing back the primary DB to a point in time before the standby was activated. At that point, you can use the standby's redo logs to recover through the resetlogs / activation point. Data Guard broker may have a nice UI to make that operation fairly straightforward. But these nice features are part of Enterprise Edition and not Standard Edition.

Haven't tried any of this myself though :-).

ORA-600 On standby Database

Vikas, June 28, 2006 - 1:20 pm UTC

Hi Tom,

I am getting an ora-0600 error on my standby database and MRP is not getting activated

Media Recovery Log /arch-01/databases/admw/redolog-9990-1-583227512.arc
Wed Jun 28 17:00:25 2006
Errors in file /dumps-01/databases/admw/bdump/admw_mrp0_27138.trc:
ORA-00600: internal error code, arguments: [kcrfr_resize2], [71778816], [72351744], [67096576], [], [], [], []
MRP0: Background Media Recovery terminated with error 600
Wed Jun 28 17:00:26 2006
Errors in file /dumps-01/databases/admw/bdump/admw_mrp0_27138.trc:
ORA-00600: internal error code, arguments: [kcrfr_resize2], [71778816], [72351744], [67096576], [], [], [], []
Recovery interrupted!
Wed Jun 28 17:00:26 2006
Errors in file /dumps-01/databases/admw/bdump/admw_mrp0_27138.trc:
ORA-00600: internal error code, arguments: [kcrfr_resize2], [71778816], [72351744], [67096576], [], [], [], []
Wed Jun 28 17:00:26 2006
Errors in file /dumps-01/databases/admw/bdump/admw_mrp0_27138.trc:
ORA-00600: internal error code, arguments: [kcrfr_resize2], [71778816], [72351744], [67096576], [], [], [], []
Wed Jun 28 17:00:26 2006
Completed: alter database recover managed standby database d
Redo Shipping Client Connected as PUBLIC

Please let me know what can be done to get this process activated, if not how can we best break the relationship between Primary & standby database so that the Standby database can be configured again sometime afterwards.

Thanks


Tom Kyte
June 28, 2006 - 4:56 pm UTC

please utilize support for support issues like this...

Ver 8.1.5.

Sen, August 25, 2006 - 10:11 pm UTC

Dear tom,

Can you please point me to a link where I can learn how to set up a standby DB in 8.1.5.

Thanks


online redo logs for standby database

A reader, September 25, 2006 - 9:14 am UTC

Hi Tom,

could you please explain the following context

=========cut===============
The SQL ALTER DATABASE statement used to perform the switchover automatically creates online redo logs if they do not already exist. This might significantly increase the time required to complete the COMMIT operation. Therefore, Oracle Corporation recommends that you always manually add online redo logs to the target standby database when you create it. Use one of the following methods to manually add the online redo logs if they do not already exist:

-Copy the existing online redo logs from the initial primary database site to the target standby database site and define the LOG_FILE_NAME_CONVERT initialization parameter to correctly associate the standby site path names to the new online redo logs (see Section 3.2.6).
-Drop any existing online redo logs at the target standby site and create new ones using the ALTER DATABASE ADD STANDBY LOGFILE statement
================end====================

my question:
what is the relation between the standby redo logs and the online redo logs ?

i guess standby redo logs are used when the database is in the standby role, but once the database is switched over to the primary role, the standby redo logs won't be used and the database will need online redo logs - am i correct?

will the database automatically create my online redo logs at switchover, or should i create them manually (could you please tell me how to create them manually)?

many thanks



Tom Kyte
September 25, 2006 - 3:02 pm UTC

normally, the redo is shipped to a standby and put into the archives directly and then applied.

therefore, the standby need not have "their online redo logs", but they are recommending you create them so that they are there when you need them - instead of having to create them when you need them.

the standby redo logs are the online redo logs for the standby, they need not exist until you open the standby for read/write.
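
Standby redo logs are added on the standby with ADD STANDBY LOGFILE - a sketch (paths and sizes are illustrative; size them to match the primary's online redo logs):

```sql
alter database add standby logfile
  ('/u01/oradata/stby/srl01.log') size 100m;
alter database add standby logfile
  ('/u01/oradata/stby/srl02.log') size 100m;
```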


Online logs vs. standby logs

Roderick, September 25, 2006 - 3:22 pm UTC

Actually the difference is documented in section 2.5.2 of Oracle® Data Guard Concepts and Administration
10g Release 2 (10.2)
Part Number B14239-04
A standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database.
It then goes on to describe the Data Guard configurations where standby redo logs are required.

more details please

A reader, September 25, 2006 - 4:48 pm UTC

Hi Tom,
does that mean that when i switch over (to make my standby database the primary) the standby redo logs are converted to normal online redo logs? i used the word STANDBY while creating those standby redo logs, so that looks confusing.

will you please shed some light?

Tom Kyte
September 26, 2006 - 2:07 am UTC

see Rodericks comments above and a pointer to some documentation

physical standby DB lagging behind,

A reader, October 15, 2006 - 12:39 pm UTC

Apparently the max(sequence#) from v$log on primary database is 200 more than max(sequence#) in v$archived_log where applied='YES' on physical standby database.

On primary
SQL> select max(sequence#),thread# from v$log group by thread#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
         22540          1
         27343          2

On STANDBY,
SQL> l
  1  select * from
  2  (select dest_id,thread#,sequence#,applied,status,first_time,next_time from v$archived_log
  3* order by sequence# desc) where rownum < 32
SQL> /

   DEST_ID    THREAD#  SEQUENCE# APPLIED   STA FIRST_TIME                 NEXT_TIME
---------- ---------- ---------- --------- --- -------------------------- --------------------------
         2          2      27305 NO        A   13-oct-2006 00:22:03       13-oct-2006 01:22:04
         1          2      27305 NO        D   13-oct-2006 00:22:03       13-oct-2006 01:22:04
         2          2      27304 NO        A   12-oct-2006 23:22:02       13-oct-2006 00:22:03
         1          2      27304 NO        D   12-oct-2006 23:22:02       13-oct-2006 00:22:03
         2          2      27303 NO        A   12-oct-2006 22:22:03       12-oct-2006 23:22:02
         1          2      27303 NO        D   12-oct-2006 22:22:03       12-oct-2006 23:22:02
         2          2      27302 NO        A   12-oct-2006 21:22:17       12-oct-2006 22:22:03
         1          2      27302 NO        D   12-oct-2006 21:22:17       12-oct-2006 22:22:03
         2          2      27301 NO        A   12-oct-2006 20:22:03       12-oct-2006 21:22:17
         1          2      27301 NO        D   12-oct-2006 20:22:03       12-oct-2006 21:22:17
         2          2      27300 NO        A   12-oct-2006 19:22:05       12-oct-2006 20:22:03


The FIRST_TIME and NEXT_TIME are still pointing to 13th Oct.  I guess the archive log files are not being applied to the physical standby database.

In this case, how do we find out what the problem is?
And how do we fix it?

I am on a 9.2.0.6 DB.

If rebuilding the DR is the solution, how do we do it without bringing down the primary database?

thanks,
 

Tom Kyte
October 15, 2006 - 1:12 pm UTC

this could be something support would be really good at doing.... don't you think? Helping you with an install/configuration type of issue....
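
That said, for reference, two common first checks on a physical standby that appears to be lagging (a sketch; view contents vary by version):

```sql
-- Any gap in the received archive logs?
select * from v$archive_gap;

-- Highest applied sequence per thread on the standby:
select thread#, max(sequence#) last_applied
  from v$archived_log
 where applied = 'YES'
 group by thread#;
```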

ora-00354

Parag Jayant Patankar, November 06, 2006 - 11:34 am UTC

Hi Tom,

I am in the process of creating a physical standby database with standby redo logs on 9.2.0.3 on a Windows box. I am facing ORA-00354 on the standby database. I am not able to trace the cause of the error.

Primary Init File

*._always_anti_join='OFF'
*._always_semi_join='OFF'
*.aq_tm_processes=1
*.background_dump_dest='F:\oracle\admin\orcl\bdump'
*.compatible='9.2.0.0.0'
*.control_files='F:\oracle\oradata\orcl\CONTROL01.CTL','F:\oracle\oradata\orcl\CONTROL02.CTL','F:\oracle\oradata\orcl\CONTROL03.CTL'
*.core_dump_dest='F:\oracle\admin\orcl\cdump'
*.db_block_size=8192
*.db_cache_size=367001600
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='orcl'
*.java_pool_size=33554432
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_buffer=10485760
*.open_cursors=300
*.optimizer_index_caching=90
*.optimizer_index_cost_adj=10
*.optimizer_max_permutations=1000
*.pga_aggregate_target=524288000
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=838860800
*.shared_pool_size=367001600
*.sort_area_size=10485760
*.star_transformation_enabled='FALSE'
*.timed_statistics=FALSE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='F:\oracle\admin\orcl\udump'

### PRIMARY DATABASE PRIMARY ROLE PARAMETERS
*.log_archive_dest_1='location=F:\payprodarchive MANDATORY'
*.log_archive_dest_2='SERVICE=dr LGWR SYNC AFFIRM'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format=%d_%t_%s.arc
*.log_archive_start=TRUE
*.remote_archive_enable=TRUE

### PRIMARY DATABASE STANDBY ROLE PARAMETERS
*.FAL_SERVER=dr
*.FAL_CLIENT=orcl
*.STANDBY_ARCHIVE_DEST=F:\payprodarchive
*.standby_file_management=auto

Standby init file

*._always_anti_join='OFF'
*._always_semi_join='OFF'
*.aq_tm_processes=1
*.compatible='9.2.0.0.0'
*.db_block_size=8192
*.db_cache_size=367001600
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.java_pool_size=33554432
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_buffer=10485760
*.open_cursors=300
*.optimizer_index_caching=90
*.optimizer_index_cost_adj=10
*.optimizer_max_permutations=1000
*.pga_aggregate_target=524288000
*.processes=150
*.query_rewrite_enabled='FALSE'
*.sga_max_size=838860800
*.shared_pool_size=367001600
*.sort_area_size=10485760
*.star_transformation_enabled='FALSE'
*.timed_statistics=FALSE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'

### STANDBY DATABASE STANDBY ROLE PARAMETERS
*.control_files='D:\PAYPRODDR\CONTROL09.CTL'
*.db_file_name_convert=('f:\oracle\oradata\orcl\','d:\payproddr\')
*.log_file_name_convert=('f:\oracle\oradata\orcl\','d:\payproddr\')
*.log_archive_dest_1='location=d:\payproddr\Archive MANDATORY'
*.log_archive_dest_state_1='enable'
*.log_archive_format=%d_%t_%s.arc
*.log_archive_start=TRUE
*.FAL_SERVER=orcl
*.FAL_CLIENT=dr
*.STANDBY_ARCHIVE_DEST=d:\payproddr\Archive
*.standby_file_management=auto
*.background_dump_dest='c:\oracle\admin\orcldr2\bdump'
*.core_dump_dest='c:\oracle\admin\orcldr2\cdump'
*.user_dump_dest='c:\oracle\admin\orcldr2\udump'
#*.remote_login_passwordfile='EXCLUSIVE'
#*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.remote_archive_enable=TRUE
*.lock_name_space='orcldr2'
*.instance_name='orcldr2'

### PRIMARY DATABASE STANDBY ROLE PARAMETERS
#*.log_archive_dest_2='location=F:\payprodarchive MANDATORY'
#*.log_archive_dest_state_2='enable'

after adding standby redo logs on the standby, if I shut down and start up the standby database, I get the following error in the alert log of the standby site

Mon Nov 06 20:41:32 2006
RFS: Successfully opened standby logfile 4: 'D:\PAYPRODDR\STANDBYLOGFILES\LOGFILE1.LOG'
Mon Nov 06 20:41:32 2006
ARC1: Evaluating archive log 4 thread 1 sequence 111
ARC1: Beginning to archive log 4 thread 1 sequence 111
Creating archive destination LOG_ARCHIVE_DEST_1: 'D:\PAYPRODDR\ARCHIVE\1133324794_1_111.ARC'
Mon Nov 06 20:41:32 2006
RFS: Standby redo log 4 thread 1 sequence 111 is being archived
Mon Nov 06 20:41:32 2006
ARC1: Log corruption near block 116 change 0 time ?
ARC1: All Archive destinations made inactive due to error 354
Mon Nov 06 20:41:33 2006
Errors in file c:\oracle\admin\orcldr2\bdump\orcldr2_arc1_1384.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 116 change 0 time 11/06/2006 20:37:02
ORA-00312: online log 4 thread 1: 'D:\PAYPRODDR\STANDBYLOGFILES\LOGFILE1.LOG'

ARC1: Archiving not possible: error count exceeded
ARC1: Failed to archive log 4 thread 1 sequence 111
Clearing standby logfile 4 thread 1 sequence 111 due to error 354
ARCH: Archival stopped, error occurred. Will continue retrying
Mon Nov 06 20:41:33 2006
RFS: Successfully opened standby logfile 4: 'D:\PAYPRODDR\STANDBYLOGFILES\LOGFILE1.LOG'
Mon Nov 06 20:41:33 2006
ARCH:
Mon Nov 06 20:41:33 2006
ORA-16038: log 4 sequence# 111 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 4 thread 1: 'D:\PAYPRODDR\STANDBYLOGFILES\LOGFILE1.LOG'

Mon Nov 06 20:41:33 2006
Errors in file c:\oracle\admin\orcldr2\bdump\orcldr2_arc1_1384.trc:
ORA-16038: log 4 sequence# 111 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 4 thread 1: 'D:\PAYPRODDR\STANDBYLOGFILES\LOGFILE1.LOG'

Mon Nov 06 20:41:40 2006
Media Recovery Log D:\PAYPRODDR\ARCHIVE\1133324794_1_111.ARC
Media Recovery Waiting for thread 1 seq# 112 (in transit)
Mon Nov 06 20:44:40 2006
Archiver process freed from errors. No longer stopped
Mon Nov 06 20:46:24 2006
alter database recover managed standby database cancel
Mon Nov 06 20:46:27 2006
MRP0: Background Media Recovery user canceled with status 16037
MRP0: Background Media Recovery process shutdown
Mon Nov 06 20:46:28 2006
Managed Standby Recovery Cancelled
Completed: alter database recover managed standby database ca
Mon Nov 06 20:46:44 2006
alter database open read only
Mon Nov 06 20:46:44 2006
SMON: enabling cache recovery
Mon Nov 06 20:46:45 2006
Database Characterset is WE8MSWIN1252
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: alter database open read only
Mon Nov 06 20:47:00 2006
***Warning - Executing transaction without active Undo Tablespace
Mon Nov 06 20:47:04 2006
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Nov 06 20:48:06 2006
***Warning - Executing transaction without active Undo Tablespace
Mon Nov 06 20:52:16 2006
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Nov 06 20:53:19 2006
***Warning - Executing transaction without active Undo Tablespace
***Warning - Executing transaction without active Undo Tablespace

Kindly help me with the ORA-00354 error.

thanks & regards
PJP








Tom Kyte
November 06, 2006 - 11:48 am UTC

please use support for install/configuration stuff.

Alex, December 04, 2006 - 2:18 pm UTC

Hi Tom,

I'm trying to understand the differences between streams and dataguard and when each should be used.

If I want an HA failover site, what problems could I encounter by using streams for this? E.g., what kind of disaster scenario might "hose" me, where dataguard would save the day? Thanks as always.

Tom Kyte
December 05, 2006 - 9:33 pm UTC

failover - dataguard - period, no questions. DG uses streams as a foundation technology


data guard IS failover

streams is NOT.

Alex, December 05, 2006 - 10:29 pm UTC

Ok well thank you, and I do understand that. However, I was hoping you could explain why this is. I ask because I would like to be able to discuss the issue intelligently and share ideas with co-workers, (and not just say "Because Tom Kyte says so.....). I have the general idea of how they both work, I'm just missing the piece to makes the light bulb come on.

I appreciate your response, but it's kind of like asking for help on a math question and just getting the answer. You don't learn anything that way.

With backups for example, you would say exports are not backups because you cannot apply archive logs to them thus rolling forward to the up to date changes. I'm looking for that type of a reason for these two technologies. I hope I made sense and thanks for the time.

Tom Kyte
December 06, 2006 - 9:36 am UTC

the explanation is simple:

data guard is a product designed to provide for a failover site.

streams is a feature you could use to build a product like data guard.




Here is analogy:

Oracle is a database, you could build an HR system with Oracle. Oracle the database however is not an HR system. It is a core technology you could use to BUILD an HR system.


Streams is a data sharing technology, you could build a failover solution with streams. Streams the technology however is not failover. It is a core technology you could use to BUILD a failover solution with.




That is the only answer - data guard IS FAILOVER. streams IS NOT FAILOVER.

Alex, December 06, 2006 - 11:17 am UTC

Ah, it finally clicked. I think I left out a crucial point: our environment is using streams combined with RAC. So RAC would fail over to the other instance, where streams is piping the data to.

I see your point clearly now - streams does no failing over of any kind, and I knew that; I just forgot RAC was providing the failover.

But that would be instance failover, not database failover. So I really don't know what we have at the moment, other than it's wrong and it should be data guard.

Temp tables of standby

Salee, October 23, 2007 - 7:07 pm UTC

If the temp tablespace of the standby has more files and different names than the production DB, will there be any impact due to this setting?

standby database

Rahul, December 20, 2007 - 5:30 am UTC

Please send me the complete steps to create a standby database in Oracle 9i.

Thanks
Tom Kyte
December 20, 2007 - 10:11 am UTC

there is only one step (aren't you lucky :) )

read this:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96653/toc.htm

from start
to finish

understand it


and then realize one could not even begin to send you the steps since you just said "standby" and that is ambiguous on the face of it - logical or physical?

In any case, there is an entire chapter with the STEP BY STEPS for each in that document.

online redo logs for standby database

Manjula, April 06, 2008 - 2:50 pm UTC

Tom,
Can you give me a more detailed clarification on Roderick's comments and his pointers to the documentation? I am still confused about the roles of online redo log files and standby redo log files.

My Primary DB is 10.2.0.2
- Created the physical standby DB using RMAN (duplicate target database for standby dorecover nofilenamecheck;)
- It was created successfully. The only problem is the logfile views, which show online redo log file information even though the files do not physically exist.
- I have also created the standby redo logs.

Questions:
1) Do I have to drop and re-create the online redo logs on the physical standby database so that the logfile views reflect the real availability of the online redo log files?
This would also make sure that the online redo logs are available to the database when the switchover takes place.

OR

2) Is the creation of standby redo logs on the standby database good enough for the database to function normally when the switchover takes place? Does the switched-over database use the standby redo logs as its regular online redo log files?

To be precise, I would like to know what happens when a physical standby database that has no online redo log files, but does have standby redo logs created, is switched over to become the primary and opened read/write.

Executing transaction without active Undo Tablespace

A reader, April 29, 2008 - 9:40 am UTC

I am getting the following error
swlk53:[tpw1]:[/u01/app/oracle/admin/tpw1/bdump]> tail alert_tpw1.log
***Warning - Executing transaction without active Undo Tablespace
***Warning - Executing transaction without active Undo Tablespace
***Warning - Executing transaction without active Undo Tablespace


Any idea why I am getting this?

Thanks

Tom Kyte
April 29, 2008 - 10:55 am UTC

Note 202102.1

sounds like you are using data guard and this is "normal" during a read only open...

DataGuard question in 9.2.0.6

Murugu, June 17, 2008 - 6:08 pm UTC

Tom:

I will be creating a new DB schema (user) and new tablespaces in the primary database. Will they be automatically replicated in the physical standby database?

Or do I need to create the user/tablespace on the standby?

Thanks for clarifying my doubt.
Tom Kyte
June 18, 2008 - 12:50 pm UTC

All very much documented:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96653/manage_ps.htm#1015423

    8.4 Managing Primary Database Events That Affect the Standby Database

        8.4.1 Adding a Datafile or Creating a Tablespace
        8.4.2 Dropping a Tablespace in the Primary Database
        8.4.3 Renaming a Datafile in the Primary Database
        8.4.4 Adding or Dropping Online Redo Logs
        8.4.5 Altering the Primary Database Control File
        8.4.6 NOLOGGING or Unrecoverable Operations
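Of the events listed, 8.4.1 is the one most people hit, and with STANDBY_FILE_MANAGEMENT set to AUTO the standby handles it without manual work. A sketch (tablespace name and paths are illustrative):

```sql
-- On the standby (and usually on the primary too, so the roles can swap):
ALTER SYSTEM SET standby_file_management = AUTO;

-- Now a tablespace/datafile added on the primary is created on the
-- standby by redo apply; creating a user is pure redo and needs nothing:
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/prod/app_data01.dbf' SIZE 100M;
CREATE USER app_owner IDENTIFIED BY secret
  DEFAULT TABLESPACE app_data;
```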


logs are not in sync (sync problem)

sandhya, June 20, 2008 - 2:21 am UTC

Dear Tom,
I created the standby db as per the procedure given.
It was working fine for a while.
As I have observed over the past few months, the logs are not getting applied (they fall out of sync).
Every time, I have been manually copying and registering the log.
After manually applying the log on the DR side, it stays in sync for a while (about 15 days).
Later, it falls out of sync again.
can u plz guide me..
plzz help
Tom Kyte
June 20, 2008 - 10:16 am UTC

... can u plz guide me.. ....

no, "U" isn't available.


You may contact support, and they'll collect tons of relevant information and help you diagnose what is happening.
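Before opening a support case, a few standard checks on the standby usually narrow a sync problem down; these are ordinary v$ views (add thread# predicates for RAC):

```sql
-- Any known gap in the archived logs received by the standby?
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- What has been received versus applied?
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;

-- Is managed recovery (MRP) actually running?
SELECT process, status, sequence# FROM v$managed_standby;
```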



A reader, April 15, 2009 - 12:03 pm UTC

We have a production data guard environment with a primary + a physical + a logical. We recently went through all our non prod environments and applied the latest cpu patch. Now we have to draft a plan to apply the same patch in our production environment as well. In the past, while doing similar patches, we have taken the approach of shutting both the databases down (declaring an outage), patching the binaries in both sites (primary + standby), bringing up the primary, patching the database and letting the logs catch up on the standby. We cannot do this anymore because

1. we have to try for no outage and
2. we have a logical standby in place as well.

In such a scenario, what is the most common way that the patches are applied?
Tom Kyte
April 15, 2009 - 2:17 pm UTC

Assuming current software....

rolling upgrade via your logical standby.

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14210/hafeatures.htm#CJAFCCFJ

In short, you'll upgrade logical standby, synchronize it with production, switchover (minimal 'pause' here, you have to switchover- not a failover), upgrade production, synchronize it with the database you switched over to, switch back.
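The switchover step in that sequence is a pair of commands; a hedged sketch of the SQL Apply (logical standby) form in 10g, per the documentation Tom links (PREPARE is optional but recommended):

```sql
-- On the current primary:
ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;
-- On the logical standby:
ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;

-- On the current primary (it becomes a logical standby):
ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
-- On the upgraded logical standby (it becomes the new primary):
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
```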

A reader, April 15, 2009 - 2:28 pm UTC

We are using Oracle 10.2.0.4.

From your response

"In short, you'll upgrade logical standby, synchronize it with production, switchover (minimal 'pause' here, you have to switchover- not a failover), upgrade production, synchronize it with the database you switched over to, switch back. "

After we upgrade the logical standby, do we switch over to the logical standby or to the physical standby? If we switch over to the logical standby, it invalidates the physical - and your response does not mention the physical standby. Can you clarify that please?
Tom Kyte
April 15, 2009 - 8:30 pm UTC

This is from Larry Carpenter - a lead guy on Data Guard/MAA

If you have a RAC primary database, then most CPUs and one-off patches are rolling upgradeable anyway, so you wouldn't need to use any kind of rolling upgrade or switchover to apply the CPU.

In the case of a single-instance primary (or the increasingly rare cases where a CPU or one-off patch is not RAC rolling upgradeable), then as long as there is nothing in the README of the CPU or one-off patch requiring that it be applied on both sides, and the process does not require any SQL to be run on the database, the following is OK to do with a physical standby for the purposes of patching the standby.

So, if the CPU you want to apply meets the requirements above (i.e. no SQL that needs to be run) then you can do the following:

* Shutdown the Logical standby first and apply the CPU.
o Restart the Logical standby and let it get synchronized
* Shutdown the Physical standby and apply the CPU.
o Restart the Physical standby and let it get synchronized.
* Switchover to the Physical standby
o The Logical will follow along nicely.
* Shutdown the original Primary (now a Physical standby) and apply the CPU
o Restart the new standby and let it get synchronized with the new Primary
* Switchover again to return to the original configuration.

Total downtime, if everything is set up correctly and you follow the best practices paper
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
(Data Guard Switchover and Failover) would be 1 minute per switchover.

If the CPU you are trying to apply does have some SQL that needs to be run (verify this in the README of the CPU) then you would use the normal rolling upgrade procedure (follow the MAA paper -
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_RollingUpgradeBestPractices.pdf
Rolling Database Upgrades Using Data Guard SQL Apply) to apply the CPU. In this case the Physical would follow along as a physical standby of the new Logical standby during the upgrade process. If your concern is that the Broker will invalidate the Physical standby if you switchover to the logical standby then you needn't be worried, you cannot use the Broker to perform rolling upgrades anyway. Non-Broker managed configurations do not invalidate the physical standby database as long as you have defined your parameters correctly. The MAA paper addresses this very nicely.



A reader, April 17, 2009 - 1:20 pm UTC

Thank you very much for the answer and direction. This CPU requires catbundle.cpu to be run, so I have to go with the MAA paper.

I read the paper and the example in appendix A. This is my understanding and I also have some questions that require clarification.

1. Apply the CPU on logical standby (includes upgrading the binaries and running catbundle.cpu)
2. Switchover to logical standby => At this point, will the original primary and physical standby both be physical standbys to the new primary (original logical ) ?
3. Upgrade the binaries on the original primary and physical standby.
4. We cannot run catbundle.cpu on the standbys because that requires a "startup upgrade" => can startup upgrade be run on physical standbys? Will the log apply catch up from the current primary (original logical) on both the standbys, instead of applying catbundle.cpu?
5. Switch back to original configuration.

Please let me know if this doesn't add up.

Thanks again for your help.

Tom Kyte
April 20, 2009 - 9:05 am UTC

Larry Carpenter wrote back to me:

So. He said (look for >>>):

Apply the CPU on logical standby (includes upgrading the binaries and running catbundle.cpu)

>>> Correct.

Switchover to logical standby => At this point, will the original primary and physical standby both be physical standbys to the new primary (original logical ) ?

>>> No, the Original Primary will be a Logical standby to the new Primary (the former Logical standby) The Physical standby will now be a physical standby to the new Logical standby. He will have to ensure that he has his LOG_ARCHIVE_DEST_n parameters setup correctly beforehand.

Upgrade the binaries on the original primary and physical standby.

>>> Correct, just make sure to shutdown both the new Logical standby and the Physical standby first.

We cannot run catbundle.cpu on the standbys because that requires a "startup upgrade" => can startup upgrade be run on physical standbys ?

>>> There is no need to do any upgrade to the physical standby. Once catbundle.cpu is run on the new Logical standby, the redo generated by that will be sent to the physical standby and 'upgrade' the database. All he has to do is mount the physical standby in the upgraded home (probably the same home in a CPU, because most people do in-place upgrades) and restart the apply. Data Guard will do everything else.

Will the log apply catch up from the current primary (original logical) to both the standbys instead of applying catbundle.cpu?

>>> No, he has to apply catbundle.cpu to the new Logical standby as he did to the original Logical standby. As I said, that will generate the redo which will upgrade the Physical standby database.

Switch back to original configuration.

>>> Yup.

A reader, April 17, 2009 - 4:27 pm UTC

I saw that this was updated, but I don't see any updates from you.
Tom Kyte
April 17, 2009 - 4:28 pm UTC

correct, I touched it so it would pop to the top, so I could find the link easily and send it to someone else to comment.

A reader, April 18, 2009 - 11:41 pm UTC

Hi Tom
I have two questions, regarding the standby issue:

1-)
Assume I configured physical standby database.
If I add a datafile to the production database, I will also have to add a datafile to the standby database (STANDBY_FILE_MANAGEMENT is MANUAL).

After I add a datafile to the production database, is it necessary to take a copy of the controlfile and replace the standby database's controlfile with it?

2-) If the primary database is configured on RAC, can I use a standalone database as a standby database instead of RAC?
Tom Kyte
April 21, 2009 - 12:27 pm UTC

1) why would you consider replacing a production control file with a standby's controlfile like that?

2) yes
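To expand on why copying the controlfile is unnecessary: with STANDBY_FILE_MANAGEMENT=MANUAL, the standard fix when redo apply stops on a file added at the primary is to create it on the standby; a sketch (paths illustrative):

```sql
-- On the standby, when recovery halts because the new file is missing:
ALTER DATABASE CREATE DATAFILE '/u01/oradata/prod/app_data02.dbf'
  AS '/u01/oradata/stby/app_data02.dbf';
-- then resume recovery; no controlfile shuffling is needed.
```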

A reader, April 20, 2009 - 11:48 am UTC

Tom/Larry - Thank you very much for your responses to my question on rolling upgrades in a mixed standby configuration. I have all the answers that I was looking for except one. In his response, Larry says that I have to make sure the LOG_ARCHIVE_DEST_n parameters are configured correctly to ensure that when I switch over to my logical standby, my primary will become the logical and the physical will follow the new primary. Is there any white paper or Oracle doc which specifies what the settings should be in order to make this work? Also, at the beginning of all this work I should disable my broker configuration and re-enable it after I switch back to my original configuration, correct?
Tom Kyte
April 21, 2009 - 2:30 pm UTC


What about using a Transient Logical Standby

Robert, April 21, 2009 - 3:23 pm UTC

Tom,

Are you familiar with the rolling patch apply or upgrade method that uses a "Transient Logical Standby" database?

This is for Data Guard configurations which only have a single *physical* standby (and no logical standby).

Here is link to Oracle white paper on this for 10g

http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_TransientLogicalRollingUpgrade.pdf

Not much documentation other than this.

Have you seen this done very often?

Thanks,

Robert.

Tom Kyte
April 21, 2009 - 4:01 pm UTC

yes, people do this - do you have a specific question about it?

we were just talking about it above?

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1354203041395#1635006200346007007


Transient Logical Standby

Robert, April 21, 2009 - 9:41 pm UTC

Tom,

What you were discussing above isn't the same thing as a *Transient* Logical Standby as described in the link I posted.

This technique uses only a 2 database configuration... a primary and a *physical* standby.
The physical standby is turned temporarily into a logical standby (transiently) so Data Guard can use SQL Apply to keep DG setup in sync.
Through a series of steps using flashback database and swapping control files, things are juggled around until at the end you have your original 2 database physical standby configuration upgraded and/or patched, with only 1 or 2 minutes downtime.

It is fairly 'complicated'... just wondering if you have heard much about it.

Thanks,

Robert.
Tom Kyte
April 23, 2009 - 12:00 pm UTC

the linked-to documents certainly did? The MAA papers...

The steps are documented; if you have the step-by-steps, it isn't really 'complicated' - no more so than "creating a database" is (which is just a set of steps executed in order).

Scofield, April 22, 2009 - 3:32 am UTC

Respected Mr Tom

Assume I am using a physical standby database which is in the mounted state.

In the event of a disaster, I issue "alter database open" on the standby database,
and then there will be transactions in this database.
When my primary database is back up, how can I synchronize them
and make the process work like before?
Tom Kyte
April 23, 2009 - 12:17 pm UTC

In general, after a failover (a FAILOVER - a disaster, requiring FAILOVER) you rebuild production (because production burnt up, flooded out, whatever)

but, you may be able to use the production instance as a physical standby for the failover site without rebuilding and then SWITCHOVER to put people back

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1050055

suggestion: read the data guard docs from start to finish...
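The scenario Tom links covers reinstating the old primary as a standby without a rebuild; a hedged sketch of the flashback-based variant (requires flashback database to have been enabled beforehand; the SCN value is illustrative and comes from the new primary):

```sql
-- On the NEW primary, find where the failover happened:
SELECT standby_became_primary_scn FROM v$database;

-- On the old primary, mounted, once the site is repaired:
FLASHBACK DATABASE TO SCN 1234567;          -- value from the query above
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```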

On Rolling Upgrades and Multiple Standbys

Larry M. Carpenter, April 22, 2009 - 4:03 pm UTC

When performing a rolling upgrade using a Logical standby database (the normal procedure, not the transient one) as outlined in the MAA paper:
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_RollingUpgradeBestPractices.pdf

you must remove the Broker configuration before starting and redefine it after you have finished all of your switchovers as part of the upgrade process. This means that you will have to manually define all of your LOG_ARCHIVE_DEST_n (LAD_n) parameters so that redo continues to ship. Assume Prd (Production), Phy (Physical standby) and Log (Logical standby).

Prd would ship to Phy and Log. When you switchover to Log (which becomes the primary) you want Log to ship to Prd and Prd to continue shipping to Phy.
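A hedged sketch of what those manually defined LOG_ARCHIVE_DEST_n parameters might look like for Larry's Prd/Phy/Log example (service names and destination numbers are invented; VALID_FOR determines when each destination is active, and the MAA paper has the full details):

```sql
-- On Prd: ship to both standbys while Prd holds the primary role.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=phy VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=phy';
ALTER SYSTEM SET log_archive_dest_3 =
  'SERVICE=log VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=log';

-- On Log: ship back to Prd once Log becomes the primary at switchover.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=prd VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prd';
```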

A reader, April 24, 2009 - 4:12 am UTC

Hi Tom

The physical standby db is in the mount state.
Assume I open the physical standby database for only 3 seconds and then mount it again.
I think I cannot apply the archives of the primary database anymore.
What is the main reason that prevents applying archives after that?
Tom Kyte
April 27, 2009 - 11:38 am UTC

assume you opened it read only. No problems, you can do that for 3 seconds or 3 days and still be a standby.

assume you opened it read/write. You just simulated a failover, a loss of production. The nano-second the database is opened read-write, it starts writing stuff, it DIVERGES from the production system. It becomes immediately *different*

Scofield, April 28, 2009 - 2:55 am UTC

Respected Mr Tom,
Assume I open the physical standby for two seconds and close it.
Probably no checkpoint will occur in this interval, and the checkpoint SCN of the datafiles and control file will not change.
So what makes recovering it impossible after that?

Tom Kyte
April 28, 2009 - 10:04 am UTC

I said:

The nano-second the database is opened read-write,

Meaning the INSTANT it is open, the act of OPENING makes it different from production.

forget checkpoint - it is not at all relevant.

A reader, May 30, 2009 - 3:06 am UTC

Dear Sir,
I know you mentioned it several times. Sorry about this silly question, but I still have doubts.

You told me:
"Even if the standby is opened for only nanoseconds, it will be different."

I am still wondering what makes recovery impossible after this?

Assume the checkpoint SCN of the datafiles is 999 in the primary,
and it is 800 in the standby.
When I open the standby for a nanosecond and close it, let's say it becomes 850. I can still do recovery so that it will be equal to 999.


Tom Kyte
June 01, 2009 - 7:45 pm UTC

it is the act of opening. The database - as soon as you open - starts doing its own transactions. Immediately. Right away. Very quickly.

as soon as you open read write, we write.

as soon as we write, you are different, you have diverged.

you would have to FLASHBACK the database to before the "open read write"
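That FLASHBACK-to-before-the-open idea can be made deliberate by taking a guaranteed restore point first. A hedged sketch (10g with flashback configured; the restore point name is made up, and the ACTIVATE step follows the manual "snapshot standby" style rather than an exact recipe):

```sql
-- On the standby, before diverging:
CREATE RESTORE POINT before_open GUARANTEE FLASHBACK DATABASE;
ALTER DATABASE ACTIVATE STANDBY DATABASE;   -- now read/write, diverged
ALTER DATABASE OPEN;

-- ... test ...

-- Undo the divergence and go back to being a standby:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_open;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```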

Scofield, June 23, 2009 - 5:30 am UTC

Respected Mr Tom;

I have been thinking about the below scenario for the last two weeks, but still have doubts.

I have a primary database and a standby database (not a real standby; it is in mount state, and
I apply archivelogs manually).

I will rebuild some tablespaces from DMT to LMT.

After I move the data that is inside test1 elsewhere, I will issue:


drop tablespace test1;

create tablespace test1
datafile 'C:\oraclexe\oradata\XE\test1.dbf' reuse;


Move the data back to the test1;


Again, I move the data that is inside test2 elsewhere and issue:

drop tablespace test2;

create tablespace test2
datafile 'C:\oraclexe\oradata\XE\test2.dbf' reuse;


Move the data back to the test2;


So on...



What I want to ask is: during recovery I hit an error message in the standby database.
What should I do in the standby database in this case?

If I copy the controlfile from primary database and issue:


alter database create datafile 'C:\oraclexe\oradata\XE\test1.dbf' as 'C:\oraclexe\oradata\XE\test1.dbf';
alter database create datafile 'C:\oraclexe\oradata\XE\test2.dbf' as 'C:\oraclexe\oradata\XE\test2.dbf';
...
...
and then start recovery, will it solve the problem?

I am not sure if I need to copy controlfile or something else?


Tom Kyte
June 26, 2009 - 9:06 am UTC

why would you do that - just

a) create new tablespace NEW_test1 as LMT
b) move segments from test1 into this new_test1
c) drop test1
d) rename new_test1 to test1


http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/standbym.htm#27175

is what you would do to manage the 'standby' bit

A reader, June 29, 2009 - 12:30 am UTC

Sir,


I used to have three indexes in NOLOGGING mode in the primary database.
When I scan the datafiles of these indexes in the standby database, I notice that the datafiles are logically corrupted.

Today, I dropped these indexes in the primary database and recreated them with LOGGING. The archivelogs of these steps were also applied to the standby.

Even though there is no object in NOLOGGING mode, when I scan the datafiles in the standby, somehow they are still logically corrupted!

What is the reason for that? Here is the dbv output for one of the datafiles:

DBV-00200: Block, dba -2126348425, already marked corrupted
DBV-00200: Block, dba -2126348424, already marked corrupted
DBV-00200: Block, dba -2126348423, already marked corrupted
DBV-00200: Block, dba -2126348422, already marked corrupted
DBV-00200: Block, dba -2126348421, already marked corrupted
DBV-00200: Block, dba -2126348420, already marked corrupted
DBV-00200: Block, dba -2126348419, already marked corrupted


Tom Kyte
July 06, 2009 - 6:01 pm UTC

are those blocks actually mapped to any active segment?

when you did the nologging thing, the blocks would appear corrupt in the standby since there was no redo to populate them. You ran something that said "hey, these are not good blocks". So you dropped the original segments (those blocks do not belong to a segment now) and you created new segments. The old blocks that were previously marked corrupt are still "logically corrupt" - but - they are not used by any segment so it doesn't matter.

So, please - check out what blocks they are, and run a query to verify yourself that they are not actually used by any segment.

And then enable force logging for your primary site so no one can mess you up in the future.
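Both of Tom's suggestions can be made concrete with standard dictionary views; a sketch (note dbv printed signed 32-bit dbas above, so add 4294967296 to the negative values before decoding):

```sql
-- Decode one dba from the dbv output (-2126348425 -> 2168618871),
-- then see which segment, if any, owns that block:
SELECT dbms_utility.data_block_address_file(2168618871)  AS file#,
       dbms_utility.data_block_address_block(2168618871) AS block#
  FROM dual;

SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :file#
   AND :block# BETWEEN block_id AND block_id + blocks - 1;
-- no rows back = the corrupt block belongs to no segment; harmless.

-- And stop future nologging damage at the source, on the primary:
ALTER DATABASE FORCE LOGGING;
```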

A reader, August 17, 2009 - 3:44 am UTC

Respected Sir;
I know that the temp tablespace is not used during media recovery. How about the undo tablespace? Is it used during media recovery?
Tom Kyte
August 24, 2009 - 8:47 am UTC

it could be, yes.

A reader, October 03, 2009 - 4:33 pm UTC

Dear Tom;

When I restore the controlfile and then the datafiles and perform media recovery (all archivelogs and redologs),
why do I always need to open resetlogs?
What is the reason for opening with resetlogs in this scenario? If I use the current controlfile, I can open without resetlogs.
Tom Kyte
October 08, 2009 - 6:08 am UTC

if you have the current controlfile - the last one - the up to date one - we know "we are at the end of the road - everything that can be applied has been applied - the logs do not contain a glimpse of a future that will never be - we are exactly where we left off the last time we were up and running"

if you have an old or create a controlfile by hand then "the logs might contain a glimpse of a future that was - but will never be again - the logs *appear* to want to go forward more, we are not sure, and in order to restore sanity - we reset the logs to say 'here we are, we are set to go, if there is any garbage in the logs going forward - we've reset it"

A reader, October 17, 2009 - 6:29 pm UTC

Thanks sir;
Based on what information does the controlfile know the end of the road? And why doesn't a backup controlfile know it?
Tom Kyte
October 22, 2009 - 3:54 pm UTC

because everything would sync up - the information in the control files would coincide with the datafiles, the logs - everything would be as of the same point in time.

and in the backup control file - which was taken say YESTERDAY and being used against files that were updated TODAY - that "sync" is not possible.

A reader, January 07, 2010 - 3:57 pm UTC

Respected Sir;

Assume log shipping method is set as default: archiver(ARCH)

When an archivelog is generated on the primary, RFS receives the redo and creates the same archivelog on the standby db, and MRP applies it to the database.
Why doesn't Oracle simply copy the generated archivelog from the primary to the standby, rather than having RFS read it and recreate it, etc.?

Tom Kyte
January 11, 2010 - 8:47 pm UTC

I don't know what you mean in this question, can you be a bit more specific?

but basically, we need a bit of infrastructure that knows to record what it is doing, when it is doing it, how it is doing it, when it completed and be able to respond to specific errors.

Standby Database

Ali Ahmed, January 29, 2010 - 1:29 am UTC

Hi Tom,
We are doing some big loads with SQL*Loader using the direct=y option on the primary site. In what ways could this affect a physical standby database? Are these loads going to corrupt the physical standby database or possibly create any other issues?
Thanks
Ali
Tom Kyte
January 29, 2010 - 4:09 pm UTC

as long as you do them with redo log generation, it is fine.

otherwise, if you use non-logged operations - see
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1018551

two standbys on same server

A reader, June 02, 2010 - 2:35 pm UTC

Hi Tom,

For our version 10.2.0.4, we have primary and standby running on separate servers.

We want to create a second standby on the same server where the first standby exists.

Do you see any downfalls in creating a 2nd standby on the same server? Resources/disk space etc is not a problem.

We want to use this 2nd standby as a reporting database. Logical standby is not a choice.

Thanks for your advice in advance!
Tom Kyte
June 08, 2010 - 10:02 am UTC

no reason it would not work - just beware of file conflicts - make darn sure of the setup of the second one. Might even be worth setting up a separate Oracle software account just to hold this so that it doesn't have the privileges necessary to damage the other one (that would just be a paranoid "just in case" move)

streams

A reader, August 14, 2010 - 3:03 pm UTC


data guard and dblink

A reader, July 26, 2011 - 4:15 am UTC

Dear Tom,

I am using a physical standby database for DG operation. My primary database updates a table through a dblink from a remote database. Now, if my standby doesn't support that dblink (i.e. the standby host cannot connect to that remote database), can I still get that updated table on the standby side?
Tom Kyte
July 28, 2011 - 6:24 pm UTC

physical standby only needs redo log information from production, it doesn't use SQL (or remote databases) to update the data, it is just in recovery mode.


data guard and dblink

A reader, July 29, 2011 - 1:39 am UTC

Dear Tom,

Thanks for the response. But I was wondering about the internal meaning of recovery-mode apply for a physical standby database. Can you please explain a bit?
Tom Kyte
August 01, 2011 - 11:27 am UTC

we are just applying redo - like we would for media recovery if you restored from backup. We are in recovery mode. We are not running SQL, we are applying binary redo to catch the datafiles up.
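"Recovery mode" here is literally the managed variant of the RECOVER command; for reference:

```sql
-- On the mounted physical standby: start background redo apply.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Stop it (e.g. before a switchover or a read-only open):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
```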

same instance_name on both primary and standby

rizwan, August 22, 2011 - 5:05 am UTC

Can I have the same instance_name for both primary and standby databases on the same server? Could you please elaborate on this for both Unix and Windows servers? I am not able to find the required information on the net.
Tom Kyte
August 30, 2011 - 2:11 pm UTC

if they have different oracle homes... on unix... yes.

On windows - I have never tried - you'd have to test that one out. The instance name is used in the service name - that might conflict.

waiting for answer

rizwan241, August 25, 2011 - 5:55 am UTC

I check this link daily to see if my above question has been answered. This is the only confusion I have about standby databases, so kindly elaborate in detail.
Tom Kyte
August 30, 2011 - 4:15 pm UTC

sorry, I was traveling extensively with full days of work. Had to take some time away... Just catching up now.


You know, it would have been easy to test....

let him tell us

kingy, August 30, 2011 - 4:38 pm UTC

i agree. please don't bother testing this tom. let this guy do the work and report back. it's easy to set up VMs to test this.

kindly elaborate in detail...

need experts advice

abhijit, September 07, 2011 - 1:07 pm UTC

I have a 10g database of around 350 GB, with a DR site located at a long distance (put another way, my network link is not up to the mark). The last archive that was applied on my standby was, say, arch_100.log. Now, due to some human error or maybe by my carelessness, I deleted archive sequences 101 and 102 from my production database server. Since my archives are not available, my standby database would never be in sync until and unless I provide archive sequences 101 and 102.

In this scenario, how will I get my standby database in sync with the production database without restoring my latest backup on my standby server?
Tom Kyte
September 08, 2011 - 4:56 pm UTC

You'd have to basically do what you ended with there - you'd have to get more current copies of datafiles there so that we would no longer need archives 101 and 102 to catch them up.

If you are running a DR site, it should be inconceivable that 101 and 102 could just go "missing". You broke all of your backups, you broke your DR site, you shouldn't be running the systems there. In short - you did the one thing a DBA cannot do - you removed the ability to recover the database.

Purpose of standby control file

Tom, October 25, 2011 - 8:42 am UTC

Hi Tom,

If setting up a manual standby database with Standard Edition is there any reason to use a standby control file as opposed to a standard control file?

Can I not just simply restore last nights backup onto a second machine, set up a process to copy across the archive logs regularly and a job to apply them using "recover database"? Then, if the production box fails, I simply issue a final "recover database" to apply remaining archivelogs, and then "alter database open resetlogs".

As far as I can tell, nothing in this process can be simplified by the use of a standby control file given data guard is not supported in SE, but would like confirmation there isn't some killer reason to use a standby control file that I'm not aware of.

Thanks

Tom Kyte
October 25, 2011 - 9:24 am UTC

if you are not using manual standby (which you can do)

http://www.databasejournal.com/features/oracle/article.php/3682421/Manual-Standby-Database-under-Oracle-Standard-Edition.htm

but rather you are doing it 100% by yourself (restore backup and manually recover the database every so often) - you would just use normal backup and recovery options.

Manual standby vs manual restore

Tom, October 26, 2011 - 12:29 pm UTC

I guess what I'm trying to get at is whether there is any difference between a "manual standby database" and simply restoring a backup and periodically rolling it forward if I am using standard edition.

The only thing I can think of is the ability to use "restore automatic standby database disconnect from session" to automatically apply archivelogs.

Are there any other differences?
Tom Kyte
October 26, 2011 - 1:29 pm UTC

you'll find the manual standby to be a bit easier, you'll have v$ tables you can consult, the recover standby database; command to automatically apply redo and so on.

If you want a standby, suggest you read the above link and do what it shows to do - it is pretty straight forward.
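The manual standby Tom recommends reduces to a short loop; a hedged sketch of the Standard Edition procedure the linked article describes (paths and scheduling are yours to choose):

```sql
-- One time, on the primary: make a standby controlfile, then restore a
-- backup of the datafiles plus this controlfile on the standby host.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';

-- Periodically, on the mounted standby, after shipping new archivelogs:
RECOVER STANDBY DATABASE;        -- or RECOVER AUTOMATIC STANDBY DATABASE

-- At failover time, after applying the last available logs:
ALTER DATABASE ACTIVATE STANDBY DATABASE;
SHUTDOWN IMMEDIATE;
STARTUP;
```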


Two standby databases on same host

Anand, November 08, 2011 - 1:42 pm UTC

Windows 2008 R2, Oracle 11g 64 bit 11.2.0.2.0

Server1 - primary PROD
Server2 - We need two standby databases (e.g. STBY1, STBY2)

1. Is it possible to create two standby databases on same host (e.g. STBY1, STBY2) ?

2. How to name two separate SPFILE files for two standbys?

3. What can we use for parameters ( sid, instance_name, db_name, unique_name, lock_name_space ) for STBY1 as well as for STBY2 ?


I truly appreciate any help or hint.


Thank you.
Anand
Tom Kyte
November 08, 2011 - 1:51 pm UTC

1) yes

2) they each have their own sids, the spfile is named after that

3) they'll have different sids at the very least.
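A hedged sketch of how the two standbys on one host might be kept apart (names illustrative; DB_NAME must match the primary, while DB_UNIQUE_NAME and the SID must differ, which also gives each its own spfile, e.g. spfileSTBY1.ora and spfileSTBY2.ora):

```
# initSTBY1.ora (ORACLE_SID=STBY1)
db_name        = PROD      # must match the primary
db_unique_name = STBY1     # must be unique per database
instance_name  = STBY1

# initSTBY2.ora (ORACLE_SID=STBY2)
db_name        = PROD
db_unique_name = STBY2
instance_name  = STBY2
```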

Standby to Primary

A reader, February 20, 2012 - 4:49 pm UTC

Due to an upgrade, I need to take a standby out of replication and make it a primary. Then once an upgrade is done on this database, take the original primary and make it a standby of this newly upgraded primary.

Do you have any suggestions on the best approach for this?

Tom Kyte
February 21, 2012 - 7:14 pm UTC

Just do a rolling upgrade.

http://docs.oracle.com/cd/E11882_01/server.112/e23633/intro.htm#UPGRD00190

you'll upgrade the standby, do a controlled switchover, upgrade the primary - and then you are done (you'll stop - you don't have to switch back over to the primary - but you can)

MADHU, February 23, 2012 - 9:59 am UTC

Hi, could I know the apps password in R12?

Redo logfile

MANNOJ S V, February 23, 2012 - 1:15 pm UTC

Hi Tom,

When we create an online redo logfile or a standby redo logfile on the primary, it is not replicated to the standby database.

When I recreated the standby redo logfile on the primary, it was not replicated to the standby; I manually created it on the standby database.

Can you explain the concept?

Thanks in advance

Why is renaming a datafile in the primary database not propagated to the standby database

Chakra, May 30, 2012 - 4:22 am UTC

Yo Tom,

I have a doubt; it's simple, but I could not find the answer to it.

"Why is renaming a datafile in the primary database not propagated to the standby database," even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO?
=======================================================
scenario:

Database 'A' is primary
Database 'B' physical standby

I want to move the datafile of tablespace USERS (the USERS tablespace has only one datafile) from one location to another.

in primary 'A':

1) I put the tablespace USERS offline (This will be recorded in the online redolog of primary)
2) OS command executed to copy the datafile
3) Now issue "ALTER DATABASE RENAME FILE" to reflect the change in the controlfile (this is also recorded in the online redo logfile of the primary)
4) USERS tablespace is now online.

In standby 'B':
1) already the database will be in mount stage
2) from step 1 on the primary, the online redo record will be shipped to the standby as well
3) the OS command is executed now
4) from step 3 on the primary, the "ALTER DATABASE RENAME FILE" redo records will be shipped from the primary
5) the USERS tablespace is brought online.

I am not sure why this is not happening automatically when the STANDBY_FILE_MANAGEMENT parameter is AUTO.
Please help me out with this.
============================================================
Thanks,
chakra


Tom Kyte
May 30, 2012 - 7:32 am UTC

http://docs.oracle.com/cd/E11882_01/server.112/e25608/manage_ps.htm#i1034172

because standby file management only - ONLY - deals with adding/dropping files - not renaming.

chakra, May 30, 2012 - 9:37 am UTC

Tom,

I do understand that it works this way.
But I want to know why. Why can't it be done automatically when STANDBY_FILE_MANAGEMENT=AUTO?
Tom Kyte
May 30, 2012 - 9:43 am UTC

because the people that make the Data Guard product decided to do it that way - it was a design decision they made.

In order to do the file rename - you'd have to actually move a file. Moving a file is quite different from creating a new file (easy) or removing an existing file (trivial). Moving a file on another system - where you might not have the same directory structure, might not have the right permissions, etc etc etc - is something they decided not to support.

chakra, May 30, 2012 - 10:09 am UTC

Tom,

That was really fast - fantastic and very important information.
Now I understand that "the people that make the Data Guard product decided to do it that way - it was a design decision they made."

What puzzles me is: when they automated creating a datafile (STANDBY_FILE_MANAGEMENT=AUTO) - where the standby might equally not have the same directory structure or the right permissions - why could they not also handle moving a datafile?

Regards,
Chakra


Tom Kyte
May 30, 2012 - 10:45 am UTC

the creation of the datafile is simple - they either have the same directory structures OR they have been explicitly remapped.

Moving a file is a nontrivial process. It is not something simple. You cannot just do a "mv" call.

they could have, the software could do anything, but in the grand scheme of things they decided "this will not be done", that is all.
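For completeness, the manual workaround on the standby side can be sketched roughly like this (paths are placeholders; this assumes a physical standby, and the key detail is that STANDBY_FILE_MANAGEMENT must be set to MANUAL before the standby will accept a RENAME):

```sql
-- On the standby: stop redo apply before touching files.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Copy the datafile at the OS level to its new location, then
-- allow the rename (AUTO blocks RENAME FILE on a standby):
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf'
                        TO '/u02/oradata/users01.dbf';

-- Restore automatic file management and resume redo apply.
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```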

Best way to transfer data from a source database to a target database

Shiva Prasad, March 07, 2013 - 8:24 am UTC

Hi Tom,

We have an existing OLTP system, and it is currently heavily loaded with reporting queries and ad-hoc analysis queries against the application DB. Response is slow, so we have decided to go for a separate DB for reporting and analysis. As part of this, we plan to replicate all the tables into the new DB and, in addition, create materialized views and dim/fact tables for reporting purposes. We are in the process of selecting the right approach for transferring data from the source Oracle DB to the reporting Oracle DB, and found the options below -

1) Oracle Streams
2) Data Guard with a logical standby
3) Oracle GoldenGate
4) An ETL tool

Management is looking for the most efficient, reliable, near-real-time (max 1 hour lag), cost-efficient :) replication, and wants to ensure the source DB system is not impacted.

Kindly provide your expert opinion, and please share any other option that makes more sense here.

Thanks in advance
Tom Kyte
March 07, 2013 - 9:04 am UTC

#2 or #3 would be the approaches I would look into.

streams is supported and will continue to be supported, but it is not our direction going forward - it will not be enhanced much, if at all.

#2 comes with enterprise edition
#3 is an extra cost option

#3 will have more flexibility than #2, you can apply transformations and rules to the data as it flows from source to target.

I suggest you research both, and see which fits best in your unique environment.

A reader, March 07, 2013 - 10:43 am UTC

Hi - I have an 11.2.0.3 database which is about 4 TB in size. I am looking into creating a standby database using RMAN duplicate from active database. Is it feasible to use this approach on a large database? Are there any best practices to follow in order to do this?
Tom Kyte
March 07, 2013 - 12:59 pm UTC

is it feasible: yes
do you need to have the resources to do it: yes

you'll need the IO bandwidth on the source, and the network bandwidth and IO bandwidth on the target, of course.
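For reference, the core of this approach is a single RMAN command, run while connected to the primary as TARGET and to the started (NOMOUNT) standby instance as AUXILIARY. This is a sketch only - the service names and DB_UNIQUE_NAME are placeholders:

```sql
-- From RMAN:  CONNECT TARGET sys@prod   /   CONNECT AUXILIARY sys@stby
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER                       -- apply redo generated during the copy
  SPFILE
    SET DB_UNIQUE_NAME='stby'     -- placeholder name
  NOFILENAMECHECK;                -- assumes the same file paths on both hosts
```

On a 4 TB database, allocating several AUXILIARY channels parallelizes the copy; the duplicate reads every datafile from the live primary, which is exactly the IO and network cost described above.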

Best way to transfer data from a source database to a target database

Shiva Prasad, March 08, 2013 - 8:27 am UTC

Tom,

Thanks very much for the valuable feedback!

I came across one more option: materialized view refresh on a scheduled interval from the source Oracle DB instance to the target Oracle DB instance. We have around 65 source tables and 38 denormalized tables in the target. The overall DB size is 6 TB, and on average the daily incremental data is approximately 25 GB.

Kindly share your thoughts here.

Thanks in advance.


Tom Kyte
March 11, 2013 - 8:15 am UTC

"ensure source system is not impacted"


materialized views would require materialized view logs on all tables (impacts source system)

the materialized view refresh process would need the source to participate in a distributed two-phase commit transaction for each refresh (impacts the source)
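To see why, here is a minimal sketch of what MV-based replication requires - the table name, database link, and refresh interval are all placeholders:

```sql
-- On the SOURCE: an MV log must track changes on every replicated table.
CREATE MATERIALIZED VIEW LOG ON emp WITH PRIMARY KEY;

-- On the TARGET: a fast-refreshable MV pulls changes over a db link.
-- Each refresh is a distributed transaction the source takes part in.
CREATE MATERIALIZED VIEW emp_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/24   -- hourly refresh
  AS SELECT * FROM emp@src_link;
```

With 65 source tables, that means 65 MV logs on the source - the overhead Tom is pointing at.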


Standby database from a physical standby

prad, March 21, 2013 - 6:45 am UTC

Hi,

we are planning to create multiple standby databases:

DB1 - primary
DB2 - physical standby
DB3 - physical standby

We want to create DB3 as a physical standby whose primary is DB2, so archived redo is applied from DB1 to DB2 and then from DB2 to DB3. We plan to drop DB1 in the future and retain DB2 as primary with DB3 as standby.

Is this kind of configuration possible?

Thanks
OracleO
Tom Kyte
March 25, 2013 - 3:53 pm UTC

monitoring readonly standby performance

Sal, April 02, 2013 - 4:49 am UTC

Hello Tom,

Since we cannot use AWR/ADDM or even OEM to monitor the performance of the standby (we open ours read-only for reporting and bring it down every couple of hours for recovery; we are still on 10.2.0.4),

what tools could I use to analyze standby database performance for a given time period? Last night a lot of queries got "stuck" and I had no way to figure out what happened at that time. The alert log showed no unusual activity and no errors.

Thanks for your help as always!
Tom Kyte
April 22, 2013 - 1:20 pm UTC

well, all of the v$ tables are available.

v$session_event
v$session_wait

might have been the first two I would peek at - that'll show what the session has waited on and is currently waiting on...
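Both views are queryable even while the database is open read-only; for example (the SID is a placeholder):

```sql
-- Cumulative waits for one session, worst first:
SELECT event, total_waits, time_waited
  FROM v$session_event
 WHERE sid = 123
 ORDER BY time_waited DESC;

-- What that session is waiting on right now:
SELECT event, state, seconds_in_wait
  FROM v$session_wait
 WHERE sid = 123;
```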


Alexander, May 06, 2013 - 4:19 pm UTC

Tom,

We recently used our Data Guard standby so we could make some changes on the primary. Because it was severed for a few days, I had to do an incremental restore to try to get it synced up. I keep hitting this problem now:
ORA-00600: internal error code, arguments: [3020], [11], [544479], [46681823], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 11, block# 544479, file offset is 165404672 bytes)
ORA-10564: tablespace BL_INDX
ORA-01110: data file 11: '/ocp34p/data/ocp34p/datafile/bl_indx_02.dbf'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 1035061
Sat May 04 14:57:01 2013
Dumping diagnostic data in directory=[cdmp_20130504145701], requested by (instance=1, osid=17010 (PR0E)), summary=[incident=908482].
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Recovery Slave PR0E previously exited with exception 600
Sat May 04 14:57:01 2013
Errors with log /ocp34p/arclogs/2_23915_735305123.dbf
MRP0: Background Media Recovery terminated with error 448
Errors in file /oracle/product/diag/rdbms/ocp34p_dr/OCP34P_DR/trace/OCP34P_DR_pr00_16982.trc:
ORA-00448: normal completion of background process
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Dumping diagnostic data in directory=[cdmp_20130504145702], requested by (instance=1, osid=16980 (MRP0)), summary=[incident=908234].
Recovered data files to a consistent state at change 10146755362971
MRP0: Background Media Recovery process shutdown (OCP34P_DR)


We have done this process many times in the past without fail, and now we always get these errors when we try it. I can copy the datafiles it complains about, but then it gets past that one and hits another; I can't do this for 40-50 30 GB datafiles across 1,000 miles. I also tried putting the database in BACKUP MODE and taking an incremental backup; that didn't help.

1) Do you have any idea what might cause this problem (even though these are ORA-600s, just take a stab in the dark from your experience)?

2) What would you recommend as the fastest, simplest way to completely rebuild a standby, since it seems I have no choice?
Tom Kyte
May 06, 2013 - 7:31 pm UTC

please utilize support for all ora-600's...

you might have another choice for #2, I would recommend getting in touch with support before doing anything (which might make things worse...)

Azhar, May 07, 2013 - 5:31 am UTC

Hi Tom,

I have some questions related to the standby DB:
1) What will happen if a non-system datafile in the standby database is removed at the OS level (accidentally, say, using rm)?
2) What will happen if a system datafile in the standby database is removed at the OS level?
3) What will happen if an undo datafile in the standby database is removed at the OS level?

Regards,
Azhar
Tom Kyte
May 07, 2013 - 3:30 pm UTC

1) that file will be unavailable - and if you tried to open the standby, the datafile would be unavailable - the tablespace would be pretty much gone.

2) the standby is dead until you restore it. you could not open it.

3) the standby is dead, we won't be able to open the database since the undo needed to roll back any in flight transactions will be gone.

in short - bad stuff. You have to manually sync up that file.
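For case 1, the manual sync might look roughly like this on the standby - a sketch only, where the file number is a placeholder and a usable RMAN backup of that file is assumed:

```sql
-- Stop redo apply before restoring the missing file.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- In RMAN, connected to the standby:
--   RESTORE DATAFILE 5;           -- file number is a placeholder

-- Resume redo apply; media recovery rolls the file forward.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```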

Is it possible to lose data with a switchover?

Robert, October 15, 2013 - 5:29 pm UTC

Tom,

We have Data Guard physical standby on 11.2.0.3.
Standby database is on a remote server.

Is it possible to do a successful switchover using Data Guard Broker.... and yet "lose" data because the most current logs have not been applied to the standby yet?

For example, consider the situation where you have heavy redo activity on the primary database and for whatever reason the network throughput to the standby server is very slow.... The standby database is working fine, receiving and applying redo, but due to heavy activity and network traffic it is about 1 hour behind the primary in applying logs.

What will happen if we try to do a graceful switchover at this point?
Will the switchover fail, or will it hang until all logs are applied?
I suppose if we really needed to switch to the standby at this point we would have to do a failover... and try to piece together the lost data manually via logminer, etc.?

Thanks for your help.

Robert.
Tom Kyte
November 01, 2013 - 6:42 pm UTC

a switchover is a controlled role change - not a failover. the switchover ensures all redo is sent and applied before the roles switch.


the switchover would have to wait for the redo.



http://docs.oracle.com/cd/E11882_01/server.112/e17023/sofo.htm#DGBKR3404

The act of switching roles should be a well-planned activity. The primary and standby databases involved in the switchover should have as small a redo lag as possible. Oracle Data Guard Concepts and Administration provides information about setting up the databases in preparation of a switchover.
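One way to check that lag before initiating the switchover is to query the standby directly:

```sql
-- How far behind the standby is in receiving and applying redo.
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');
```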

Thanks Tom for verifying what I was only 80% sure of.... but which I needed to be 100% sure of!!

Robert, November 01, 2013 - 9:29 pm UTC


online redo log protection

Alan, November 29, 2013 - 5:31 pm UTC

hi tom,

i am using oracle standard edition 12c. for a complete recovery, i would need to protect my online redo logs as well.

right now, i am manually syncing the archived logs over to the standby.

how do i protect my online redo logs? i can't do a manual sync, as the synced copy might be corrupted (while LGWR is writing to it)


Regards,
Noob

ORA-16224 when trying to view the explain plan

No Plan for me, March 14, 2014 - 3:52 pm UTC

We have read-only access on a secondary database (replicated using Data Guard). When I try to view the explain plan of any query, I get an "ORA-16224" error.

explain plan for select sysdate from dual;
ORA-16224: Database Guard is enabled

The database version is Oracle Database 10g Enterprise Edition Release 10.2.0.5.0.