Donghang Zhang, January 23, 2009 - 10:39 am UTC
Thanks Tom. What do you mean by moving pieces, could you explain more? Besides the redo log files, are there other pieces?
Is a logical standby less stable and more prone to errors even when I avoid the unsupported data types, or skip the objects that are not supported in replication?
Is the underlying mechanism of logical standby replication the same as Streams?
We are choosing a DR solution with reporting capability, and physical standby seems better for DR. In that case we would need one pair of servers configured as a physical standby and another pair configured as a logical standby for reporting, so we would need more licenses.
January 23, 2009 - 1:02 pm UTC
more moving pieces...
physical standby: binary redo is shipped from production to standby and applied just like recovery would apply it.
logical standby: binary redo is shipped from production to standby, where it is processed and turned into logical change records, which are queued up and then processed again into sql inserts/updates/deletes, which are then applied.
It just takes more setup, care, and feeding.
As I said, 11g allows a physical standby to be open read only for reporting even while applying redo.
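For reference, a minimal sketch of what that looks like on an 11g physical standby (Active Data Guard, a separately licensed option) - stop redo apply, open read only, then restart apply while the database stays open:

-- on the standby, assuming managed recovery is currently running:
alter database recover managed standby database cancel;
alter database open read only;
-- with Active Data Guard, apply can be restarted while the database remains open read only:
alter database recover managed standby database using current logfile disconnect from session;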
Downstream Change Data Capture
Bob, January 23, 2009 - 1:55 pm UTC
January 23, 2009 - 2:10 pm UTC
if I can kill two birds with one stone, I will - if I don't have a failover and someone forces me to create this reporting system I probably do not need, I'll want to make sure it is really buying me something.
So, I'd be leaning towards a standby implementation - since that provides value - first and foremost.
If the edict was "do not affect this production instance at all", downstream capture would make sense, as the processing is totally offloaded from the source system. It might be appropriate where the source system already exists and is at or near 100% capacity.
One more in favor of physical
Arup Nanda, January 25, 2009 - 5:46 am UTC
You can take backups off the physical standby database and never on the primary. In case of a failure these backups can be used to restore the primary. This saves a lot of CPU cycles on the primary (depending on the size to be backed up). A logical standby can't do that.
network requirements for setup dataguard
Harry Zhang, May 14, 2009 - 10:00 pm UTC
Hi Tom,
Thanks for the great site and great answers to the questions.
When setting up Data Guard, which network ports are needed for SQL*Net (e.g. port 1521)? And is it possible to have a one-way connection from production to the DR site - suppose I don't need production becomes DR when DR is activated?
May 15, 2009 - 1:27 pm UTC
you pick the ports, we do not mandate a fixed port. You configure sqlnet to use any port you like.
The DR site needs to be able to contact production.
Production needs to be able to contact DR site.
I don't know what that last bit "suppose I don't need production becomes DR when DR is activated? " means.
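To make the port question concrete, here is a sketch (with hypothetical TNS aliases) of where the port actually comes into play - redo transport on the primary just connects to a TNS alias, and that alias resolves to whatever listener host/port you chose on the standby side; the standby does the same in reverse for gap resolution:

-- on the primary, pointing at the standby (standby_tns is a hypothetical alias):
alter system set log_archive_dest_2='SERVICE=standby_tns ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' scope=both;

-- on the standby, pointing back at the primary for archive gap resolution
-- (this is one reason the standby must be able to contact production):
alter system set fal_server = 'primary_tns' scope=both;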
Richard, March 21, 2010 - 11:07 pm UTC
To reduce the workload on the OLTP instance, we are planning to have a separate instance for reporting purpose. Data from OLTP instance will be replicated to reporting database using physical standby(11g feature). My questions are
1) Is it possible to create any additional indexes or materialized views on the reporting instance (physical standby db - Oracle 11g)?
2) If we delete data (drop partitions) from the OLTP database, will it be deleted from the reporting instance (the physical standby database) as well?
March 22, 2010 - 9:26 am UTC
1) if you use logical standby, yes. physical - no, it is physical - in recovery mode, applying redo from production. It can only be what production is.
2) yes, that is what physical standby does.
If you want a historical reporting database - one that diverges from production - you will be looking at Streams (so you can configure what happens when you "delete") or GoldenGate.
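As for (1) on a logical standby - a minimal sketch (hypothetical object names) of adding a reporting-only index there, something a physical standby cannot do:

-- on the logical standby: allow DDL in this session, add the index, re-enable the guard
alter session disable guard;
create index orders_rpt_idx on scott.orders(order_date);   -- hypothetical table/index
alter session enable guard;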
excellent
Richard, March 23, 2010 - 9:07 pm UTC
We have an 11g RAC Oracle database and we don't want to overburden the OLTP instance, so we thought of bringing the changed data from the OLTP instance to a reporting instance via logical standby or Streams, and logging the changes using the Total Recall feature in the reporting instance.
Our idea is to retain the historical data using the Total Recall feature in the reporting instance and bring the changed data over from the OLTP instance via logical standby or Streams.
RAC database: OLTP instance ---> mine the changed data via logical standby or Streams ---> reporting instance (track all the changes made to a transaction using the Total Recall feature in this instance)
My doubts are:-
1) Will the Total Recall feature work with a logical standby database or Streams?
2) How do we transport Total Recall enabled tables from one database to another? Will Data Pump work with flashback archive enabled tables?
3) How do we capture the changes from one instance to another with Oracle Streams? I could not find much documentation on how Streams works with a RAC database. Any pointer/suggestion on this would be much appreciated.
If you have any other solution, please share with me.
Thanks a lot for your wonderful service to the oracle community.
March 24, 2010 - 5:53 am UTC
1) definitely will with streams, I have not tried it with logical standby, but I have doubts that it would work - that'll be something to research.
but streams, yes, because you can write your own custom applies if you want and the applies just do "sql"
2) you do not, that does not function as yet (transporting)
3) same as with non-rac really. We mine the redo logs. Just read the 'normal' documentation.
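For example, registering a custom apply handler with Streams is what "write your own applies" looks like - a minimal sketch with hypothetical names throughout, assuming an apply process and a handler procedure already exist:

begin
  dbms_apply_adm.set_dml_handler(
    object_name    => 'SCOTT.ORDERS',              -- hypothetical table
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => false,
    user_procedure => 'SCOTT.ORDERS_UPD_HANDLER',  -- hypothetical PL/SQL procedure
    apply_name     => 'APPLY_RPT');                -- hypothetical apply process
end;
/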
excellent answers..
Richard, March 24, 2010 - 8:55 am UTC
Much impressed by your crisp answers. Thanks for clarifying my doubts.
Is there any way to keep the history of changes (using total recall feature) in another instance without using oracle streams? Basically, we are trying to isolate the OLTP from the history database. Please shed some light on this.
March 26, 2010 - 10:57 am UTC
no, not using total recall, total recall works in a single database only.
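For illustration, a minimal sketch (hypothetical names) of what Total Recall looks like inside that single database - you enable a flashback data archive on a table and query the history right there:

create flashback archive rpt_history
  tablespace fda_ts        -- hypothetical tablespace
  quota 10g
  retention 5 year;

alter table scott.orders flashback archive rpt_history;   -- hypothetical table

-- history is then read with flashback query inside the same database:
select * from scott.orders as of timestamp (systimestamp - interval '1' day);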
11g Physical Standby
Mick, November 22, 2011 - 10:46 am UTC
Hi Tom,
in 11g, with a physical standby, whilst no writes are permitted against real tables, can we write to global temporary tables at all?
November 25, 2011 - 2:09 pm UTC
Not in 11g and before. Global temporary tables are not supported in a physical standby database...
DataGuard Flashback
khurram, November 27, 2011 - 9:26 am UTC
I had to test a failover from the primary to the physical standby database as a drill in a real environment. For that I planned downtime and intended to open the standby database in read/write mode after shutting down the production database. I configured flashback on the standby database and opened it after creating a restore point, but when I tried to revert the standby database to that restore point it gave me an error.
SQL> flashback database to restore point before_open_standby
2 /
flashback database to restore point before_open_standby
*
ERROR at line 1:
ORA-38769: FLASHBACK DATABASE failed after modifying data.
ORA-27103: internal error
Additional information: 27129
Additional information: -2
Additional information: 3
Following are the steps I followed to fail over to the standby database after cleanly shutting down the production database.
SQL> alter database recover managed standby database cancel
2 /
Database altered.
SQL> select open_mode from v$database
2 /
OPEN_MODE
------------------------------
MOUNTED
SQL> alter database flashback on
2 /
Database altered.
SQL> alter database recover managed standby database using current logfile disconnect from session
2 /
Database altered.
SQL> alter database recover managed standby database cancel
2 /
Database altered.
SQL> create restore point before_open_standby guarantee flashback database
2 /
Restore point created.
SQL> column SCN format 999999999999999999999
SQL> select scn,storage_size,time,name from v$restore_point
2 /
SCN STORAGE_SIZE TIME NAME
------------- ------------ ---------------------------------- ----
5970295936140 16613376 13-NOV-11 01.01.39.000000000 PM BEFORE_OPEN_STANDBY
SQL> alter database activate standby database
2 /
Database altered.
SQL> startup mount force
ORACLE instance started.
Total System Global Area 1.1476E+10 bytes
Fixed Size 1263428 bytes
Variable Size 2868906172 bytes
Database Buffers 8589934592 bytes
Redo Buffers 15511552 bytes
Database mounted.
SQL> alter database set standby database to maximize performance
2 /
Database altered.
SQL> alter database open
2 /
Database altered.
After that, one of our users connected to the application (EBS R12), which is also configured against that standby database, entered some records and committed. After that I tried to revert the standby database back to the restore point I had created before opening the standby database as primary.
SQL> flashback database to restore point before_open_standby
2 /
flashback database to restore point before_open_standby
*
ERROR at line 1:
ORA-38769: FLASHBACK DATABASE failed after modifying data.
ORA-27103: internal error
Additional information: 27129
Additional information: -2
Additional information: 3
I was confused and did not find any solution at the time, so I had to recreate the standby database from scratch. Please let me know what I did wrong and why it would not revert to its restore point.
November 29, 2011 - 7:10 am UTC
please open an SR with support - that is the first thing you should always do after an internal error, ora-3113, ora-600, ora-7445 - all internal errors should go that route.
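For reference only - not a diagnosis of the internal error above - a sketch of the generally documented 11g sequence for reverting an activated standby back to a physical standby, assuming flashback database was enabled and the guaranteed restore point still exists:

shutdown immediate
startup mount
flashback database to restore point before_open_standby;
alter database convert to physical standby;
shutdown immediate
startup mount
alter database recover managed standby database using current logfile disconnect from session;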
materialized view on physical standby db
A reader, July 10, 2012 - 11:52 am UTC
Hello,
If the application performs complete or fast refresh of materialized view on primary database, what are the actions that take place on physical standby database?
We have noticed that when the MV refresh happens, the standby gets delayed in applying log files. We haven't proven that this effect is related to the MV refresh, but I want to first understand the concepts here.
Appreciate your help.
July 11, 2012 - 5:06 pm UTC
we just apply redo. the materialized view looks like everything else, it is nothing special.
could the delay be attributable to the refresh in that the refresh causes a flood of redo that you don't normally have to deal with?
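A rough way to check that - just a sketch, not a definitive method - is to capture the session's 'redo size' statistic before and after the refresh (hypothetical MV name below):

select s.value as redo_bytes
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

-- run the refresh in this session, e.g.
-- exec dbms_mview.refresh( 'SCOTT.MV_SALES', method => 'C' );   -- hypothetical MV

-- re-run the first query; the difference is the redo generated by this session's refresh.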
follow up,
A reader, July 12, 2012 - 10:58 am UTC
Hello,
You are correct. During the MV refresh, we are seeing 6-7 times more redo generated. We suspect this could be the cause of the slowness in the apply process on the standby. We have ruled out any network latency while shipping redo files. However, our redo log file size is 250MB and a logfile switch is happening every minute (sometimes even less). Could increasing the logfile size help address this problem?
Thanks,
July 12, 2012 - 6:23 pm UTC
let us say that during "normal" times you are running at a good rate, say at 20-40% of 'full speed'.
if you start generating 6-7 times the workload - you'll not be able to apply it as fast anymore.
so, maybe it is just physics? are you sure you have the ability on the standby machine to apply 6-7 times the amount of redo in the *same* amount of time it takes to do that normal workload?
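One way to check - a sketch - is to compare the standby's apply rate, while managed recovery is running, against the redo generation rate on the primary:

-- on the standby:
select item, units, sofar
  from v$recovery_progress
 where item in ('Active Apply Rate', 'Average Apply Rate');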
follow up,
A reader, July 12, 2012 - 8:12 pm UTC
Looking at the present situation, the standby probably doesn't have the ability to catch up when there is a lot of redo.
About my other question - would increasing the size of the redo log files, to reduce the number of log switches, help?
Thanks,
July 16, 2012 - 1:25 pm UTC
if the issue is "you have generated more redo than the standby can apply in that amount of time", then it wouldn't really matter if you had more, or less, or the same number of redo log files.
It is the sheer volume of redo, the standby fell behind for a short while due to the large burst of activity.
follow up,
A reader, July 13, 2012 - 2:48 pm UTC
In continuation of my previous question, would increasing capacity by adding a node to the primary and standby help address this problem? We increased the redo log files to 700MB (almost 3 times the previous size) and we still see a gap in the apply rate.
Thanks,
July 16, 2012 - 3:50 pm UTC
it is not a problem of number of log files.
it is a problem of volume, of cumulative size of redo.
it wouldn't matter if you had 100,000 or 1 log file - if they added up to the same size.
Will adding capacity in the form of an additional node help? It depends, are you starved on cpu/memory? If so, maybe (but maybe not)
are you starved on IO, if so, almost certainly not, in fact extra cpu and memory might make it much slower.
follow up,
A reader, July 16, 2012 - 5:12 pm UTC
Thank you. It makes a lot of sense now.
But if my redo generation spikes during some part of the day, what can we do to make sure the standby still keeps up with the primary?
We tried adjusting parameters on the standby database: parallel_execution_message_size (set to 32k), recovery_parallelism set to 32, parallel_max_servers increased to 64 (it was 32 before; ours is a 16 CPU box with 120GB memory), and the buffer cache increased to 32G (it was 24G before). Are there any other parameters that can be used to tune this?
We write data synchronously to disk due to an OS/filesystem limitation we have (Veritas on HP-UX doesn't support asynchronous IO).
Thanks,
July 17, 2012 - 8:00 am UTC
what sort of delay are we talking about here - how far behind does the standby get?
for the reader
Keir, July 17, 2012 - 12:45 am UTC
For the reader above: what about your log shipping time? Does that lag behind as well, causing a knock-on effect on the apply rate?
If your shipping is late, is it possible to increase bandwidth?
If not, and everything is being shipped in a reasonable time, is it really a problem if there's a delay applying? The data is safe and at your standby. If a disaster happened, then by the time you wanted to switch to the standby, it might already have caught up and applied.
just a thought. cheers
Keir
apply delay - followup,
A reader, July 17, 2012 - 9:04 am UTC
Hello,
When the amount of redo on the primary increases to more than 70GB per hour we start seeing a delay in apply. Shipping redo files to the standby database server is fine, as we observed from the information in the alert.log (no network latency). The only time we see a delay is when the amount of redo increases on the primary. I have put sample information from v$archived_log below - you can see the size of redo varies.
The apply lag would normally be 1 second, and it starts going up - I have seen it go up to 2 hours. The active apply rate when there is a high amount of redo to be applied is around 20MB/sec (which is 72GB per hour).
COMPLETION_TIME LOGSWITCHES TOTAL_SIZE_IN_GIG
-------------------- ----------- -----------------
14-jul-2012 13:00:00 121 73.82
14-jul-2012 14:00:00 299 175.07
14-jul-2012 15:00:00 232 134.08
14-jul-2012 16:00:00 52 27.67
14-jul-2012 17:00:00 26 11.47
14-jul-2012 19:00:00 4 1.65
14-jul-2012 21:00:00 2 1.11
15-jul-2012 00:00:00 48 21.44
15-jul-2012 02:00:00 8 .27
15-jul-2012 13:00:00 4 .91
16-jul-2012 02:00:00 4 .77
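(A query along these lines - a sketch; the exact statement may differ, and you may need to restrict it to a single dest_id depending on your configuration - can produce the hourly summary above:)

select trunc(completion_time, 'HH24') as completion_time,
       count(*) as logswitches,
       round(sum(blocks * block_size)/1024/1024/1024, 2) as total_size_in_gig
  from v$archived_log
 group by trunc(completion_time, 'HH24')
 order by 1;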
Here is some information that I extracted about the apply lag:
NAME VALUE
-------------------------------- -------------
apply lag +00 00:00:19
apply lag +00 00:00:19
apply lag +00 00:07:34
apply lag +00 00:19:07
apply lag +00 00:31:28
apply lag +00 00:42:39
apply lag +00 00:53:03
apply lag +00 01:03:35
apply lag +00 01:14:29
apply lag +00 01:26:51
apply lag +00 01:36:58
apply lag +00 01:46:50
apply lag +00 01:57:00
apply lag +00 02:04:06
apply lag +00 02:03:35
apply lag +00 01:47:48
apply lag +00 01:05:40
apply lag +00 00:05:34
Thanks,
July 17, 2012 - 9:27 am UTC
and how is the IO system on the other site, it is capable of keeping up, is it as good as the primary?
apply delay - followup,
A reader, July 17, 2012 - 9:36 am UTC
Yes. The OS and storage teams didn't see any disk latency during the high volume apply time. The storage for the primary and standby shares a similar architecture and hardware.
If there is an IO issue on the standby, what would the wait events be if we ran statspack?
Thanks,
Logical vs Physical stnadby for upgrading 10g to 11g
Rohit, January 01, 2013 - 12:13 pm UTC
Hi Tom. As this is a post on logical and physical standby databases, I have a few questions on their usage for upgrading and migrating 10g databases. We have a few 10gR2 databases, a few 11gR1, and a few 11gR2 databases. We need to upgrade the 10gR2 and 11gR1 databases to 11gR2 and migrate them to new hardware/servers, and migrate the 11gR2 databases as-is to new servers. I was planning to use a physical standby for this, but instead I have been reading a lot about using a logical standby for such an upgrade.
1. Why can't I use a physical standby for such a migration, given that a physical standby is an exact copy of my primary database? The only documentation I found is about using SQL Apply for database upgrades/migrations.
2. Can we upgrade between different versions using a logical standby or a physical standby? I believe moving existing databases as-is, without an upgrade, is possible using a physical standby as well, isn't it?
3. Can't I just use this strategy for the existing 11gR1 and 11gR2 databases: create a physical standby on the new server with the desired Oracle version, ensure both primary and standby are in sync, switch over to the standby and do a role transition?
Will really appreciate your prompt response sir.
Thanks
Rohit
January 04, 2013 - 1:38 pm UTC
1) because we only support using logical or transient logical standbys for rolling upgrades. A physical standby database must have homogeneous versions. Think about it - the second you upgrade one of them, the other is immediately upgraded data dictionary wise. A logical standby doesn't mirror the system tablespace, so they can be at different releases.
2) we call it rolling upgrades and it has been available since 10.1.0.4
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf
3) a database is upgraded not only by changing the binary code (the oracle executable) but mostly by upgrading the data dictionary. You are missing that step entirely in your thought process there. Since a physical standby is bit for bit - byte for byte the same as the production copy - it would still be a 10g or 11gR1 database!
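As a sketch of the key step in a transient logical standby rolling upgrade (not the full documented procedure - among other things, dbms_logstdby.build must be run on the primary first), the existing physical standby is temporarily converted to a logical standby while keeping its identity, upgraded, and later converted back:

-- on the physical standby, with managed recovery stopped:
alter database recover managed standby database cancel;
alter database recover to logical standby keep identity;
-- ...upgrade the (now logical) standby to the new release, let SQL Apply catch up,
-- switch over, then flash back the old primary and convert it to a physical
-- standby of the upgraded database.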
Upgrade using logical standby
shyamakanta, January 05, 2013 - 8:40 am UTC
@Rohit
A physical standby has certain limitations - for example, both databases have to be at the same version - so we can't use it for the upgrade process.
A logical standby does not replicate changes in the sys/system schemas.
It supports cross-version replication and can be used for upgrades.
We are currently following the same approach to upgrade 10gR2 to 11gR2.
Follow-up question on upgrades
Rohit, January 07, 2013 - 5:42 am UTC
Thanks for the clarification Tom
@Shyamakanta - We are actually planning to do some 10gR2 to 11gR2 upgrades and a migration from the AIX to the SUSE Linux platform. Would you be able to share your experience/planning regarding the changes necessary at the DB code level and app code level w.r.t. the 10g to 11g upgrade? Since this thread is not meant for upgrades/migrations, can you please drop me a mail at rohit.gupta04@gmail.com where I can follow up with you on some specific questions?
January 14, 2013 - 11:23 am UTC
you won't be using data guard for that, data guard does not go across platform.
You can use goldengate for near zero downtime database upgrades coupled with OS migration - but not dataguard.