A reader, May 07, 2002 - 8:19 am UTC
Hi,
The database is running in ARCHIVELOG mode on a Windows NT server.
The value for log_archive_dest_1 is a folder on the same server. As a precaution, I wanted to
specify one more location for the archived log files on a different system.
So, I created a mapped network drive (G:\) for a folder Archivelogs on a different system.
Then, if I put the following in the init.ora file:
log_archive_dest_2 = "Location=G:\test"
it fails, stating that Oracle cannot translate the destination.
I also tried giving the UNC path, "\\sys002\archivelogs\test". This failed with the
same error message.
Is it possible to configure log_archive_dest_2 to point to a different system?
Thanks.
Vijay.
LOG_ARCHIVE_DEST_2
Will, December 18, 2002 - 4:14 pm UTC
I have made LOG_ARCHIVE_DEST_2='location=\\other_machine\folder\' work on NT by doing the following in test:
o Using LOG_ARCHIVE_DEST_1 instead of LOG_ARCHIVE_DEST
o Setting the logon user of the instance and listener services to a domain account that is also a local admin
It seems to do what I want (a sketch of the settings follows this list), which is:
o Put the archived log in both places
o Continue with an error in the alert log if either destination goes bad (both DESTs are optional)
o Stop everything if both destinations go bad
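A minimal sketch of those settings, assuming a hypothetical local path; with both destinations optional and log_archive_min_succeed_dest at its default of 1, you get the "carry on if one destination fails, stop if both fail" behavior:
-- sketch: the local path D:\oracle\arch is hypothetical; both destinations optional
alter system set log_archive_dest_1 = 'location=D:\oracle\arch optional';
alter system set log_archive_dest_2 = 'location=\\other_machine\folder\ optional';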
Unlike what I am used to from past experience, when I make a destination good again (i.e. make space), logs do not automatically get written there again. In V_$ARCHIVE_DEST, the "fixed" destination still showed a status of ERROR. It was not until I reissued alter system set log_archive_dest_2 = 'location=\\other_machine\folder\' that subsequent logs got written there. Is this expected behavior? Is there a better way of resetting the error status of the destination? I am working in 8.1.6.
This feature seems to meet our needs pretty well and allows us to bring into the database some functionality that is currently done outside. Further, the alert log output would trigger notifications (pages, emails, etc.) for log archive problems as it does for other problems.
Your statement:
"It is possible but not highly recommended (too many inter-system dependencies)"
does not give me a warm and fuzzy feeling about using this feature in production. Can you provide some examples of inter-system dependencies, especially those that would be a factor even when we are writing the logs locally as well?
What if you're writing logs to another server for a standby database?
Thanks Tom.
December 18, 2002 - 4:44 pm UTC
The inter-system dependency is actually demonstrated in your case here:
o the remote system goes down
o we stop copying -- there are archives NOT on the other system
o you fix the problem
o it is up to you to MANUALLY fix up the missing archives (hence the fact that you need to manually fix the database to make it start moving logs again)
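A sketch of that manual "kick" (the column names are per V$ARCHIVE_DEST; verify them on your release):
-- see which destinations are in ERROR
select destination, status from v$archive_dest;
-- re-issuing the parameter resets the error status so archiving resumes
alter system set log_archive_dest_2 = 'location=\\other_machine\folder\';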
Same problem with local dest
Will, December 18, 2002 - 5:24 pm UTC
I changed LOG_ARCHIVE_DEST_2 from a remote machine to a folder on the local machine and experienced the same problem: I have to kick it to make it work again after DEST_2 goes from bad to good.
I guess I'm not quite seeing how this is an inter-system thing.
Also, isn't having to fix a broken secondary log location better than not having one at all?
My external routine for copying the logs would be in the same boat (and so would I) if the remote machine goes down.
Still not sure why it's better to use the external method instead of the database.
Thanks in advance.
December 19, 2002 - 7:11 am UTC
You want to have to manually kick it -- you have to sync them up anyway (e.g. if this remote archival location is to be of any use after a failure, you need to manually intervene in order to get the missed logs over there -- else, if it "just fixed itself" and started archiving again, you would be missing archives there and have a false sense of security).
The systems are dependent on each other; they need to work together.
Yes, having a secondary location is better than none at all -- but if it is sometimes up, sometimes down -- maybe not. I'd rather have something that is always there (like my tape drive)...
I don't know what this "external method" is.
getting your point
Will, December 20, 2002 - 9:58 am UTC
Given that I have seen this:
...
Fri Dec 20 05:03:39 2002
Thread 1 advanced to log sequence 91
Current log# 2 seq# 91 mem# 0: D:\REDO_CTL\TPAC\REDO2B.LOG
Current log# 2 seq# 91 mem# 1: F:\REDO_CTL\TPAC\REDO2A.LOG
Fri Dec 20 05:03:39 2002
ARC0: Beginning to archive log# 1 seq# 90
ARC0: Completed archiving log# 1 seq# 90
***** NO LINES DELETED FROM HERE *****
Dump file D:\Oracle\Ora81\RDBMS\trace\tpac\tpacALRT.LOG
Fri Dec 20 06:12:05 2002
ORACLE V8.1.6.1.0 - Production vsnsta=0
vsnsql=e vsnxtr=3
Windows NT Version 4.0 Service Pack 6, CPU type 586
Starting up ORACLE RDBMS Version: 8.1.6.1.0.
...
two mornings in a row since I started the testing described above, I think I'll continue to use the external method, which is just a simple batch file like so:
:repeat
title Archive files copying now
rem /d copies only files newer than the destination copy; /v verifies each write
xcopy <LOG_ARCHIVE_DEST_1>\*.* \\other_machine\folder /d /v >> d:\sql\archivecopy.log
title Archive waiting to copy
rem sleep (from the NT Resource Kit) waits 1800 seconds, i.e. 30 minutes
sleep 1800
goto repeat
Any ideas on why the database apparently crashed (quietly) and restarted? I am 99.999% sure that no person restarted the database at 06:12:05 this morning.
December 20, 2002 - 10:34 am UTC
The database would not restart automagically -- someone or something did that. The service would not just kick in.
Did someone reboot the server?
If you look in the event log (the event log, not the alert log), you should see SYSDBA connections being made (mandatory auditing that cannot be turned off). Do you see any around the time of this mystery? If so, I would suspect a shutdown abort followed by a startup.
disregard
Will, December 20, 2002 - 10:07 am UTC
Apparently the box blue-screened and restarted.
December 20, 2002 - 10:34 am UTC
windows, always up (well, always getting up anyway...)
REOPEN
reader, September 04, 2004 - 1:14 pm UTC
How does the REOPEN attribute of the log_archive_dest_n parameter behave?
For example, I have 2 redo log groups and one archive destination that is mandatory. When the arch process tries to archive to this destination and for some reason cannot access it, will it try again after n seconds? Does that mean that for those n seconds the database is in a 'hung' state, since the redo log file cannot be reused? Thanks.
September 04, 2004 - 1:44 pm UTC
ARCH will retry the archival operation on mandatory destinations. Whether the database is "hung" depends on whether the file needing archival is also required for reuse -- it won't "hang" until both:
a) that file needs to be reused, and
b) at least one of the mandatory destinations has not completed archiving it.
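As a minimal sketch (the path is hypothetical), a mandatory destination that ARCH retries 300 seconds after a failure would be set like this:
-- hypothetical path; reopen=300 means retry 5 minutes after a failure
alter system set log_archive_dest_1 = 'location=D:\oracle\arch mandatory reopen=300';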
Remote archiving
Joris, December 21, 2004 - 2:35 pm UTC
Hi,
I find myself in the same situation as Will.
Just 1 place to store archive logs - on the same server as the database - seems dangerous.
If the server is destroyed, recovery to the point of failure is no longer possible because we've lost the archive logs.
Even with Will's batch file copying all archives every 30 minutes, we could still lose up to 30 minutes of work.
What would be the best approach to get the archive logs away from the db server (other than complex data guard stuff)?
Kind regards.
Joris.
December 21, 2004 - 3:11 pm UTC
data guard complex?
you can set up optional archive destinations over network disks (optional so a network/machine failure of a single machine doesn't bring it all to a halt)
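For example, a sketch using the share from earlier in the thread (the reopen interval is an assumption):
-- share name from the thread; reopen=60 (retry after a minute) is an assumption
alter system set log_archive_dest_2 = 'location=\\other_machine\folder\ optional reopen=60';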
Remote archiving
Joris, December 22, 2004 - 2:00 am UTC
> you can set up optional archive destinations over
> network disks (optional so a network/machine failure
> of a single machine doesn't bring it all to a halt)
This is exactly what I do now, but from the above discussion I gathered that this approach is not advised (why else would Will be moving the logs himself?).
Kind regards,
Joris.
December 22, 2004 - 9:26 am UTC
why did you gather that?
Remote archive
Joris, December 22, 2004 - 12:49 pm UTC
From your followup above: 'It is possible but not highly recommended (too many inter-system dependencies)', and from Will, who chose his 'external' method to move the archive logs to another machine.
Anyway, I'll continue to let the arch'er do the moving for me. I don't like starting batch files when Oracle can do it for me.
Kind regards,
Joris Struyve.
December 22, 2004 - 1:57 pm UTC
These statements stand on their own...
Archiving to a network disk has the issues discussed above.
Data guard, with all of its complexity, actually addresses many of them -- almost all, in fact: if the downtime for the standby is short, there is no manual catching up at all.
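As a sketch of what that looks like (the service name standby_db is hypothetical), a destination that ships redo over the network to a standby, where gap resolution handles the catching up:
-- standby_db is a hypothetical Net8/tnsnames alias for the standby database
alter system set log_archive_dest_2 = 'service=standby_db optional reopen=60';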
network archive destination/manual intervention
Mark, November 16, 2006 - 12:23 pm UTC
Hi Tom,
A few years ago you wrote:
----
You want to have to manually kick -- you have to sync them up anyway (eg: if this remote archival location is to be of any use, after a failure, you need to manually intervene in order to get the missed logs over there -- else, if it "just fixed itself" and started archiving again, well, you would be missing archives there. You would have a false sense of security).
----
I am curious why the database couldn't "just fix itself" and attempt to re-write all failed logs. In other words, automatically catch up if something fails. The PostgreSQL docs claim to have something like this (WAL = write-ahead log, basically the same thing as an archived log). Is there a problem with this approach, or does Oracle just not have this feature?
"It is important that the archive command return zero exit status if and only if it succeeded. Upon getting a zero result, PostgreSQL will assume that the WAL segment file has been successfully archived, and will remove or recycle it. However, a nonzero status tells PostgreSQL that the file was not archived; it will try again periodically until it succeeds."
Thank you,
Mark
November 16, 2006 - 3:35 pm UTC
And what happens to the Postgres database when the redo it tries to archive no longer exists on the system? I have a feeling you are comparing an apple feature to an orange here.
So, does Postgres "hang" the database in order to "try again later", or what?
network archive destination/manual intervention
Mark, November 16, 2006 - 4:58 pm UTC
I think redo works differently on Postgres. Write-Ahead Logs (WAL) are written sequentially to the local log destination, pg_xlog/, and can accumulate indefinitely until archived. If any of the logs fail to reach the archive (network) destination, they are not removed from pg_xlog/ and another copy attempt is made. When they finally make it through, they are removed from pg_xlog/. If your network can't keep up the pace, pg_xlog/ builds up until your drive explodes. :)
"The speed of the archiving command is not important, so long as it can keep up with the average rate at which your server generates WAL data. Normal operation continues even if the archiving process falls a little behind. If archiving falls significantly behind, this will increase the amount of data that would be lost in the event of a disaster. It will also mean that the pg_xlog/ directory will contain large numbers of not-yet-archived segment files, which could eventually exceed available disk space. You are advised to monitor the archiving process to ensure that it is working as you intend."
That's from:
http://www.postgresql.org/docs/8.1/interactive/backup-online.html
So I hope that answers your question about what happens when redo no longer exists - it simply isn't removed until it's been archived. Also, I don't believe there is any reason for the database to hang. It seems that the network destination may not be "good" but your database is still operational as it tries to catch up.
Now I suppose Oracle cannot use this approach because online redo logs are of a limited size and therefore cannot accumulate indefinitely? Different architectures...
As always, any insight would be much appreciated.
Thanks,
Mark