

Question and Answer

Tom Kyte

Thanks for the question, Vladimir.

Asked: February 08, 2001 - 8:58 pm UTC

Last updated: August 17, 2012 - 10:08 am UTC

Version: 8i


You Asked

Hi Tom,
thanks for your help before, and I have some small questions for you:
-I would like to know what the difference is between creating a DB in archivelog mode or not.
-When I create a DB with Oracle Database Assistant, which mode does it create it in, ARCHIVE or NOT ARCHIVE?
-How can I know if a certain DB is running in ARCHIVE or NOT ARCHIVE mode?
-To change a DB to ARCHIVELOG MODE, must I log in as system and do:
alter table v$database set log_mode=ARCHIVELOG;
-If I have a DB running in NOT ARCHIVELOG MODE and change it to ARCHIVELOG MODE, will I lose some data?

Thanks very much

and Tom said...

o typically a database is CREATED in noarchivelog mode to avoid the generation of tons of archives from the create itself. After the database is created and loaded and you are ready to go production, you would turn on and configure archivelog mode. Prior to that, it is typically easier to rebuild than to spend the time generating and archiving all of that redo log (you'll generate lots of it just to do the create and initial load).

o ok that is a fact.

o select log_mode from v$database;

o see the documentation for the proper procedure. You cannot "alter table v$database"

o no, you'll be PROTECTING yourself from the potential loss of data. It is a database in NOarchivelog mode that will lose data some day (due to media (disk) failure)
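For reference, a minimal sketch of checking the mode and the proper enable procedure (connected as SYSDBA; details vary by release -- on 8i you would also set log_archive_start=true in the init.ora so ARCH starts automatically):

```sql
-- check the current mode
select log_mode from v$database;

-- enable archivelog mode (requires a clean shutdown first)
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- confirm the new mode and archive destination
archive log list
```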




Need your expert Advice

Yogeeraj, March 20, 2002 - 5:03 am UTC

(Hope that my question is relevant here ;))

Is there a measurable performance degradation when operating in ARCHIVELOG Mode compared with NOARCHIVELOG?

My story is:
We are setting up a server which will be hosting an online real time (both CPU and Disk intensive) application.
NT server 4.5
Oracle 7.3.4
IBM E-server with RAID 5

The Application was developed on Delphi and uses DCOM objects and uses considerable amount of memory for keeping real time data and providing them to online/automated processes (Programmable Logic Controllers). Frequent database operations are performed only to synchronise "memory" data with tables in the Database (using SQLNET(?)). Hence, the database is used only for temporal synchronisation during job start/completion, System startup and system shutdowns. The 3rd party application has its own way to keep its data in memory/files.

According to the "external" engineer installing the system, their company does not recommend use of ARCHIVELOG mode -- the reason being performance deterioration.

So far so good....

Now, my Main production system has a link to their database, and does frequent data uploads and downloads. (upload: send jobs; download: get job status, etc). Thereby, synchronising everything.

We know that ARCHIVELOG mode increases the robustness of the system and makes recovery relatively easy in case of crashes.

My seniors are more keen to go for NOARCHIVELOG mode, as recommended by the external engineer, than for what I recommend...

What should i do?

thank you for your patience, time and precious guidance.

Best Regards

Tom Kyte
March 20, 2002 - 11:35 am UTC

gee, why would you use 7.3.4 (not supported, ancient software, missing thousands of new features) today???

Oh well.
The "external engineer" should be fired immediately. I will say it simply -- he is so wrong, so ridiculously wrong, I would call him stupid. Their company must LOVE to lose data, as that is what happens in noarchivelog mode. You will lose data.

Archive logging, in a properly configured system, adds 0% performance degradation. It is all about IO. If you have the right number of devices and IO channels -- no worries, no "degradation".

So far so bad if you ask me.

Don't do it. Just don't do it.

Performance in Archivelog Mode

mohammad taha, March 20, 2002 - 1:01 pm UTC

"If you have the right number of devices and IO channels -- no worries, no "degradation"."

Would you please explain, with examples, how to decide what the right number of devices and IO channels should be for a database of around 50 gig?

Tom Kyte
March 20, 2002 - 2:48 pm UTC

50 gigs, 5meg -- size of the database has nothing to do with it.

Transaction volume, that has everything to do with it.

Archiving is not very CPU intensive -- it involves copying a file from one disk to another disk.

You need sufficient devices to avoid contention here. You want to make it so that when LGWR is writing to a device, ARCH is *not* reading that device. So, you would have log group 1 on dev1 (mirrored to dev3). log group 2 on dev2 ( mirrored to dev4). log group 3 on dev1/dev3, log group 4 on dev2/dev4 and so on.

Well, LGWR writes to dev1/dev3. Arch is reading dev2/dev4 and writing to dev5. Arch finishes and waits. LGWR now writes to dev2/dev4; Arch reads dev1/dev3 and writes to dev5. No contention, there you go -- smooth operation, no degradation.

Only if lgwr and arch start stomping on each other, or the archiver cannot keep up with the logs that lgwr is producing, would there be an issue -- and then, like everything else in the database world, you tune that process: configuring sufficient disks, making sure they don't contend with each other.

The trade off -- eg not using arch, being in noarchivelog mode -- is a 100% surefire way to LOSE data. It is not a 10% chance, not a 50/50 chance -- you WILL lose data (raid 5, raid whatever, you WILL lose data. Point in time recovery is done for dozens of reasons, media recovery being but one of them. Even raid 5 fails (believe it or not))
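The alternating layout described above could be sketched like this (device paths and sizes are made up for illustration):

```sql
-- odd groups on dev1 (mirrored to dev3), even groups on dev2 (mirrored to dev4):
alter database add logfile group 1 ('/dev1/log1a.rdo', '/dev3/log1b.rdo') size 100m;
alter database add logfile group 2 ('/dev2/log2a.rdo', '/dev4/log2b.rdo') size 100m;
alter database add logfile group 3 ('/dev1/log3a.rdo', '/dev3/log3b.rdo') size 100m;
alter database add logfile group 4 ('/dev2/log4a.rdo', '/dev4/log4b.rdo') size 100m;
```

With this layout, while LGWR writes group 1 on dev1/dev3, ARCH reads the just-filled group 2 on dev2/dev4 -- never the same spindles at the same time.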

multiple archive destinations

Reader, July 19, 2003 - 2:14 pm UTC

Tom, if I have specified multiple archive destinations, do I have to have a corresponding number of ARCH processes? Say, for example, if I have 3 archive destinations, should I have 3 ARCH processes to archive to these destinations? Could you give me a guideline on how many ARCH processes one should have in the database and what dictates the specification of multiple ARCH processes? Thanks.

Tom Kyte
July 19, 2003 - 6:16 pm UTC

"have to", no

"should", you might consider it -- they can all work at the same time together.

If you find arch running behind (eg: you get "cannot allocate new log, archival required" in your alert log), you might want more arch processes.
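As a sketch, on releases where this is a dynamic parameter (9i and later) you might raise the archiver count like so:

```sql
-- allow up to 3 archiver processes to run concurrently
alter system set log_archive_max_processes = 3;

-- then watch the alert log (and v$archived_log timings) to see
-- whether archival is still running behind
```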

How does it do?

Reader, July 20, 2003 - 10:59 am UTC

Assume that I have configured 3 ARCH processes, I have three members each in my 3 log groups, and I have, say, three arch destinations. How does oracle divide the workload of copying the online redo log file to my 3 arch destinations? My understanding is that the arch process copies from only one "valid" redo member of the group to the arch destinations. Now, as I have three arch destinations, will the three ARCH processes that I have configured simultaneously copy the redo member to all three of these arch destinations?
How is that different from one arch process doing the same to three arch destinations? Is there a one to one mapping between the number of arch processes and the number of arch destinations? Thanks for your time in clarifying this.

Tom Kyte
July 20, 2003 - 3:53 pm UTC

it is not documented how the work is divvied up....

Article-ID: <Note:73163.1>

has some info tho

SAN and archive dest

reader, June 08, 2004 - 12:16 pm UTC

What is the use of specifying multiple archive destinations if I used SAN? Thanks.

Tom Kyte
June 08, 2004 - 1:21 pm UTC

what does a SAN have to do with anything? SAN is just a way of not having to have direct connect disks.

So -- tell us why you think a SAN would obviate the need for multiple archive destinations?

Strategy for Data Warehouse doing daily full reloads

Justin, January 28, 2005 - 7:19 am UTC

We have a very large database which is almost fully reloaded every night because the developers didn't invest any effort in doing incremental loads. Essentially it's like doing a first time load every day. I'd like to run the database in archivelog mode to protect against data loss and maximise availability, but not sure how to deal with the tons of redo log. Any thoughts or recommendations?

Tom Kyte
January 28, 2005 - 8:09 am UTC

is the database modified during the day?

is part of the reload involving a "bounce" of the database?

Strategy for Data Warehouse doing daily full reloads

Justin, January 28, 2005 - 9:28 am UTC

There are some modifications during the day, 'though not very many. The reload does not involve a bounce.

Tom Kyte
January 28, 2005 - 2:21 pm UTC

are the modifications "mission critical" -- I mean, they would really be the only things in need of protection here.

The data can just be reloaded (apparently) in the event of a failure.

One approach could be:

a) a staging database elsewhere that does the "reload" (noarchivelog)
b) place the reloaded data into read only tablespaces
c) transport these files off
d) drop the tablespaces in the other database (the one that is in archivelog mode)
e) import these files (they are in read only tablespaces, leave them that way)
f) copy the read only files to tape

In the event of a failure -- restore read only files or recover the small amount of data that is read write.

Does that sound within the realm of "feasible"?
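Steps (b) and (c) might be sketched like this on the staging database (the tablespace and file names are hypothetical; the transport itself used the exp/imp utilities of that era):

```sql
-- (b) make the freshly reloaded data read only
alter tablespace dw_data read only;

-- (c) export the tablespace metadata for transport; then copy the
--     datafiles plus the dump file to the target host, e.g. from the OS:
--       exp transport_tablespace=y tablespaces=dw_data file=dw_data.dmp
```

On the target, after dropping the old tablespace, `imp transport_tablespace=y` plugs the copied datafiles back in, still read only.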

Strategy for Data Warehouse doing daily full reloads

Justin, January 31, 2005 - 2:45 am UTC

Yes it is. At least it gives me something to think about. I was wondering if I was missing anything obvious... but i get the sense now that i need to be creative :-)

Archiving Concepts

Totu, February 24, 2005 - 9:20 am UTC

Dear Tom.
Maybe because my first language is not English, I have difficulty understanding the relationship between data changes <> writing to the redo log <> archiving.
For example: does LGWR write to the redo log before the commit operation?
Or: I inserted 100,000 records into a table but could not commit, because Windows restarted suddenly. The database is in archivelog mode. Will that data be recovered by Oracle automatically?

Thanks in advance.

Tom Kyte
February 24, 2005 - 9:26 am UTC

when you commit, one of the things that happens is lgwr writes your redo log to disk.... so it is part of the commit operation.

the 100,000 records will be "rolled back" upon restart -- since you did not commit them it will be as if they didn't ever get inserted (because technically, they did not)

Noarchivelog and large inserts without commit

totu, February 25, 2005 - 1:10 am UTC

Let's say in noarchivelog mode we inserted (but did not commit) 100,000 records into a table.
After that we want to do another insert of 100,000. During that insert the redo log buffer fills and LGWR flushes all of the redo log entries in the redo log buffer to an online redo log file, even though the first 100,000 redo records are not committed. Now it is possible to resume the insert operation.
We don't commit. Again, another insert of 100,000...
So we did 3 inserts but committed only at the end of the third one. Let's say the redo for the 1st and 2nd inserts has been overwritten in the redo log. I think the last insert has been written to the datafiles.
But there are these lines from the DB Admin Guide:
Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled.
1. If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the datafiles.
2. If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.
I didn't commit after the 1st and 2nd inserts and their records in the redo log were overwritten, so why must all of them be written to the datafiles? I don't think so; I think Oracle will roll back the 1st and 2nd inserts before overwriting them.

Please, let me know where I am correct and incorrect.

Thanks a lot.

Tom Kyte
February 25, 2005 - 4:54 pm UTC

you have a transaction -- not three statements, you have a TRANSACTION

It began with your first modification. It ended when you committed. (don't think statements, think "transaction")

Oracle is designed to protect transactions. If you committed and got a response back from the server saying "committed", it was.

Do you have Expert one on one Oracle? I cover this sort of stuff in great detail, but in short, before we advance into a redo log, we make sure ALL of the blocks protected by that redo log have been CHECKPOINTED to disk -- so we do not need that redo information for crash recovery (the modifications are in the datafiles already). This is true in archive and noarchive log mode.

more clarification

Reader, March 02, 2005 - 1:15 am UTC

Dear Tom,

i am reading Database Administrator’s Guide for Choosing Between NOARCHIVELOG and ARCHIVELOG Mode.

"The choice of whether to enable the archiving of filled groups of online redo log
files depends on the availability and reliability requirements of the application
running on the database. If you cannot afford to lose any data in your database in
the event of a disk failure, use ARCHIVELOG mode. The archiving of filled online
redo log files can require you to perform extra administrative operations."

Which extra administrative operations does it mean?


Tom Kyte
March 02, 2005 - 7:10 am UTC

making sure you have space for the archives, and making sure the archives are copied from that space to tape or another system in a timely fashion (the goal is to have a copy of them for use in a restoration on another system in the event of media failure; if your disks go bad, it would not be a good thing to have your archives sitting on those same disks)

I can put the above much more simply:

if you run in noarchivelog mode, you will lose data someday. You will lose all work that occurred between the time of the failure and your last full cold (offline -- with the downtime needed to do so) backup. Make sure your end users and management are fully aware of this fact: you will lose it.
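The configuration side of those operations might look like the following sketch (directory paths are hypothetical; the log_archive_dest_n parameters are available from 8i Enterprise Edition on):

```sql
-- primary archive destination
alter system set log_archive_dest_1 = 'location=/u01/arch' scope=both;

-- a second destination on different disks, as insurance against
-- losing the archives along with the datafiles
alter system set log_archive_dest_2 = 'location=/u02/arch' scope=both;

-- name archives by thread and sequence number
alter system set log_archive_format = 'arch_%t_%s.arc' scope=spfile;
```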


Jing Mandia, March 10, 2005 - 10:07 pm UTC

I liked the discussion that went along with this topic. It's a lot of help especially for an Oracle newbie like me. Thanks.


Excellent discussion, Tom, you are really gr8

Neeraj Ranjan Rath,Hyderabad, May 17, 2005 - 11:12 am UTC

I would like to ask 1 question:
How is performance affected when we use archivelog mode for a data warehouse?
I feel that, as there are fewer DML operations happening, it has the least effect on DWH databases.
Please correct me if I am wrong.

Thanks in Advance

Tom Kyte
May 17, 2005 - 2:09 pm UTC

DWH tend to do lots of modifications (bulk loads).

on a well laid out system, archive log mode can have hardly any effect, a matter mostly of ensuring sufficient IO bandwidth.

in a DW, if you operate in noarchivelog mode, many bulk operations take place with DIRECT IO and no logging -- you can avoid this IO altogether. It can have a large impact on the sizing and IO considerations for a data warehouse.
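Those direct-path, minimal-redo operations are the ones you would otherwise have to request explicitly with NOLOGGING when running in archivelog mode (object names here are made up; note that NOLOGGING objects are not recoverable from backups taken before the operation, so they must be re-backed-up or reloadable):

```sql
-- direct-path create table as select, generating minimal redo
create table sales_copy nologging
as select * from sales;

-- direct-path insert into an existing table
insert /*+ APPEND */ into sales_copy
select * from sales_stage;
commit;

-- index build with minimal redo
create index sales_copy_ix on sales_copy (sale_date) nologging;
```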

noarchivelog and buffered I/O of OS

nazim, March 12, 2007 - 12:55 pm UTC

If the OS is using buffered I/O, and the database is in noarchivelog mode, and after a checkpoint the online redo is overwritten, but the data was not actually written to the datafiles because the OS buffered it, and then the instance crashes -- upon restart, how will oracle do instance recovery, since the redo has been overwritten and the datafiles were never actually written?

Tom Kyte
March 12, 2007 - 10:05 pm UTC

we never use buffered IO on the online redo logs. The OS not-withstanding, we tell it "do not buffer, thanks"

buffered I/O

A reader, March 13, 2007 - 3:11 am UTC

Yes, I know, but what about datafiles (.dbf)?
The OS may report to DBWR that it has written (when it actually didn't) and then the online redo log will be overwritten. Now the system crashes, so the data is lost, because it doesn't exist in the online redo log and also does not exist in the datafile.
Or are the datafiles also opened in un-buffered I/O mode?

Biel, February 13, 2008 - 12:52 pm UTC

I have a production DB (9.2) with two application schemas. The first is used to store transient data sent from external counters. This schema is constantly updated with UPDATE statements just setting new counter readings.

Tables typically do not grow (or grow little, e.g. 10 rows each month in an "update, and if not found then insert"), so rows are just updated constantly in a loop fashion. 3GB of total DB size and 12GB of archivelog/day. The process is ill-designed, without bind variables, but it's really difficult to get it changed as it's an external entity with its own agenda.

The second schema is used by an application that timely reads current transient values in first schema and store it on its own tables, keeping historical grow data.

Most archiving is due to activity on the first schema, and we are thinking of separating the schemas into 2 different DBs: the transient one in noarchivelog mode, and the historical one in archivelog mode, performing the data extract through a dblink (not much data). Furthermore, we could isolate the first DB and throw in some cursor_sharing = force without fear that the CPU usage gains will sky-rocket archiving usage.

What do you think about this setup?

As this is 24x7, and if we have the first DB in noarchivelog, what options do we have to perform a backup of it? I was thinking about not doing a backup at all, but just a consistent export to preserve structural changes in schema objects and some past data. A full "recovery" would imply recreating the instance and schemas, but I find that more acceptable than weekly downtime for an offline consistent backup.

Second DB is archivelog + hot backup.

I'll appreciate your point of view.

Thank you,

Tom Kyte
February 13, 2008 - 1:20 pm UTC

I would be hard pressed to ever suggest multiple databases and distributed complexity

How much?

A reader, February 12, 2009 - 1:53 pm UTC

Hi Tom,

I know it may be hard, but I am just looking for the tangible benefits, in terms of loading time, of NOARCHIVELOG mode in a DW over ARCHIVELOG mode.
Is there any way I could "guess" how much loading time I would save by running the DW in NOARCHIVELOG MODE? I.e., if it takes 3 hours to load data in NOARCHIVELOG MODE, will it need at most 3 hours (worst case) of additional time in ARCHIVELOG MODE?
Tom Kyte
February 12, 2009 - 4:46 pm UTC

If you have IO correctly set up, archiving or noarchive should run *the same*

archiving is just copying a file from A to B, as long as lgwr has sufficient IO capabilities to write redo and arch has sufficient to read and write it - it happens in the background.


A reader, February 12, 2009 - 7:50 pm UTC

Hi Tom,

In that case, there is no benefit in terms of performance improvement during DW batch loading in NOARCHIVELOG mode? If that is the case, is it a myth that NOARCHIVELOG mode can improve data warehouse batch loading performance?
Tom Kyte
February 16, 2009 - 10:06 am UTC

Some operations that generate redo in archive log mode - such as a create index or a create table as select - will not generate it in noarchivelog mode.

That is - in archive log mode, create table as select generates redo for the table being created. In noarchivelog mode - it does not.

Now that said, the fact that it creates redo should not necessarily impact the performance of the create table as select. As the redo is generated, the create table will put it into the redo log buffer, and LGWR is constantly emptying this buffer. As long as lgwr empties it as fast as you fill it - it'll just happen, without necessarily slowing you down. And further, as long as arch can archive the file without impeding your process (eg: arch can keep up, you never see "cannot allocate new log, archival required" in the alert log), it won't slow you down either.

Key to both of those things happening: sufficient IO capabilities for LGWR and ARCH.

The benefit? You are recoverable.

All I'll say is:

if you run in noarchive log mode, please expect to lose EVERYTHING since your last full cold backup. You must be able to recover whatever you want via any means you want - but you MUST expect to LOSE EVERYTHING (raid 5 - don't care, raid 10 - don't care - raid 42 - I don't care, you will lose everything one day, only a matter of time) and you must have some process to recover what you feel you must recover.

It is people running in noarchive log mode saying "oh, the disk is really good, we won't lose it, no worries" that are frightening.

A reader, February 21, 2009 - 4:40 pm UTC

I wanted to turn my database to archivelog mode.
I did
1)shutdown abort
2)startup mount;
3)alter database archivelog;

I got the error

ORA-00265:instance recovery required, cannot set ARCHIVELOG mode

Can you explain what caused this error? And also can you let me know what needs to be done to turn the database into archivelog mode?
Tom Kyte
February 21, 2009 - 9:33 pm UTC

umm, lose the abort. You crashed the instance, it needs recovery.

shutdown immediate;
startup mount
alter database archivelog;
alter database open;

A reader, February 24, 2009 - 8:44 am UTC

What are the steps I should take to recover an instance?
Tom Kyte
February 24, 2009 - 4:52 pm UTC

step 1: read the documentation

step 2: practice

you need to do step1 and step2 well in advance of actually needing to recover a database (instances recover themselves, you just start them up)


Global Consult Fouad, June 26, 2012 - 10:51 am UTC

Hi Tom,
Our third party application runs on Oracle AS 10g and an Oracle database. The oracle database is in archivelog mode. The PMWO process of the third party takes over 8 hours to complete, and this process is generating a lot of archive log files on the database server -- 90 GB of archive logs in a span of 8 hrs. And sometimes this process fails as the archive logs consume the entire diskspace on the server.

We cannot change the database to noarchive log mode, as we are required to keep production databases in archivelog mode. Also adding additional diskspace is not an option – because currently the system has only 30 % of data. And this surge in archivelogs for this particular process is causing concern.

Question # 1:

How can we tune the PMWO process to minimize the generation of archive logs? This process is triggering a lot of DML ( inserts, updates) in the tables. Hence the generation of archive logs. Is it possible to modify the code for PMWO process so that DML happens in nologging mode? Please advise impact if this can be implemented.

Question # 2:

If Question # 1 is not appropriate, is there any way to solve the problem with a properly configured system while keeping the oracle database in archivelog mode?

Tom Kyte
June 26, 2012 - 11:13 am UTC

DML always, always always happens with logging - there is no way to avoid that.

(insert /*+ APPEND */ can skip that, but that is a direct path load, not useful in a slow by slow process which this sounds like it is)

describe to me what this pmwo process does. It sounds like a slow by slow process that does things in the least efficient manner. If we could turn some of the slow by slow processing into set based processing (and even set based DDL), it would generate far less redo and take far less time.


Global Consult Fouad, June 27, 2012 - 8:47 am UTC

Hi Tom, I would like to thank you for your reply.
Regarding Question # 2 of my previous review: if we have the right number of devices and IO channels, can't that solve the problem?

As a temporary solution, the End user has to inform IT department before proceeding with the PM WO Generation. Then, IT clears the arch log and asks the User to proceed with the generation. This way, the arch log does not reach its maximum allocated disk space, so the database does not hang and the third party application is not shut down. And the process is completed with all expected PM WOs being properly generated.

The PMWO process is an automated process to create work requests for the Preventive Maintenance schedule. This process is a bulk creation of work requests that are due in a certain range of time. It creates almost 20,000 records in 8+ hours, every 15 days.

Tom Kyte
June 27, 2012 - 9:47 am UTC

your problem is "we don't have enough disk"

a solution absolutely is "get more disk", of course.

"This process is a bulk creation of work requests that are due in a certain range of time. It creates almost 20,000 records in 8+ hours, every 15 days."

it sounds like it was written to be as inefficient as possible. To take 8 hours to create 20,000 records seems an absurd amount of time - and given the amount of redo it generates, I'm quite sure it is doing things as slow by slow (row by row by slow row by slow row) as possible.

Mount database for archivelog mode?

Chakra, July 21, 2012 - 12:44 am UTC


I would like to know why it is required to put a database in the mount stage in order to configure archivelog mode?
(Internal thoughts would be appreciated)

Tom Kyte
July 30, 2012 - 8:07 am UTC

we need the control file to be opened so we can update it. mounting a database does that for us, we have the control files ready to go.

A reader, August 04, 2012 - 3:10 am UTC


The controlfile is also open when the database is in open mode.
It could update the controlfile when it's open, right?
Why does it specifically need to be updated in the mount stage?

Tom Kyte
August 17, 2012 - 10:08 am UTC

because when the database is open, it is already processing transactions - we need the controlfile OPEN but the database CLOSED to turn on this feature. Transactions cannot be happening when we enable archivelog mode.
