

Question and Answer

Connor McDonald

Thanks for the question.

Asked: March 02, 2010 - 5:00 pm UTC

Last updated: October 05, 2017 - 4:08 pm UTC

Version: 11.2


You Asked


We have archive log mode enabled on an 11.2 database.

1) How do we decide the optimal size for redo log files?

2) How do we resize the redo log files to 512 MB from their current size of 1 GB? Is it possible? Will this automatically reduce the size of the newly generated archived logs too?

3) I created the redo log files as 1 GB, so now even the archived logs are 1 GB each.
A 20 GB directory allotted to archived logs is now getting filled up fast.

a) Did this fast filling happen because of the file size selected, or because of activity in the database?


b) How often do redo logs get filled?

c) Even if a redo log is not filled, does it still get archived when we move on to the next redo log file?




Thanks

and Tom said...

1)

http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo.htm#ADMIN007

2) you cannot resize them - you have to add new ones that are 512mb and then drop the old 1gb ones.

And while it will change the NUMBER of archives, you'll have the same amount of archived information! Instead of having 500 1gb files, you'll have 1,000 512mb files. You won't really be "saving" anything there.
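A minimal sketch of the add-then-drop approach (the group numbers, member paths, and group count here are placeholders - adapt them to your own V$LOG / V$LOGFILE layout):

    -- sketch only: group numbers and member paths are hypothetical
    alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 512m;
    alter database add logfile group 5 ('/u01/oradata/ORCL/redo05.log') size 512m;
    alter database add logfile group 6 ('/u01/oradata/ORCL/redo06.log') size 512m;

    -- switch and checkpoint until the old groups show INACTIVE in v$log,
    -- then drop them:
    alter system switch logfile;
    alter system checkpoint;
    alter database drop logfile group 1;
    alter database drop logfile group 2;
    alter database drop logfile group 3;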

3) and? so? I mean, you need what you need.

a) it is based on the redo generated, regardless of the redo log file size. You generate X-bytes of redo every second. If you use a 1gb file or a 512mb file, you still generate X-bytes of redo every second.

b) when X bytes of redo have been written to a file that is X bytes in size... at the same redo generation rate, a 512mb file simply fills twice as often as a 1gb one. The file gets filled when the file gets filled - not sure what else to say?
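If you want to see how often your logs are actually filling, you can count log switches per hour from V$LOG_HISTORY (a sketch - the one-day window is arbitrary):

    -- log switches per hour over the last day
    select to_char(first_time, 'yyyy-mm-dd hh24') hr,
           count(*) switches
      from v$log_history
     where first_time > sysdate - 1
     group by to_char(first_time, 'yyyy-mm-dd hh24')
     order by 1;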

c) only in certain cases, and then we archive ONLY the data that was written to the file - not the entire file. See archive_lag_target (a parameter) for an example of when we would switch log files before the file was filled.
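For example (a sketch - the 1800-second value is purely illustrative):

    -- force a log switch at least every 30 minutes,
    -- even if the current redo log file is not yet full
    alter system set archive_lag_target = 1800 scope = both;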



Comments

How often should a database (10g) archive its logs?

A reader, September 03, 2010 - 4:02 pm UTC

Some documents say every 15 minutes; some even say every 30 minutes.

My question is: does this really matter from a performance point of view? For example, one database switches its log every 2 minutes and another switches every 15 minutes. Does the 2-minute one impact database performance? Why? (Let's say we have 6 groups of redo files.)
Tom Kyte
September 09, 2010 - 7:29 pm UTC

every time you switch logs - we initiate a checkpoint.

the more often you checkpoint, the more time you spend writing blocks to disk. If you checkpoint less often, we can cache blocks longer.

checkpoint too infrequently, though, and you'll spend more time during instance recovery - your mean time to recover from a failure will be loooonnnnnggggger.

There is a fine balance between reducing the frequency of writing blocks to disk and bounding your recovery time.

are you suffering from lots of waits on IO that are in some part caused by DBWR writing to disk with a small archive switch period? If so, the switch period could be part of the problem. If not, the switch period is fine.

are you suffering from long startup times after a failure, and is this impacting your ability to do business? Then your switch period might be too long (or even better - your fast_start_mttr_target parameter should be looked at).


as long as you are not seeing "checkpoint not complete" or "archive not complete" in your alert log, your switching is probably fine - unless you fall into one of the categories above.
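To see where you stand, you can compare the estimated instance recovery time with your target (a sketch - the 300-second target below is only an illustration):

    -- how long would instance recovery take right now, vs. the target?
    select estimated_mttr, target_mttr from v$instance_recovery;

    -- bound instance recovery to roughly five minutes
    alter system set fast_start_mttr_target = 300 scope = both;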

Link gets a 404

Suz Olliver, October 03, 2017 - 9:51 pm UTC

I tried following the link but it gave a 404 not found.
Connor McDonald
October 05, 2017 - 4:08 pm UTC

I've updated the link.
