
Connor McDonald

Thanks for the question, Terry.

Asked: July 28, 2017 - 8:49 am UTC

Last updated: June 27, 2018 - 1:21 am UTC

Version: 11.2.0.4

Viewed 10K+ times

You Asked

Hi,

I've been a DBA for a very long time (since 7.3.4), and in the old days I remember rules of thumb stating that you should never switch logs more than 'x' times an hour. That was of course because the checkpoint triggered at each log switch took up a lot of resources. Nowadays we have incremental checkpointing, so my question is: is there still a reason to avoid switching logs too often? If so, what is that reason, and is there still a guideline or rule of thumb about how often you should switch logs? References would also be appreciated if you have them :)

and Connor said...

There's a huge list of checkpoint types nowadays (not just full and incremental) - we do them for all sorts of things now - but there is still the need to do "enough" of a checkpoint at log switch time to ensure that any dirty buffers protected by a redo log that is about to be reused have been written to disk first.

That said, in normal operation, with incremental checkpointing in action you would expect the amount of *necessary* work left to do at log switch time to be manageable, and thus a higher switch frequency could be sustained.

But conversely - what would be the benefit? With Data Guard standbys now receiving redo in near real time, the old adage of "we're only as up to date as our last archive" becomes less of an issue.
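
To get a feel for how log switch checkpoints are behaving on a given system, one quick sanity check - a minimal sketch, assuming access to the dynamic performance views - is to compare the background checkpoint counters:

-- Background checkpoints started vs. completed since instance startup.
-- A persistently growing gap suggests checkpoints are not keeping pace
-- with log switches ("checkpoint not complete" territory).
select name, value
from   v$sysstat
where  name in ('background checkpoints started',
                'background checkpoints completed');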


Comments

Recommended frequency?

Terry, July 30, 2017 - 11:15 am UTC

Hi Connor,

Thanks for your response! However, I'm still wondering: is there 'still a guideline or rule-of-thumb about how often you should switch logs'?

Terry
Connor McDonald
July 31, 2017 - 2:39 am UTC

If you are using fast start MTTR (FAST_START_MTTR_TARGET), then you could look at OPTIMAL_LOGFILE_SIZE in v$instance_recovery.
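
For example (a sketch, assuming FAST_START_MTTR_TARGET is set so that the view is populated):

-- OPTIMAL_LOGFILE_SIZE is reported in megabytes: the redo log size Oracle
-- estimates would stop log switches driving checkpoints ahead of the MTTR target.
select optimal_logfile_size
from   v$instance_recovery;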

What if we're not using fast start mttr

John Keymer, June 26, 2018 - 9:50 am UTC

Hi Connor (et al),
We are having a similar issue where we are seeing a lot of entries in the alert log like this:

Thread 1 cannot allocate new log, sequence 160001
Private strand flush not complete


This is a data warehouse, so there is a lot of redo being generated during the ETL run (but as it's not using writeback etc., there is virtually nothing outside of that window). I have a theory that we might be able to turn logging off on the data tablespace anyway, because realistically we'd never recover the DB from a backup - we'd just run a new full ETL. That's another conversation though, and I need to resolve this issue first, as we think it might be causing JDBC timeouts (as per note 1543710.1).
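
As an aside, if that route were ever taken, the statement itself is trivial - a sketch with a hypothetical tablespace name - though NOLOGGING at the tablespace level only changes a default attribute: conventional DML still generates redo, and only direct-path operations such as INSERT /*+ APPEND */ or CREATE TABLE AS SELECT can skip it.

alter tablespace dw_data nologging;  -- dw_data is a hypothetical name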

Looking at the switches per hour on a daily basis, we have numbers like this:

DAY                  00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
-------------------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
20-JUN-2018            1   4   3   6   1   0   4  20   5  10  11   6   0   0   0   0   0   0   0   0   9   1   1   1
21-JUN-2018            1   2   4   2   6   0   0  24   5  10  13   3   0   0   0   0   0   0   0   0   9   1   1   1
22-JUN-2018            1   3   0   1   0   0   0   0   0   3   4   4  15   1   0   0   0   0   0   0   9   1   0   1
23-JUN-2018            1   4   3   7   1   0   9  15  14   8   7   0   0   0   0   0   0   0   0   0   9   2   0   1
24-JUN-2018            1   4   3   6   1   0   7  17   0  15   9   8   0   0   0   0   0   0   0   0   9   1   0   1
25-JUN-2018            1   4   3   5   3  14   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0


Ignore the actual times - that's because we're not fully live yet, so we're running the load at a different time.

Anyway, as you can see, there are periods where around 20 switches are done per hour, so one every 3 mins or so. This seems quite high?
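
A report like the one above can be produced from v$log_history - a minimal sketch, assuming nothing beyond the standard dynamic performance views:

-- Log switches per hour, taken from v$log_history.
select trunc(first_time, 'HH24') as switch_hour,
       count(*)                  as switches
from   v$log_history
group by trunc(first_time, 'HH24')
order by 1;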

Taken during idle time:

select * from v$log;



    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME         NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------ ------------ ------------------
         1          1     160044  524288000        512          2 YES INACTIVE            5.9703E+12 26-JUN-18            5.9703E+12 26-JUN-18
         2          1     160045  524288000        512          2 YES INACTIVE            5.9703E+12 26-JUN-18            5.9703E+12 26-JUN-18
         3          1     160046  524288000        512          2 NO  CURRENT             5.9703E+12 26-JUN-18            2.8147E+14



So each log is 500MB - would you recommend maybe using something like 2GB instead?

Thanks
John
Connor McDonald
June 27, 2018 - 1:21 am UTC

Yeah, one every couple of mins is getting a little fast for my liking.

You could make them bigger (2G) and then set archive_lag_target to ensure you get log switches at regular intervals during quiet time.
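
For example (a sketch only - the group number, file paths and the 30-minute lag are assumptions, not a prescription):

-- add larger (2 GB) groups alongside the existing 500 MB ones
alter database add logfile group 4
  ('/u01/oradata/DW/redo04a.log', '/u02/oradata/DW/redo04b.log') size 2g;

-- once an old group shows STATUS = 'INACTIVE' in v$log it can be dropped
alter database drop logfile group 1;

-- force a log switch at least every 30 minutes (the value is in seconds)
-- so archived redo keeps flowing during quiet periods
alter system set archive_lag_target = 1800 scope=both;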


Version

John Keymer, June 26, 2018 - 9:52 am UTC

Sorry, we are 11.2.0.4.
