Hi Connor (et al),
We are having a similar issue where we are seeing a lot of entries in the alert log like this:
Thread 1 cannot allocate new log, sequence 160001
Private strand flush not complete
This is a data warehouse, so a lot of redo is generated during the ETL run, but outside that window there is almost none (we're not using writeback etc.). I have a theory that we could turn logging off on the data tablespace anyway, because realistically we'd never recover the DB from a backup - we'd just run a new full ETL. That's another conversation, though; I need to resolve this issue first, as we think it might be causing JDBC timeouts (as per note 1543710.1).
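For what it's worth, the change I had in mind is something like this (tablespace name is illustrative) - though as I understand it, NOLOGGING only suppresses redo for direct-path operations, so conventional DML would still log in full:

-- Set the tablespace default to NOLOGGING (name is illustrative).
-- Only direct-path work (e.g. INSERT /*+ APPEND */, CTAS) is affected;
-- conventional inserts/updates/deletes still generate full redo.
alter tablespace etl_data nologging;

-- Confirm the setting
select tablespace_name, logging from dba_tablespaces;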
Looking at the switches per hour on a daily basis, we have numbers like this:
DAY 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
-------------------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
20-JUN-2018 1 4 3 6 1 0 4 20 5 10 11 6 0 0 0 0 0 0 0 0 9 1 1 1
21-JUN-2018 1 2 4 2 6 0 0 24 5 10 13 3 0 0 0 0 0 0 0 0 9 1 1 1
22-JUN-2018 1 3 0 1 0 0 0 0 0 3 4 4 15 1 0 0 0 0 0 0 9 1 0 1
23-JUN-2018 1 4 3 7 1 0 9 15 14 8 7 0 0 0 0 0 0 0 0 0 9 2 0 1
24-JUN-2018 1 4 3 6 1 0 7 17 0 15 9 8 0 0 0 0 0 0 0 0 9 1 0 1
25-JUN-2018 1 4 3 5 3 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Ignore the actual times - we're not fully live yet, so the ETL runs at a different time of day than it will in production.
Anyway, as you can see, there are periods with around 20 switches per hour - one every three minutes or so. That seems quite high?
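For reference, the per-hour counts above came from v$log_history; a simplified (unpivoted) sketch of the kind of query used would be:

-- Count log switches per hour, one row per day/hour
select trunc(first_time, 'HH24') hour_start,
       count(*)                  switches
from   v$log_history
group  by trunc(first_time, 'HH24')
order  by hour_start;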
Taken during idle time:
select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------ ------------ ------------------
1 1 160044 524288000 512 2 YES INACTIVE 5.9703E+12 26-JUN-18 5.9703E+12 26-JUN-18
2 1 160045 524288000 512 2 YES INACTIVE 5.9703E+12 26-JUN-18 5.9703E+12 26-JUN-18
3 1 160046 524288000 512 2 NO CURRENT 5.9703E+12 26-JUN-18 2.8147E+14
So each log is 500MB - would you recommend maybe using something like 2GB instead?
Thanks
John
June 27, 2018 - 1:21 am UTC
Yeah, one every couple of mins is getting a little fast for my liking.
You could make them bigger (2G) and then set archive_lag_target to ensure you get log switches at regular intervals during quiet time.
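Something along these lines - file paths, group numbers and the lag target value are illustrative, and you'd only drop the old groups once they show as INACTIVE in v$log:

-- Add larger groups (two members each; paths are illustrative)
alter database add logfile group 4
  ('/u01/oradata/redo04a.log','/u02/oradata/redo04b.log') size 2g;
alter database add logfile group 5
  ('/u01/oradata/redo05a.log','/u02/oradata/redo05b.log') size 2g;
alter database add logfile group 6
  ('/u01/oradata/redo06a.log','/u02/oradata/redo06b.log') size 2g;

-- Switch out of the old groups, then drop each once it is INACTIVE
alter system switch logfile;
alter database drop logfile group 1;   -- repeat for groups 2 and 3

-- Force a switch at least every 30 minutes (value is in seconds)
alter system set archive_lag_target = 1800 scope=both;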