Thanks for the question, Seung Chul.
Asked: April 29, 2025 - 6:09 am UTC
Last updated: August 13, 2025 - 5:18 am UTC
Version: 19.25.0.0
You Asked
Hello.
We just upgraded our main database from 12cR2 to 19c (19.25, to be exact).
Some of the parameters changed, including log_buffer.
Previously we set the log_buffer parameter to 20M and also specified it in the parameter file.
After the upgrade we removed it, because our consultant recommended against setting it manually.
So now our log_buffer parameter has its default value, 990080K. Almost 1GB.
And now we are suffering a high log switch rate and 'checkpoint not complete' messages in the DB alert log.
In our 3-node RAC we had 5 x 1G log files in each thread, but after the upgrade we had to create more, larger log files.
Currently we have 5 x 1G x 3 threads plus 5 x 3G x 3 threads, so it's 30 files now, added along the lines of the sketch below.
Right now the checkpoint messages no longer happen, and the switch count is back to the same level as in 12c.
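For reference, here is a sketch of how the extra groups were added (the group numbers and the '+REDO' disk group name are illustrative, not our real names):

-- add one 3G group per thread; repeated until each thread had five 3G groups
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 16 ('+REDO') SIZE 3G;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 21 ('+REDO') SIZE 3G;
ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 26 ('+REDO') SIZE 3G;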
The problem is, some docs and web pages advise DO NOT SET the log_buffer parameter,
but the default value seems definitely too large compared to the documented default range (2M-32M).
I know there's no Shangri-La (perfect fit) value for a database parameter, but I have to find a right, or at least better, answer.
Could you please tell me which is the better value? The 32M maximum, or the Oracle-derived default of nearly 1G?
--- 2025.05.08 update:
Currently our system has:
OS : AIX 7.2
CPUs : 40 physical cores (cpu_count = 320)
Memory :
- memory_target : 0
- sga_target : 0
- pga_aggregate_target : 30G
- shared_pool_size : 25G
- db_cache_size : 100G
- java_pool_size, large_pool_size : 2.5G
- streams_pool_size : 2G
Total DB size : data files 40.41TB, segments 35.15GB (ASM)
and Connor said...
Thanks for your patience
From the docs:

"The log buffer size depends on the number of redo strands in the system. One redo strand is allocated for every 16 CPUs and has a default size of 2 MB. Oracle allocates a minimum of 2 redo strands per instance. When the log buffer size is not specified, any remaining memory in the redo granules is given to the log buffer."

But asking around internally, the docs are a slight simplification. For (non-Exadata) machines, the algorithm is more like:

greatest(2MB, 128K * cpu_count) * round(cpu_count / 16)
That would bring you to around 800M based on your cpu_count of 320, but with a 25G SGA, memory is carved up into 64M granules, so it's quite possible that you're rounding up to be near your 1G mark.
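As a quick sanity check, the arithmetic can be run directly in SQL (a sketch only; the 320 is your reported cpu_count plugged into the formula above):

-- plug cpu_count = 320 into the formula; result is in MB
select greatest(2*1024*1024, 128*1024*320) * round(320/16) / 1024 / 1024 as approx_log_buffer_mb
from dual;

-- greatest(2MB, 40MB) = 40MB per strand group, times round(320/16) = 20 groups, gives 800MB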
And yes, this *can* have an impact on your log file switches. We reserve some space in the redo log files equivalent to some of the memory in the log buffer, which means a large log buffer might trigger much more frequent log switches. The way to detect this is to check whether the archived redo logs are smaller than the online redo logs, i.e., whether they seem to switch before they are full.
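If you want to check that on your system, something along these lines should do it (a sketch comparing recent archived log sizes against the configured online log sizes):

-- sizes of archived logs produced in the last day, in MB
select thread#, sequence#,
       round(blocks * block_size / 1024 / 1024) as arch_mb
from   v$archived_log
where  first_time > sysdate - 1
order  by thread#, sequence#;

-- configured online redo log sizes, in MB, for comparison
select thread#, group#, round(bytes / 1024 / 1024) as online_mb
from   v$log
order  by thread#, group#;

If the archived copies are consistently much smaller than the online logs, the logs are switching well before they fill.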
Jonathan Lewis did an excellent exploration of this here:
https://www.red-gate.com/simple-talk/databases/oracle-databases/are-your-oracle-archived-log-files-much-too-small/
In a nutshell, the recommendation is (as you've already surmised) to use bigger log files. I'd recommend against reducing the log buffer, because you could (theoretically) have 320 cores' worth of activity all smashing into the log buffer if your system got really busy.
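Once the bigger files are in place, it's worth keeping an eye on the switch rate over time; a simple sketch against v$log_history:

-- log switches per hour per thread over the last 24 hours
select trunc(first_time, 'HH24') as hour, thread#, count(*) as switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24'), thread#
order  by 1, 2;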