Question and Answer

Connor McDonald

Thanks for the question, Jayadevan.

Asked: December 04, 2024 - 6:20 am UTC

Last updated: December 09, 2024 - 3:28 am UTC

Version: 19c

Viewed 1000+ times

You Asked

Hello,
We have a batch process that commits on completion of each insert, which involves just a few bytes. It should ideally be done in batches (commit after processing 1000 entities etc). It is a legacy application and modifying the code is not possible. Will reducing the log_buffer size improve performance? Right now it is at 28 MB. It is a large server with 700+ GB RAM and SGA set to 220 GB. Since each commit involves writing only a few bytes and if flushing the REDO involves flushing the entire log_buffer, reducing the size of log_buffer will improve performance. But I read somewhere that not all log_buffer gets flushed and only minimal data gets flushed to disk (online redo) on commit. Requesting expert opinion.

and Connor said...

It *might* yield some benefit, but it is likely to be on the order of 1 or 2 percent at best. You could take a look at your 'log file parallel write' times and compare them with 'log file sync' (a sample query below) to see what impact, and hence benefit, you might gain, but I expect you are going to be disappointed :-)
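As a rough first check, something like the following could be used (a minimal sketch against v$system_event; these are instance-wide averages since startup, so comparing AWR snapshots around your batch window would be more precise). If the average 'log file sync' wait is close to the average 'log file parallel write' wait, the time is going into the physical redo write itself, and shrinking the log buffer will not buy you much.

-- average wait per event, in microseconds
select event,
       total_waits,
       round(time_waited_micro / 1000) as time_waited_ms,
       round(time_waited_micro / nullif(total_waits, 0)) as avg_wait_us
from   v$system_event
where  event in ('log file sync', 'log file parallel write');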

A couple of things you could explore:

1) Use batch (non-immediate) commits, which let the session return from COMMIT without waiting for the redo write to complete. See https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/COMMIT.html and the first sketch below.
2) Try using a SQL patch to inject a hint that enables the Memoptimized Rowstore fast ingest path (second sketch below).
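For option 1, since the application code cannot change, the COMMIT_LOGGING and COMMIT_WAIT initialization parameters alter the default commit behaviour without touching the code. A minimal sketch, setting them instance-wide; a logon trigger scoped to just the batch user would be a more targeted alternative:

-- batch the redo writes and do not make sessions wait for LGWR on commit
-- WARNING: committed transactions can be lost if the instance crashes
-- before the buffered redo reaches the online logs
alter system set commit_logging = batch scope=both;
alter system set commit_wait = nowait scope=both;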
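For option 2, a sketch of injecting the MEMOPTIMIZE_WRITE hint via DBMS_SQLDIAG.CREATE_SQL_PATCH; the table name and SQL_ID here are placeholders for your batch insert, and the target table must first be enabled for fast ingest. Note that with fast ingest the rows are buffered in memory and written by background processes, so a commit is no longer a durability guarantee, which is exactly the caveat below.

-- enable the table for Memoptimized Rowstore fast ingest
-- (BATCH_TARGET is a placeholder for your table)
alter table batch_target memoptimize for write;

-- patch the hint into the existing insert statement
-- ('abcd1234efgh5' is a placeholder SQL_ID)
declare
  l_patch varchar2(128);
begin
  l_patch := dbms_sqldiag.create_sql_patch(
               sql_id    => 'abcd1234efgh5',
               hint_text => 'MEMOPTIMIZE_WRITE',
               name      => 'fast_ingest_patch');
end;
/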



Important!!! Both of these options head down the path of potential transaction loss in the event of an instance crash, so restartability/replayability becomes a critical component of your batch process.