Question and Answer

Connor McDonald

Thanks for the question, Grandifer.

Asked: August 25, 2025 - 12:56 pm UTC

Last updated: August 27, 2025 - 3:22 am UTC

Version: 19.28

Viewed 100+ times

You Asked

I have an interesting request from our Warehouse Team.
They require us to enable supplemental logging on ALL COLUMNS on core tables; we already have force logging for our DG.
Our DB size is 60TB.
Interestingly, our SLAs on processing are 3-4 seconds.

Can I have both supplemental and force logging running at the same time, and what are the potential performance impacts?

and Connor said...

Can I have both supplemental and force logging running at the same time

Yes
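
Force logging and supplemental logging are independent settings, so enabling one doesn't conflict with the other. As a quick sanity check (output formatting will vary with your column widths):

SQL> alter database force logging;

Database altered.

SQL> alter database add supplemental log data;

Database altered.

SQL> select force_logging, supplemental_log_data_min
  2  from   v$database;

FORCE_LOGGING   SUPPLEMENTAL_LOG_DATA_MIN
--------------- -------------------------
YES             YES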

and what are potential performance impacts

Anywhere from negligible to extreme :-)

Remember that redo is basically a set of "instructions" on how to apply changes to a block. Supplemental logging adds more data into that mix, and depending on your workload, that extra data can have anywhere from a negligible to a dramatic effect.

Here are a couple of scenarios showing why you need to test this carefully against your particular workload.

1) Simple INSERT

The redo must (conceptually) already contain all of the new data, so adding supplemental log data won't add much redo volume.

SQL> create table t as select * from dba_objects;

Table created.

SQL> set autotrace on stat
SQL> insert into t
  2  select * from dba_objects;

95476 rows created.


--
-- Without supplemental log data
--

Statistics
----------------------------------------------------------
         37  recursive calls
      15792  db block gets
      47076  consistent gets
          0  physical reads
   15862524  redo size
        204  bytes sent via SQL*Net to client
         41  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      95476  rows processed

--
-- With supplemental log data
--
Statistics
----------------------------------------------------------
         34  recursive calls
      15800  db block gets
      47077  consistent gets
          0  physical reads
   15937976  redo size
        204  bytes sent via SQL*Net to client
         41  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      95476  rows processed

That's a difference of around 75KB on roughly 15.8MB of redo, ie, less than half a percent.




2) UPDATE

However, consider an update that changes only a small part of the row. The redo only needs to cover that particular value, whereas supplemental logging must bring along the unchanged columns as well.


--
-- Without supplemental log data
--
SQL> update t
  2  set object_id = 100;

190951 rows updated.


Statistics
----------------------------------------------------------
         32  recursive calls
     413454  db block gets
       7985  consistent gets
          1  physical reads
   95715028  redo size
        204  bytes sent via SQL*Net to client
         41  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     190951  rows processed

--
-- With supplemental log data
--
Statistics
----------------------------------------------------------
         63  recursive calls
     211051  db block gets
       3966  consistent gets
         12  physical reads
  112953588  redo size
        204  bytes sent via SQL*Net to client
         41  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     190951  rows processed


That's roughly an 18% bump, but depending on the number of columns and how they are populated, it could be almost any percentage. (Consider a wide table with hundreds of columns where you update just one of them: the redo could double, triple, or worse.)

So a few things...

1) It would definitely be worth asking the team for a justification (presumably they are looking at some sort of replication technology like GoldenGate or similar).
2) Try to pick the lowest level of supplemental logging you can get away with.
3) Test rigorously before turning it on.
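
On point 2, supplemental logging ranges from minimal (just enough to identify chained rows and the like) up to all columns, and it can also be scoped to individual tables rather than the whole database, eg:

SQL> alter database add supplemental log data;                        -- minimal
SQL> alter database add supplemental log data (primary key) columns;  -- primary key columns
SQL> alter database add supplemental log data (all) columns;          -- everything
SQL> alter table t add supplemental log data (all) columns;           -- scoped to one table

If your replication tool only needs row identification, primary key level supplemental logging on the core tables may be enough, and the redo overhead will be far smaller than ALL columns.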