FBDA Process Constantly Generating Lots of Redo

Question and Answer

Connor McDonald

Thanks for the question.

Asked: March 23, 2022 - 11:04 am UTC

Last updated: March 25, 2022 - 3:19 am UTC

Version: 19.3

Viewed 1000+ times

You Asked

Hi Tom, we've enabled Flashback Data Archive on one table in our test environment and found that a lot of archived redo logs are being generated every minute, although the table is not being modified at all. After querying some dictionary views, we found that the FBDA background process is indeed generating a lot of redo every second and is probably responsible for the huge volume of archive data. Can you shed some light on the underlying reason and a potential solution? Thanks in advance! The code and result are attached.
select se.sid, se.program, s.name, st.value
  from v$statname s, v$sesstat st, v$session se
 where st.statistic# = s.statistic#
   and s.name = 'redo size'
   and st.sid = se.sid
 order by st.value desc;

       SID PROGRAM              NAME                                 VALUE
---------- -------------------- ------------------------------ -----------
       265 ORACLE.EXE (FBDA)    redo size                      16524318028
       132 ORACLE.EXE (W001)    redo size                       1989486996
        15 ORACLE.EXE (W003)    redo size                       1986251684
       400 ORACLE.EXE (W005)    redo size                       1916190820
        11 ORACLE.EXE (W006)    redo size                       1912236400
...

and Connor said...

If you check things like v$active_session_history or do some tracing, you'll see that the FBDA process has to continuously clean up historical information, both in the archive tables themselves and in some dictionary metadata tables. For example, if you set the retention to 1 year, then even with no modifications to the base table, data in the archive that becomes 1 year + 1 day old needs to be purged.
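If you want to see this for yourself, a query along these lines is one way to spot what the FBDA session is spending its time on. This is just a sketch: v$active_session_history requires the Diagnostics Pack licence, and ASH is sampled data, so the counts are approximate.

```sql
-- Sketch: top SQL statements sampled for the FBDA background process.
-- Requires the Diagnostics Pack licence to query v$active_session_history.
select sql_id, count(*) as samples
  from v$active_session_history
 where program like '%FBDA%'
 group by sql_id
 order by samples desc;
```

Running the sql_id values from that output through v$sql will typically show the recursive maintenance statements against the archive's internal tables.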

But ... *lots* of work has been put into Flashback Data Archive in the recent 19c RUs. Some very large customers hit boundary issues in high-volume environments, so some major efficiency rework has been done from 19.10 onwards. I'd strongly recommend going to the latest RU (I think it's 19.14 or 19.15 currently) if you're planning on using flashback.
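To check which RU your database is currently on before deciding whether to patch, a query like this against the SQL-level patch history is a reasonable starting point (a sketch; you could also run opatch lspatches from the Oracle home):

```sql
-- Sketch: list patches applied to this database, most recent last.
-- The description column for an RU includes its version, e.g. 19.15.
select patch_id, patch_type, action, status, description
  from dba_registry_sqlpatch
 order by action_time;
```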

Is this answer out of date? If it is, please let us know via a Comment
