  • "Checkpoint not complete" messages with log ship to standby database, Oracle 19 (Standard edition)


Question and Answer

Connor McDonald

Thanks for the question, Grigory.

Asked: November 03, 2020 - 8:00 am UTC

Last updated: November 20, 2020 - 4:01 am UTC

Version: Oracle 19 Standard edition

Viewed 1000+ times

You Asked

Hello,

I found that there are "Checkpoint not complete" messages at each log switch in the alert log of our Oracle Database 19c Standard Edition 2, Release 19.0.0.0.0.
We are using log shipping to a standby database, so we force a log switch every 2 minutes with the archive_lag_target parameter (see PFILE below).
Our checkpoints seem to be quite long (~4 minutes), and they also appear to be incomplete all the time, regardless of the load on the database.
On the other hand, our processes can run DML statements normally without waiting a significant time on the "Checkpoint not complete" event.

That is why I kindly ask you to explain:

1. What is the meaning of the "Checkpoint not complete" messages in this case?
2. Why do our DML statements not hang during an incomplete checkpoint?
3. Would it make sense to set up incremental checkpoints every N seconds with the log_checkpoint_timeout parameter?

Thanks in advance,



===============================================================================
alert.log:
===============================================================================
Beginning log switch checkpoint up to RBA [0x100a.2.10], SCN: 39289565
2020-09-25T14:50:37.557027+02:00
Thread 1 advanced to log sequence 4106 (LGWR switch),  current SCN: 39289565
  Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log
2020-09-25T14:50:37.709741+02:00
ARC1 (PID:12141): Archived Log entry 22746 added for T-1.S-4105 ID 0xf432ed53 LAD:1
2020-09-25T14:52:40.171692+02:00
Thread 1 cannot allocate new log, sequence 4107
Checkpoint not complete
  Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log
2020-09-25T14:52:40.469412+02:00
Completed checkpoint up to RBA [0x1009.2.10], SCN: 39288123
2020-09-25T14:52:43.269670+02:00
Beginning log switch checkpoint up to RBA [0x100b.2.10], SCN: 39290953
2020-09-25T14:52:43.270757+02:00
Thread 1 advanced to log sequence 4107 (LGWR switch),  current SCN: 39290953
  Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log
2020-09-25T14:52:43.281539+02:00
ARC0 (PID:12135): Archived Log entry 22747 added for T-1.S-4106 ID 0xf432ed53 LAD:1
2020-09-25T14:54:42.798805+02:00
Thread 1 cannot allocate new log, sequence 4108
Checkpoint not complete
  Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log
2020-09-25T14:54:43.377453+02:00
Completed checkpoint up to RBA [0x100a.2.10], SCN: 39289565
2020-09-25T14:54:45.873354+02:00
Beginning log switch checkpoint up to RBA [0x100c.2.10], SCN: 39291331

===============================================================================
Statspack:
===============================================================================
                                                             Avg          %Total
                                          %Tim Total Wait   wait    Waits   Call
Event                               Waits  out   Time (s)   (ms)     /txn   Time
---------------------------- ------------ ---- ---------- ------ -------- ------
log file sync                       2,363    0        143     60      1.4    3.2
direct path read                    2,266    0         69     30      1.3    1.5
log buffer space                      123    0         42    344      0.1     .9
Failed Logon Delay                     32  100         33   1036      0.0     .7
log file switch (checkpoint            11    0          9    783      0.0     .2
direct path sync                       14    0          7    529      0.0     .2
log file switch completion             25    0          7    262      0.0     .1
direct path write                   1,360    0          6      5      0.8     .1

===============================================================================
PFILE:
===============================================================================

PROD.__data_transfer_cache_size=0
PROD.__db_cache_size=4177526784
PROD.__java_pool_size=0
PROD.__large_pool_size=16777216
PROD.__oracle_base='/opt/oracle'#ORACLE_BASE set from environment
PROD.__pga_aggregate_target=2147483648
PROD.__sga_target=6006243328
PROD.__shared_io_pool_size=134217728
PROD.__shared_pool_size=1627389952
PROD.__streams_pool_size=33554432
PROD.__unified_pga_pool_size=0
*.archive_lag_target=120
*.audit_file_dest='/data/oradata/PROD/adump'
*.audit_trail='NONE'
*.compatible='12.1.0.2'
*.control_files='/data/oradata/PROD/control01.ctl','/data/oradata/PROD/flash_recovery_area/control02.ctl'#Restore Controlfile
*.core_dump_dest='/data/oradata/PROD/cdump'
*.db_block_size=8192
*.db_domain='corp.de'
*.db_file_multiblock_read_count=128# 128*8K=1M
*.db_name='PROD'
*.db_recovery_file_dest='/data/oradata/PROD/flash_recovery_area'
*.db_recovery_file_dest_size=4G
*.db_unique_name='PROD'
*.diagnostic_dest='/data/oradata/PROD'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/data/oradata/PROD/archive'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=1
*.log_archive_min_succeed_dest=1
*.log_checkpoints_to_alert=TRUE
*.open_cursors=300
*.optimizer_adaptive_plans=FALSE
*.optimizer_adaptive_statistics=FALSE
*.pga_aggregate_target=2G
*.processes=600
*.recyclebin='off'
*.remote_login_passwordfile='EXCLUSIVE'
*.sec_protocol_error_further_action='(drop,10)'
*.sga_max_size=5992435712
*.sga_target=5992435712
*.shared_servers=0
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'


===============================================================================
Redo:
===============================================================================

SQL> SELECT * FROM V$LOG;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARCHIVED  STATUS                                           FIRST_CHANGE# FIRST_TIME      NEXT_CHANGE# NEXT_TIME           CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --------- ------------------------------------------------ ------------- --------------- ------------ --------------- ----------
         1          1       4126  104857600        512          1 YES       ACTIVE                                                39299332 25-SEP-20           39300899 25-SEP-20                0
         2          1       4127  104857600        512          1 NO        CURRENT                                               39300899 25-SEP-20         9.2954E+18                          0
         3          1       4125  104857600        512          1 YES       ACTIVE                                                39298149 25-SEP-20           39299332 25-SEP-20                0
   
SQL> SELECT * FROM V$LOGFILE;

    GROUP# STATUS                TYPE                  MEMBER                                             IS_RECOVE     CON_ID
---------- --------------------- --------------------- -------------------------------------------------- --------- ----------
         1                       ONLINE                /data/oradata/PROD/redo1_0.log                   NO                 0
         2                       ONLINE                /data/oradata/PROD/redo2_0.log                   NO                 0
         3                       ONLINE                /data/oradata/PROD/redo3_0.log                   NO                 0

and Connor said...

DML *is* probably hanging, but probably only for a tiny amount of time. For example:

2020-09-25T14:54:42.798805+02:00
Thread 1 cannot allocate new log, sequence 4108

2020-09-25T14:54:43.377453+02:00
Completed checkpoint up to RBA [0x100a.2.10], SCN: 39289565


so in less than half a second you were "back in business". It's unlikely anyone even noticed the blip.

Two-minute switches are very, very frequent (but I can see the motivation in terms of "home-grown Data Guard"). So a couple of things I'd be looking at:

a) a lot more log groups. Your checkpoint issues will come when we've been on group 1 and flipped through 2 and 3 all within 4 minutes (due to archive_lag_target). So I'd be looking at (say) 8 to 10 groups, which means a group will be reused in approximately 20 minutes, bringing it more into alignment with a "typical" redo log group scenario (see the sketch below).

b) if you're still seeing the messages, look at 'fast_start_mttr_target' to give the incremental checkpointing a push (also sketched below).
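For example, something along these lines. This is only a sketch: the group numbers, member paths and the 100M size are assumptions based on the V$LOG/V$LOGFILE output above, and the 300-second MTTR target is purely illustrative.

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/data/oradata/PROD/redo4_0.log') SIZE 100M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/data/oradata/PROD/redo5_0.log') SIZE 100M;
...and so on up to group 10, then:
SQL> ALTER SYSTEM SET fast_start_mttr_target = 300;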


Comments

Still get "Checkpoint not complete" independent on number of redo log groups

Grigory, November 12, 2020 - 11:26 am UTC

Hi Connor,
Thank you for the answer, but to be honest I am not really satisfied with it.

1. In the alert log we see a similar pattern for each SCN:
Beginning log switch checkpoint up to RBA [0x100a.2.10], SCN: 39289565
2020-09-25T14:50:37.557027+02:00
Thread 1 advanced to log sequence 4106 (LGWR switch),  current SCN: 39289565
  Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log
...
Checkpoint not complete

and then directly afterwards:
2020-09-25T14:54:43.377453+02:00
Completed checkpoint up to RBA [0x100a.2.10], SCN: 39289565


Is it normal that the checkpoint for a given SCN takes more than 4 minutes?

2. We always get a "Checkpoint not complete" message for each SCN, regardless of the number of redo log groups (we tried 3, 6 and 10 - no difference). Is this normal for a log shipping environment with a standby database?

3. And the 'fast_start_mttr_target' parameter you recommended is not applicable because, as I mentioned above, we have a Standard Edition database, and this parameter only works in Enterprise Edition...

I kindly ask you to review the issue.

Thanks in advance!

Connor McDonald
November 16, 2020 - 2:54 am UTC

Apologies for missing the Standard Edition mention - I skimmed over it too quickly.

I'm going to make a hypothesis about a potential cause - you can confirm it by checking some data on your system that wasn't provided in this question.

With archive lag target at "nnn", there are two likely scenarios:

1) Your redo logs are typically nearly full (let's say 90%) and then they switch due to archive_lag_target, OR

2) Your redo logs get nowhere near full (let's say 20-30%) and then they switch anyway due to archive_lag_target

In the latter case, there is a good chance we will not be checkpointing aggressively enough. Our checkpointing tries to do "as little as possible, as late as possible" because

a) "as little" - its IO work that doesn't contribute directly to online performance, and
b) "as late" - because if we change a block 10 times, and only checkpoint once then we didn't have to flush it 10 times but only once, which brings us back to (a) (less work).

So checkpointing will try to leave as much outstanding work as possible until we're getting close to "running out" of redo log. If you're in scenario (2) above, it never looks like we have come close to running out of redo, so we're very passive on checkpointing...and then "splat" - we see you've cycled around and want to reuse a redo log again.

Can you tell us what is in V$INSTANCE_RECOVERY.LOG_FILE_SIZE_REDO_BLKS?
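Something like this (just a sketch; the ACTUAL_REDO_BLKS and TARGET_REDO_BLKS columns are included only for comparison):

SQL> SELECT actual_redo_blks, target_redo_blks, log_file_size_redo_blks FROM v$instance_recovery;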

In any event, you indeed might want to tinker with log_checkpoint_timeout and log_checkpoint_interval (a sketch follows after the caveats below), but be aware that both have impacts (which is why we typically don't recommend them), i.e.

i) timeout - doesn't *guarantee* improved checkpointing because dirty buffer volume is not necessarily consistent per unit time, so you can still end up with the same issue
ii) interval - can bump your I/O volume because there's the probability we will (re)checkpoint the same blocks over and over - see (b) above.
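If you do experiment with them, it would look something like this. The values are purely illustrative, not recommendations; log_checkpoint_timeout is specified in seconds and log_checkpoint_interval in OS (redo) blocks.

SQL> ALTER SYSTEM SET log_checkpoint_timeout = 300;
SQL> ALTER SYSTEM SET log_checkpoint_interval = 100000;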

What real impact on performance do "Checkpoint not complete" events have?

Grigory, November 16, 2020 - 1:48 pm UTC

Hello Connor,
Thank you for the answer.

Our value for V$INSTANCE_RECOVERY.LOG_FILE_SIZE_REDO_BLKS is 829440.
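If I read the V$LOG output above correctly, with a 512-byte redo block size that works out to roughly 829440 * 512 bytes = ~405 MB, which is a lot larger than the 100 MB size of each of our online logs.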

We experience this issue on all our systems all the time, regardless of redo load: heavily stressed systems which at peak sometimes switch logs every 20-30 seconds, medium-load systems (a log switch every 90-120 seconds), and unloaded systems (test environments).

What real impact do these "Checkpoint not complete" events have on performance in our case? What is bad about the "default" behaviour now, and what happens if we do nothing to optimize for this issue?

And if we set log_checkpoint_timeout to, let's say, 30 seconds, would it create more stress by producing more I/O for incremental checkpoints, or would it significantly reduce the wait times for applications that could potentially hang on log switch events? What would be your expectation?

Thanks in advance,
Grigory

Connor McDonald
November 17, 2020 - 2:17 am UTC

The ultimate impact is blips on your response time.

Historically we didn't checkpoint except at log switches (which is why you'll see a LOT of information out on the net saying that's how it works - which is no longer the case). So you'd see spikes of I/O at these points and everything "stopped" whilst it happened.

Now (assuming you do nothing) you run that risk not at *every* log switch, but in the situation where you wrap around your redo logs, and the checkpoint position has not advanced far enough, because we're never going to jeopardise our recovery capability.

Setting the log_checkpoint... parameters is basically a cost/benefit tradeoff. You burn a little more I/O to ensure you don't get sporadic response time blips at log switch time. Only you can really make the call on which is more important to you.

Ultimately you're trying to implement near real time DR without using our near real time DR product :-) I'm not being critical - I totally understand the position, but there will always be some sort of compromise as a result.

near real time DR product

Grigory, November 17, 2020 - 8:12 am UTC

Hi Connor,
Thank you for the answer. Which near real time DR product do you mean? Enterprise Edition? How would it solve this issue?
Best regards,
Grigory
Connor McDonald
November 17, 2020 - 11:58 pm UTC

DataGuard.

We don't need aggressive log switches because we transmit redo to the standby pretty much as it's received on the primary. No archive_lag_target etc. required.
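For reference, the redo transport side of that is configured along these lines. This is only a sketch: the 'prodsb' service/unique name is an assumption, ASYNC is just one of the transport modes, and this requires Enterprise Edition.

SQL> ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=prodsb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prodsb';
SQL> ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';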

Here's a walk through


Data Guard and standby log file in Standard Edition

Grigory, November 19, 2020 - 9:43 am UTC

Hello Connor,
Thank you for the answer,
As I understood from your video, you are using Data Guard, which is part of EE, not of SE.
We set the archive_lag_target parameter to 2 minutes in order to ensure that our maximum data loss is no more than 2 minutes. Are there better solutions for minimizing data loss in SE with a standby database, without the need to set the lag?
Best regards,
Grigory
Connor McDonald
November 20, 2020 - 4:01 am UTC

Not that I know of. As I said:


"Ultimately you're trying to implement near real time DR without using our near real time DR product :-) ...as a result, there will always be some sort of compromise that occurs as a result."


I'm not sure what level of "real time-ness" some of the popular 3rd party products offer - maybe have a chat with

https://dbvisit.com/
