RMAN - Full vs. Incremental - performance problem
Connor McDonald

Thanks for the question, Judy.

Asked: November 05, 2001 - 12:28 pm UTC

Last updated: April 26, 2019 - 12:04 am UTC

Version: 8.1.7.0.0

Viewed 10K+ times

You Asked

Hi. I am testing RMAN. I have run an incremental 0 and an incremental 1 cumulative test without any database activity in between the tests. I am using the OmniBack media manager with RMAN. I have three databases. Here are the timing results and other stats:

TYPE                      INSTANCE  TIME    MB BACKED UP  MB per Hr

incremental 0             1         12 min  2492          12460
incremental 0             2         13 min  3481          16071
incremental 0             3         11 min  2539          13852

incremental 1 cumulative  1         12 min  74            370
incremental 1 cumulative  2         9 min   13            84
incremental 1 cumulative  3         9 min   37            247

Why would the timings be basically the same? Why are the other stats so different and apparently poor for the incrementals?

and Tom said...

Incremental is useful to conserve space -- much like an incremental in Unix against a file system -- it is not significantly faster.

Incremental backups must scan each and every block in the database -- read it, inspect it, and see if it needs to be backed up. They can save tons of space, but they read the entire database anyway.

Unless you were really IO bound on the tape unit/backup device, it won't be any faster.
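For reference, the two kinds of test above correspond to commands along these lines (a sketch only; the channel name and device type are assumptions, and 8i syntax may differ slightly):

```
RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;   # media manager (e.g. OmniBack) channel
  BACKUP INCREMENTAL LEVEL 0 DATABASE;   # baseline: copies every used block
}

RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
  BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # only blocks changed since level 0
}
```

In 8i/9i both commands read every block in the database; only the volume written to the media manager differs, which is why the elapsed times above are so close.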

From the RMAN guide:

<quote>
Incremental Backups

An incremental backup reads the entire file and then backs up only those data blocks that have changed since a previous backup. Oracle allows you to create and restore incremental backups of datafiles, tablespaces, or the whole database. Note that RMAN can include a control file in an incremental backup set, but the control file is always included in its entirety; no blocks are skipped.

The primary reasons for making an incremental backup are:

o To save tape when using a media manager.
o To save network bandwidth when backing up over a network.
o When the aggregate tape bandwidth available for tape write I/Os is much less
than the aggregate disk bandwidth for disk read I/Os.
o To be able to recover changes to objects created with the NOLOGGING option.

If none of these criteria apply, then full backups are usually preferable because the application of the incremental backup increases recovery time while the cost savings is negligible.
</quote>


Comments

Is this performance acceptable?

Juan Carlos Reyes, August 22, 2002 - 7:30 pm UTC

Hi Tom, I have a question about performance and compression

Looking for the best backup method, I timed the following backup methods:

COMPRESSING WITH WINRAR
Time: 18 min
Size: 180 MB

ORACLE FULL EXPORT
Time: 6 min
Size: 136 MB

RMAN FULL BACKUP
Time: 33 min
Size: 937 MB

I don't know if I'm doing something wrong.
Is there a way to optimize the time and compression of the RMAN full backup, or is it always like this?

Thank you in advance Tom.

Tom Kyte
August 22, 2002 - 10:54 pm UTC

I have no idea what I'm comparing here, but:

RMAN compression isn't really compression -- it simply skips blocks that aren't used by anything (so in a "full database" it'll all be there). we don't compress the blocks themselves.


You did a full export of a database in 6 minutes and it was only 136m of data? Wow. Not a very big database is it?


Not enough info here to speculate however on whether this is good or not (and not knowing what winrar is.....)


In any case, 33 minutes isn't so bad considering the database is never down anyhow.

A reader, August 23, 2002 - 8:56 am UTC

Thank you Tom

incrementals, better or worse to recover with?

rob, November 22, 2003 - 11:28 pm UTC

You wrote this with regards to the criteria for which incrementals are beneficial (from the docs):

"If none of these criteria apply, then full backups are usually preferable because the application of the incremental backup increases recovery time while the cost savings is negligible."

The docs seem to indicate it is advantageous to only have to apply the minimum amount of logs possible by having daily incrementals available. What should be considered when deciding whether it would be quicker to apply several days of archives to a full backup vs. the time it would take to apply multiple levels of incrementals from the week to an initial level 0? I am sure it depends on the level of activity, but isn't the volume of archive logs likely to be less than the space consumed by the incremental? The incremental backs up the entire block, where my archive logs may only hold a single transaction on that block. Yet the archives have to apply a transaction, while the incremental only has to copy a block.

I am trying to decide if the incrementals we take during the week are really worth it, or if we might be better served with just our weekly full backups. The total changed data is small relative to the overall system level data.

Tom Kyte
November 23, 2003 - 8:28 am UTC

(and remember -- things change, in 10g my comment is false -- incrementals are the way to go in that release!)

Let's put it this way -- you could:

sunday -- full backup
monday -- incremental
tuesday -- FAILURE

to recover: restore sunday, apply monday incremental, roll forward thru logs


OR

sunday -- full backup
monday -- full backup
tuesday -- FAILURE

to recover: restore monday, roll forward thru logs


my point can only be taken if the incremental was IN PLACE OF a FULL backup. I am assuming "you are going to backup, will it be incremental or full". Given the choice in 9i and before -- if nothing in that list means anything to me, I would probably opt for FULL.
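In RMAN terms, both recovery paths above reduce to the same pair of commands; RMAN automatically picks the newest usable full/level 0 backup and applies any incrementals and archived logs during RECOVER (a sketch, assuming the database is mounted):

```
RESTORE DATABASE;   # lays down the most recent full / level 0 backup
RECOVER DATABASE;   # applies any incrementals, then rolls forward through archived logs
```

So the choice of strategy changes how much work RECOVER has to do, not the commands you type.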

Incremental Backup

Andrew, September 26, 2005 - 9:25 am UTC

Tom,

Please explain why incremental backups are the way to go in 10g. I am reading the RMAN 10g documentation and block change tracking looks like an interesting option.

Thank you.

Tom Kyte
September 26, 2005 - 9:40 am UTC

well, the block change tracking lets you back up (read) just the data that changed - if you modify 500MB out of 5TB, we read only 500MB (9i and before would read 5TB).

also, in 10g, you can catch your backups up, so if you use disk based backup, you can apply your incremental backup directly to the last backup -- catching it up. So it appears you always have a FULL backup even though you only take incrementals.
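Both 10g features mentioned here can be sketched like this (the tracking-file path and the tag name are made-up examples):

```
-- In SQL*Plus: enable block change tracking so incrementals read only changed blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/bct.chg';
```

```
# In RMAN: nightly incrementally updated ("catch-up") disk backup
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'nightly';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'nightly' DATABASE;
}
```

On the first run there is no copy to recover, and the BACKUP ... FOR RECOVER OF COPY command creates the initial image copy; on each subsequent run the incremental is merged into that copy, so it always looks like a fresh full backup.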

Standby Backup Using RMAN

sushil, September 27, 2005 - 9:48 am UTC

Tom,
What is your take on backing up a standby db using RMAN? Also, why can't I hot backup a standby using RMAN while it is in managed recovery mode, when it is possible on a production db, given that both databases are constantly doing work?
Thanks

Tom Kyte
September 27, 2005 - 11:41 am UTC

my take - you can backup your physical standby, sure.

the standby is in RECOVERY mode, the production is NOT in recovery mode -- hugely different configurations.

the stuff they are doing is fundamentally and wholly different.

Skip Read Only

Wayne, September 27, 2005 - 3:20 pm UTC

Tom,
We use Oracle 9.2.0.6. Using RMAN, if we skip backing up read only tablespaces (backed up once already), will it speed up the process (as well as save space)? I guess it will because it doesn't have to do the block level checking. I am not a DBA, but I want to recommend some strategies to our storage group.

Thanks in advance

Tom Kyte
September 27, 2005 - 4:22 pm UTC

sure it will -- less READ IO and less WRITE IO. All good. RMAN is "read only tablespace savvy"
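The option in question is a single clause; a related, complementary setting is shown alongside it (a sketch):

```
BACKUP DATABASE SKIP READONLY;     # do not re-read read-only tablespaces
CONFIGURE BACKUP OPTIMIZATION ON;  # also skip files identical to ones already backed up
```

SKIP READONLY avoids both the read I/O and the write I/O for those tablespaces, which is where the time savings comes from.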

Make sure you test!

Rob K, September 27, 2005 - 4:59 pm UTC

Make sure you test your backups. I have uncovered a bug with RMAN while performing a DR test that has been replicated by Oracle support. The problem pertains to read-only tablespaces and incremental backups. I manage a data warehouse (9i) and use RMAN with full and incremental backups. The problem occurs when you change the status of a tablespace from read-only to read-write, apply some DDL, then change the tablespace back to read-only. For some reason RMAN does not track the changes to the tablespace correctly during the incremental backups. When I try to perform a full DR recovery, RMAN complains that the read-only datafile was not restored sufficiently from the backup. Running the RMAN 'restore database validate check readonly' command on the production system did not report this error.

The point I am trying to get across is to make sure you test your backups in a full DR scenario, i.e. restore the parameter file, control files, database files, recover the database, and open with resetlogs. The total LOE (level of effort) for testing your backups is relatively small; not being able to recover your database is catastrophic.

Tom Kyte
September 27, 2005 - 8:20 pm UTC

Make sure you test your backups

better words have never ever been spoken.

Repeatedly, not once, regularly.

Do not rebuild all indexes on a schedule
Do test all backups on a regular schedule :)

Block Media Recovery

A reader, December 21, 2005 - 12:09 pm UTC

One of the big selling points for RMAN is its ability to perform block media recovery. I agree with you that incrementals are the way to go in 10g; however, block media recovery can't be performed from an incremental backup. This seems like a major limitation of implementing an incremental policy and may make one consider full backups.

What's your take on this limitation?

Tom Kyte
December 21, 2005 - 7:51 pm UTC

in 10g, you can use incrementals to catch up a backup - making it look like a full backup...

but I would say one of rman's MANY (not "the", not "the main", not even "the big" or near "the biggest") features is block level recovery.

Clarification...

A reader, December 21, 2005 - 8:38 pm UTC

I concur that block media recovery is one of RMAN's "many" features. However, just for clarification, even though 10g's rolling forward image copy is made to "look like a full backup", I would assume that you are still precluded from being able to perform Block Media Recovery with this backup strategy?

Tom Kyte
December 22, 2005 - 10:27 am UTC

Not sure, I'll have to test that out if I get a chance.

RC_BACKUP_SET

A reader, February 04, 2006 - 9:36 am UTC

Tom,

First of all -- I hope you can answer this...

Please let me know if rc_backup_set can be used to get backup details like start time and end time for backups that cross midnight (example: starts at 19:00 and ends at 01:10 the next day).

I tried the query below and I searched all through Metalink, but I could not get it. It's very confusing and I am not able to arrive at the right answer. Please help!



SQL> select status, trunc(completion_time),
            max(to_char(completion_time,'YYYYMMDD HH24:MI:SS')) CTIME,
            min(to_char(start_time,'YYYYMMDD HH24:MI:SS')) STIME
       from rc_backup_set
      where trunc(completion_time) >= '03-FEB-06'
        and db_key = 751511
      group by status, trunc(completion_time)
      order by 1;

S TRUNC(COM CTIME             STIME
- --------- ----------------- -----------------
A 03-FEB-06 20060203 21:03:34 20060203 00:30:38
A 04-FEB-06 20060204 23:15:43 20060204 00:30:38
A 05-FEB-06 20060205 00:30:48 20060205 00:30:43


In the above, the backup actually started at 19:00 hrs, but that is not showing up as the min time. What am I missing here? Please let me know.

Thanks for all your help and your great service to us! 

Backup timing

A reader, May 08, 2006 - 5:15 am UTC

Hi Tom,
Usually, how long will RMAN take to back up a 250GB database?
It took us 8 hours to complete the 250GB backup, which is very time consuming and slows down other processes while the backup is running. I tried to find a backup timing reference (e.g. 50GB takes 3 hours, 100GB takes 5 hours...) for RMAN but couldn't find one. I would greatly appreciate it if you could guide me, so that I will know whether our timing is reasonable. Database parameters: large_pool_size: 100MB, backup_tape_io_slaves: true. Hardware spec: hdd 10K rpm, HP MSA1000 SAN array, Oracle 9.2.

thanks


Tom Kyte
May 08, 2006 - 8:18 am UTC

depends - 10g with true incrementals, could be a couple of seconds, or minutes, or hours.

depends on the speed of your disks, the speed of your backup devices.

There is no such thing as "for 50gb rman takes so long..." since it is entirely dependent on your hardware and your resources. You can imagine it would take longer on a certain set of disks than another, right?

8 hours seems long. Maybe you are doing serial operations, or it could be the thing you are backing up to -- see what the rman processes are waiting on.
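To see what the rman processes are waiting on, queries along these lines can help (a sketch; exact columns and the program-name match for RMAN sessions vary by version and platform):

```sql
-- Progress of long-running RMAN steps
SELECT sid, opname, ROUND(sofar/totalwork*100, 1) pct_done
  FROM v$session_longops
 WHERE opname LIKE 'RMAN%'
   AND totalwork > 0
   AND sofar <> totalwork;

-- Current wait events of the RMAN channel sessions
SELECT sw.sid, sw.event, sw.seconds_in_wait
  FROM v$session_wait sw, v$session s
 WHERE sw.sid = s.sid
   AND s.program LIKE '%rman%';
```

If the channels spend most of their time on tape/sbt waits, the backup device is the bottleneck; if on disk-read waits, the scan side is.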

failure of backup plus archivelog command

Dawar, May 24, 2006 - 2:24 pm UTC


Tom,

OS: Linux
DB: 10.1.0.5.0

>/opt/backup
> more dbackup.sh

export ORACLE_SID=sidname
cd $ORACLE_HOME/bin
./rman nocatalog << EOF
connect target
backup database plus archivelog delete input;
delete noprompt obsolete;
exit
EOF

I ran this script as below:
oracle@xyz:~/backup> ./dbackup.sh

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

RMAN>
connected to target database: sid (DBID=2427613067)
using target database controlfile instead of recovery catalog

RMAN>

Starting backup at 24-MAY-06
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=270 devtype=DISK
specification does not match any archive log in the recovery catalog
backup cancelled because all files were skipped
Finished backup at 24-MAY-06

Starting backup at 24-MAY-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00008 name=/opt/oracle/oradata/sid/wits_data01.dbf
input datafile fno=00009 name=/opt/oracle/oradata/sid/wits_indx01.dbf
input datafile fno=00014 name=/opt/oracle/oradata/sid/REPOSITORY_FOR_RMAN
input datafile fno=00007 name=/opt/oracle/oradata/sid/roolback01.dbf
input datafile fno=00006 name=/opt/oracle/oradata/sid/tools01.dbf
input datafile fno=00001 name=/opt/oracle/oradata/sid/system01.dbf
input datafile fno=00002 name=/opt/oracle/oradata/sid/undotbs01.dbf
input datafile fno=00003 name=/opt/oracle/oradata/sid/sysaux01.dbf
input datafile fno=00005 name=/opt/oracle/oradata/sid/example01.dbf
input datafile fno=00004 name=/opt/oracle/oradata/sid/users01.dbf
input datafile fno=00013 name=/opt/oracle/oradata/sid/FLOW_4.dbf
input datafile fno=00010 name=/opt/oracle/oradata/sid/FLOW_1.dbf
input datafile fno=00011 name=/opt/oracle/oradata/sid/FLOW_2.dbf
input datafile fno=00012 name=/opt/oracle/oradata/sid/FLOW_3.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-06
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 05/24/2006 10:19:24
ORA-19504: failed to create file "/opt/oracle/backup"
ORA-27038: created file already exists
Additional information: 1

RMAN>
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
no obsolete backups found

RMAN>

Recovery Manager complete.

***************************************************

Here is the configuration of RMAN.

RMAN> show all ;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/opt/oracle/backup';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/ORACLE_HOME/dbs/snapcf_sid.f'; # default

What am I missing?
Any feedback will be appreciated.

Regards,
Dawar

Tom Kyte
May 25, 2006 - 1:13 pm UTC

look at your disk format string, it is a constant, it is saying "nope, file already exists"

use something that would - well - be different from run to run so you are not constantly trying to overwrite your backup.
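Concretely, that means putting substitution variables in the format string so each backup piece gets a unique file name; %U is guaranteed unique per piece (the directory is taken from the configuration above, the rest of the name is an example):

```
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/opt/oracle/backup/%d_%U.bkp';
```

Here %d expands to the database name and %U to a unique identifier, so successive runs no longer try to create the same file.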

objects could not be deleted for DISK channel(s) due

Dawar, May 25, 2006 - 10:58 am UTC

My database was in noarchivelog mode,
so I changed it to archivelog mode.
But I am getting this message when I do a backup:

backup piece handle=/opt/oracle/backup/SID_1khjsm1d_1_1.1 recid=7 stamp=591288366
Deleted 7 objects

RMAN-06207: WARNING: 6 objects could not be deleted for DISK channel(s) due
RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212: Object Type Filename/Handle

***************************************************

RMAN> crosscheck backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=248 devtype=DISK
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/opt/oracle/backup/SID_1lhjsm1g_1_1.1 recid=8 stamp=591288369
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/opt/oracle/backup/SID_1mhjsmeh_1_1.1 recid=9 stamp=591288786
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/opt/oracle/flash_recovery_area/SID/autobackup/2006_05_24/o1_mf_s_591288787_279og3nn_.bkp recid=10 stamp=591288787
Crosschecked 3 objects


RMAN>

***************************************************

RMAN> crosscheck archivelog all;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=252 devtype=DISK
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17125_535674251.dbf recid=3 stamp=591288784
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17126_535674251.dbf recid=5 stamp=591292837
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17126_279sdn3k_.arc recid=6 stamp=591292837
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17127_535674251.dbf recid=7 stamp=591295293
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17127_279vsdvy_.arc recid=8 stamp=591295293
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17128_535674251.dbf recid=9 stamp=591299494
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17128_279zwogo_.arc recid=10 stamp=591299494
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17129_535674251.dbf recid=11 stamp=591303610
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17129_27b3x9sh_.arc recid=12 stamp=591303610
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17130_535674251.dbf recid=13 stamp=591307321
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17130_27b7k8ro_.arc recid=14 stamp=591307321
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17131_535674251.dbf recid=15 stamp=591311708
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17131_27bctd07_.arc recid=16 stamp=591311708
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17132_535674251.dbf recid=17 stamp=591314414
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17132_27bggy5l_.arc recid=18 stamp=591314414
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17133_535674251.dbf recid=19 stamp=591314435
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17133_27bghlx9_.arc recid=20 stamp=591314435
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17134_535674251.dbf recid=21 stamp=591314467
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17134_27bgjm6c_.arc recid=22 stamp=591314467
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17135_535674251.dbf recid=23 stamp=591314482
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17135_27bgk1os_.arc recid=24 stamp=591314482
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17136_535674251.dbf recid=25 stamp=591318565
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_24/o1_mf_1_17136_27bljns4_.arc recid=26 stamp=591318565
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17137_535674251.dbf recid=27 stamp=591323667
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_25/o1_mf_1_17137_27bqj243_.arc recid=28 stamp=591323667
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17138_535674251.dbf recid=29 stamp=591328621
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_25/o1_mf_1_17138_27bwbwgy_.arc recid=30 stamp=591328621
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17139_535674251.dbf recid=31 stamp=591332968
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_25/o1_mf_1_17139_27c0lq3z_.arc recid=32 stamp=591332968
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17140_535674251.dbf recid=33 stamp=591337168
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_25/o1_mf_1_17140_27c4ozkd_.arc recid=34 stamp=591337168
validation succeeded for archived log
archive log filename=/opt/oracle/backup/1_17141_535674251.dbf recid=35 stamp=591342270
validation succeeded for archived log
archive log filename=/opt/oracle/flash_recovery_area/SID/archivelog/2006_05_25/o1_mf_1_17141_27c9of9m_.arc recid=36 stamp=591342270
Crosschecked 33 objects

***************************************************
RMAN> list expired backup;


RMAN> (Blank)



RMAN> list expired archivelog all;

specification does not match any archive log in the recovery catalog

RMAN>


Full database backup

Dawar Naqvi, June 12, 2006 - 3:03 pm UTC

Tom,

OS: Linux
DB: 10.1.0.5.0

On 06/12/06 I ran the following script to perform a backup.
My plan is to run this script on a weekly basis to perform a full DB backup,

and transfer the full backup files to our backup server.

export ORACLE_SID=sid
cd $ORACLE_HOME/bin
./rman nocatalog << EOF
connect target
backup database plus archivelog delete input;
delete noprompt obsolete;
exit
EOF

Backup scripts created the following files as below:

total 2039709
-rw-r----- 1 oracle oinstall 796672 2006-05-24 14.53 1_17125_535674251.dbf
-rw-r----- 1 oracle oinstall 9494016 2006-05-24 16:00 1_17126_535674251.dbf

-rw-r----- 1 oracle oinstall 1316864 2006-06-12 09:55 1_17667_535674251.dbf
-rw-r----- 1 oracle oinstall 2073690112 2006-06-12 09:55 AB_23hle7o9_1_1.1
-rw-r----- 1 oracle oinstall 1318400 2006-06-12 09:55 AB_24hle850_1_1.1

Please note the first two files were created on 05/24/06.

Is it an incremental backup?

OEM shows archive logging enabled.

But I want a full database backup.

What should I change to perform a full database backup?

Dawar


Tom Kyte
June 13, 2006 - 10:51 am UTC

it is a full database backup, not incremental

don't know why you think otherwise? You said "backup database"

Need sql to Generate RMAN backup report

Chakra, August 29, 2006 - 10:26 pm UTC

I want an SQL query to generate the report below. Can you please send it to me?

TYPE                      INSTANCE  TIME    MB BACKED UP  MB per Hr

incremental 0             1         12 min  2492          12460
incremental 0             2         13 min  3481          16071
incremental 0             3         11 min  2539          13852

incremental 1 cumulative  1         12 min  74            370
incremental 1 cumulative  2         9 min   13            84
incremental 1 cumulative  3         9 min   37            247


A reader, January 05, 2009 - 12:35 pm UTC

We currently have a 150 GB database in production. We have a level 0 backup running every Sunday which takes about 30 mins, and level 1 cumulative incremental backups running every day. We have block change tracking enabled. On Monday, the time taken for the incremental backup is about 3-4 mins; starting Tuesday it takes about 11-12 mins, and on Friday it takes close to 20 mins. While the level 0 backup takes 30 mins, how come the last incremental is also taking close to 20 mins? The difference in time between level 0 and level 1 is not big. What is the purpose of running the incremental? Are we missing something to make it run faster?
Tom Kyte
January 05, 2009 - 1:43 pm UTC

how much data is backed up with each incremental?

why cumulative? with the cumulative - by the time friday rolls around you are backing up everything you backed up on monday plus tuesday plus wednesday plus thursday
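The two flavors differ by a single keyword (a sketch):

```
BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential: blocks changed since the most recent level 1 (or 0)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative: all blocks changed since the last level 0
```

Differentials stay small from day to day but every one of them is needed at restore time; a cumulative grows through the week but only the latest one has to be applied.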

A reader, January 05, 2009 - 1:54 pm UTC

Monday's incremental backs up about 3-4 GB, Tue is about 35 GB, Wed about 39 GB and by the time we get to Fri it is 46 GB.

We chose cumulative because then, for the restore, we would just need the level 0 backup and the latest cumulative backup. Will that be quicker?
Tom Kyte
January 05, 2009 - 2:36 pm UTC

well, you should sort of be able to answer your own question - as to why it takes longer and longer and longer.

you are hitting much of the database by the time friday rolls around. it takes less than a full backup - but more than when there is just a small percentage to backup.

What do you back up to? If you were to back up to disk (and then back up that disk to tape later) you could just do one full backup and then level 1 incrementals (differentials) and catch up that backup on disk. 150gb is tiny - might be something to give real consideration to if your goal is to minimize the time to backup and restore.

A reader, January 05, 2009 - 3:30 pm UTC

"If you were to back up to disk (and then back up that disk to tape later) you could just do one full backup and then level 1 incrementals (differentials) and catch up that backup on disk"

We do backup to disk and then backup to tape later. When you catch up on disk did you mean incrementally updated backups ?

Our goal is to minimise the time to backup and restore. What do you suggest for 10.2.0.4 ?
Tom Kyte
January 05, 2009 - 4:35 pm UTC

yes, that is what I mean. catch your backup up - you would do one full backup and then the equivalent of your monday backups every day (the small ones), you would apply these to your disk based backup - it would appear to be a full backup that you could restore from.

is Full backup any different than incremental level 0

Bala, September 30, 2009 - 4:33 pm UTC

The Oracle documentation says that level 0 backups are essentially the same as FULL backups.

"A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. The only difference between a level 0 incremental backup and a full backup is that a full backup is never included in an incremental strategy.
"
We recently changed our backup script from level 0 to full, and the first full backup reported block corruption in some datafiles. Is this because the full backup reads all the allocated blocks, including empty ones?

TIA
Bala
Tom Kyte
October 07, 2009 - 9:02 am UTC

they would both read the same set of blocks.

Please contact support to work through this issue - it is likely a coincidence that the corruption was reported after this change - they have nothing to do with each other.

Scofield, November 28, 2009 - 11:02 pm UTC

Hi Tom;

A quick question:
I know that Oracle issues a checkpoint before 'alter tablespace begin backup'.
Does Oracle also issue a checkpoint before an RMAN backup (backup database or archivelog)?
Tom Kyte
November 29, 2009 - 8:53 am UTC

rman doesn't need to checkpoint like user-managed backups (alter tablespace) do. It (rman) doesn't change what is happening in the database as far as datafile header maintenance and redo generation, the way alter tablespace begin backup does.
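For contrast, a user-managed hot backup -- the approach that does trigger the checkpoint and the extra redo logging -- looks like this sketch ('users' is an example tablespace):

```
ALTER TABLESPACE users BEGIN BACKUP;   -- checkpoints the tablespace, freezes the datafile header SCN
-- ... copy the datafiles at the OS level here ...
ALTER TABLESPACE users END BACKUP;     -- resumes normal datafile header maintenance
```

RMAN needs none of this because it reads blocks through the Oracle server itself and can detect and re-read a block that was split mid-copy.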

incremental backup size estimation

Vijay Bhaskar, December 09, 2009 - 8:40 am UTC

Dear Tom,

Our database(10gR1) is currently ~6TB(sum of all segments) in size. And growing at the rate of ~10GB/day.

Currently, I am taking full database backup(RMAN compressed backupsets) twice a week. The size of this backup is ~1TB.
But, this way, system is not to sustain for very long, given the recovery requirements and application growth. And more importantly, due to sheer amount of time it's taking to backup(more than 2 days), the batch jobs are getting affected big time.
So, I am thinking about implementing L0/L1 policy without any further delay.

But, I have got limited FRA, which is forcing me to perform rigorous calculations/estimations (can't avoid guesswork!) before I can implement the renewed backup policy on LIVE. Adding to the woes, I do not have an equivalent test environment where I can test it!!

So, I am wondering, if there's a way to estimate size of an incremental and differential L1 backups. I won't mind even if it's rough estimate. However, I can't think of any.

Going through the Metalink, I could understand that, there's no defined way to estimate this figure. I appreciate that.

I am looking for a query which would tell me number of blocks changed after last L0(or particular SCN). Is there one?

Further, will employing Log Miner(to mine a set of archive logs) help?

Your expert comments on this are much appreciated.

Best Regards,
Vijay

Tom Kyte
December 10, 2009 - 2:11 pm UTC

.. But, this way, system is not to sustain for very long,...

that doesn't make sense, don't know what you meant by that.

L0/L1 policy?

FRA ?? Frankfurt Airport?


In any case, the time to backup is heavily impacted by the amount of data you have to SCAN in the first place. Now, are you talking about using "real" incrementals or the old fashioned ones? (that is, are you going to enable changed block tracking?) that'll have a huge impact as we only read that which has changed.

I'm not aware of any way to estimate - we don't know if your incremental will read 6tb and find 10gb or 1000gb or more it needs to backup (at least I sure don't).

read about this:
http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup004.htm#i1032148



Flash Recovery Area

A reader, December 12, 2009 - 10:21 am UTC

Dear Tom,

Many thanks for your feedback.

I should have written the statement as "...system is not going to sustain..." rather than "...system is not to sustain...".

Anyway, what I meant by the above is that the current backup policy (i.e. compressed full database backup) is taking virtually 2 days to complete. The database size is 6TB (approx). Not only does it run for such a long period, but in the process it also occupies and consumes resources. This in turn affects the batch jobs.

So, the idea is to implement an RMAN incremental backup policy (L1), as this would not only reduce the backup times (subject to transactional activity, though) but also cover many other recovery scenarios.

By FRA, I mean "Flash Recovery Area". As this acronym is widely used, I thought of referring to it here straightaway without any introduction. Apologies if it has caused any confusion and/or ambiguity.

My main question is: as it's very difficult (if not impossible) to estimate the size of an L1 (cumulative or differential), is basing the estimate on the amount of "redo" being generated (currently 600GB/day) a good starting point? More so as I am not after precise figures, just indicative ones, so that I can adequately size my FRA.

Once again, many thanks in advance, for spending your valuable time in answering our questions.

Best Regards,
Vijay
Tom Kyte
December 14, 2009 - 8:22 am UTC

.. As this acronym is widely used,...

In writing and professional communication - always - repeat: ALWAYS define your acronyms. I don't happen to use the "FRA" acronym, haven't really heard it before (but as a frequent traveler, I immediately thought "Frankfurt Airport"). In fact, search for FRA in the 10g Release 2 documentation - 0 hits. Search in 11g - you get a hit, but FRA is used as the name of an example disk group (an EXAMPLE).


I answered your question however in full above. I think you should backup to disk one last time (one last full backup) and then just use the change tracking file and apply those incrementals directly to the backup on disk (no extra storage).


redo would be a very fuzzy measure of blocks that have been modified. Way off - either way way under or way way over. Depends on the accesses to data (do you tend to update the same small set of blocks over and over or is each update a different block, what about non-logged operations, etc).



are you referring to image copies?

A reader, December 18, 2009 - 12:57 pm UTC

Dear Tom,

Many thanks for your response.

....."apply those incrementals directly to the backup on disk"....

In this context, I think you are referring to "image copies". Correct me otherwise. If yes, then don't you think it would make some recovery scenarios impossible?

Let's say I have taken a full uncompressed (image-copy) backup on Sunday. This is followed by daily incremental backups (9AM to 12 noon) from Monday onwards, and I subsequently update/merge the full backup with these daily incrementals.

Now, suppose that today is Friday and I have to recover a table dropped (with the "purge" option) on Wednesday morning at 8 A.M. To recover this table I would have to perform TableSpace Point-In-Time Recovery (TSPITR) on an auxiliary instance and then perform incomplete recovery. However, this is not possible, as I have already updated my full backup with Wednesday's incremental. Further, "flashback" is not enabled on the database (DB) either. How can I recover from this situation?

.....(no extra storage).....

I couldn't get this statement. Obviously, "image copy" based backups would increase the demand for storage in the Flash Recovery Area (FRA) disk group, especially because the existing size of the FRA is just 3.66TB, whereas the size of the DB is 6TB (approx).

I do appreciate the fact that redo would be a very fuzzy measure of modified blocks, as we may end up massively overestimating it (updating the same blocks over and over again).

But I couldn't get your point about "way way under" (i.e. underestimation). When can that situation happen, considering that my DB is in "force-logging" mode?

Roughly 40% of blocks are repeatedly updated whereas the rest only once a day.

And yes, I will certainly keep your advice in mind when using acronyms, and I have already started adhering to this norm. I hope you have already noticed that.

Last but not least, Merry Christmas to you and your entire family. I reiterate that your contribution to the Oracle fraternity by sharing your priceless knowledge is simply unquantifiable. I hope you will continue to do so in 2010 and the years beyond.

Best Regards,
Vijay
Tom Kyte
December 18, 2009 - 1:13 pm UTC

chase the link I've already posted above about the change tracking file. A 10g feature is the ability to apply incrementals to a backup on disk. Not talking image copies, I'm talking supported, documented, good idea RMAN feature. Catch your backup up.
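As a sketch, the documented "incrementally updated backup" pattern looks like this (the tag name is arbitrary; you run the same block daily, and the first run creates the level 0 image copy):

RUN {
  -- roll the previous day's level 1 into the on-disk copy
  RECOVER COPY OF DATABASE WITH TAG 'incr_update';
  -- take today's incremental against that copy
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'incr_update'
    DATABASE;
}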


You would OF COURSE have multiple backups on tape. On disk you would have what appears to be a full backup (the last full backup) always. On tape you would have the last N copies of that (representing the last full backup, the previous last full backup, and so on).


As for "using redo to estimate number of blocks changed", say you have 1tb of redo. How many blocks might that be for?

answer 1: it is for one block of course, our system only does "update t set x = x+1 where primary key=42". We update the same block over and over (extreme case)

answer 2: it is for

ops$tkyte%ORA10GR2> select 1024*1024*1024*1024/100  from dual;

1024*1024*1024*1024/100
-----------------------
             1.0995E+10


blocks of course, since we never update the same block and our transactions are 100 bytes in size.

so, for 1tb of redo, the answer is somewhere in between 1 and 10,995,116,278 blocks.

A reader, December 19, 2009 - 5:27 pm UTC

...rman doesn't need to checkpoint like user managed backups (alter tablespace) does. It (rman) doesn't change what is happening in the database as far as data file header maintenance and redo generation like alter tablespace begin backup does.

Sir,
Since RMAN doesn't perform a checkpoint, dirty blocks which have not yet been written to the datafiles are not backed up.
Am I right?
However, since "alter tablespace begin backup" performs a checkpoint, those blocks are included in the backup.



Tom Kyte
December 20, 2009 - 9:02 am UTC

any and all types of hot backups require the archives and current online redo logs generated during the backup in order to restore.

rman OR manual ones.

Both backup types backup fuzzy, inconsistent datafiles. Neither is different in that regard.

A manual backup has to do a lot more work to get ready, since we have to log even more stuff for it to work - since you are not using the database to backup with - using an OS tool instead. We have to protect ourselves from the limitations of the OS tool (it doesn't know it is backing up an Oracle data file and cannot do it 100% correctly; we have to fix that - we do that by logging even more data during a backup for it).

We checkpoint during a 'begin backup' because every block modified in that tablespace from now on out (until the end backup) will have a full block recorded in the redo log file to cover the OS tool's way of copying files. In order to ensure that every block modified during the 'backup mode' has this redo generated - we need to get to a known state in the datafiles - so we checkpoint and then record full block images in the redo stream - it just gets the 'begin backup' file into a known state.

And we do not need to do anything fancy like that for rman.

But in the end - both of them backup fuzzy, inconsistent files - they are not different in that way.
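For comparison, a minimal user-managed hot backup of a single tablespace looks like this (the tablespace name and paths are examples):

ALTER TABLESPACE users BEGIN BACKUP;

-- copy the datafile with an OS tool, e.g. from a shell:
--   cp /u01/oradata/ORCL/users01.dbf /backup/users01.dbf

ALTER TABLESPACE users END BACKUP;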

A reader, December 21, 2009 - 12:08 pm UTC

Respected Sir,

....the first time a block is changed in a datafile that is in hot backup mode, the ENTIRE BLOCK is
written to the redo log files, not just the changed bytes?


What about the dirty blocks which are present in the cache when we issue "alter tablespace begin backup" - will Oracle copy a full block image into redo?


Please clarify


Thanks
Tom Kyte
December 21, 2009 - 3:34 pm UTC

... What about the dirty blocks which are present in cache when we issue "alter
tablespace begin backup" will
Oracle copy full block image into redo?
...

nope, we don't need to - that is why the non-rman backup must do a checkpoint first - there will be NO blocks in the cache that are dirty for the affected datafiles immediately after you 'begin backup', we checkpointed them.


Otherwise we would have to go through the buffer cache, find those blocks and write them to the redo. Instead of doing that, we just checkpoint them and then the 'begin backup' command finishes.

So, right at the start of the begin backup - we start logging full blocks into the redo log for the affected datafiles of any block modified since the begin backup.

And then we clean the buffer cache of any pre-existing dirty blocks for those datafiles.

And then you can safely copy the file using the OS tools.


RMAN needs to do none of that.
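As an aside, while a tablespace is in user-managed backup mode you can see which files are affected via the v$backup view (STATUS shows ACTIVE for files in backup mode):

SELECT file#, status, change#, time
  FROM v$backup;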

A reader, January 11, 2010 - 2:23 am UTC

Sir;
You said that RMAN doesn't checkpoint;
according to the demo below, it checkpoints. Am I wrong?


SQL> select file#, checkpoint_change# from v$datafile order by file#;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1           13248851
         2           13248851
         3           13248851
         4           13248851
         5           13248851
         6           13248851

6 rows selected.

SQL>

sh-3.00$ rman target /

RMAN> backup tablespace system;

SQL> select file#, checkpoint_change# from v$datafile order by file#;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1           13248978
         2           13248851
         3           13248851
         4           13248851
         5           13248851
         6           13248851

Tom Kyte
January 18, 2010 - 6:24 am UTC

my point was that rman doesn't have to checkpoint; user managed backups do - they have to - in order to start the process of logging full block images. RMAN might well do a file/tablespace checkpoint - but it doesn't really matter to us - it doesn't have to.

A reader, January 24, 2010 - 1:26 am UTC

Hi Tom;

1-)
Suppose a checkpoint doesn't happen during hot backup mode, and suppose those dirty blocks are written to the datafiles while we are copying them with the OS. Since the entire block is logged for any change made during backup mode, nothing bad would happen. So why is the checkpoint crucial? I don't get it.


2-)
...the first time a block is changed in a datafile that is in hot backup mode, the ENTIRE BLOCK is written to the redo log files.

Does it mean that if there is no dirty block, no entire block is written?
And, if a row is changed, is the full block containing that row written?


3-) During OS restore and recovery, if Oracle detects fractured blocks, it will reread the full image of the block from the redo logs. Am I right?


4-)
When I issue "alter tablespace begin backup", are all dirty blocks in the buffer cache cleared, or only the dirty blocks of this tablespace?


Tom Kyte
January 26, 2010 - 1:02 am UTC

1) with non-rman copies, you are using OS utilities. OS utilities read/write data using OS blocks, not Oracle blocks. OS blocks are NOT the same size as Oracle blocks (typically, smaller). Therefore the following could happen:

a) your OS copy starts to read block 1023, it has read 50% of it and gets pre-empted.

b) DBWR checkpoints a bunch of blocks, block 1023 gets written 100%.

c) your OS copy finishes reading block 1023, it reads the 50%

at that point in time, your OS copy has now copied a block that is 50% "old" and 50% "new", it is fractured, the head of the block does NOT match the tail of the block.

When we go to recover that block later (media recovery, restore backed up file), we'll discover the block is fractured, we RELY on the redo stream to have a 100% block copy of that block that is not fractured to recover with. Unless we got the database into a known state BEFORE you started copying - we would not necessarily have that block in the redo stream, therefore, we will a) checkpoint (your OS copy hasn't started) and after the checkpoint b) log a full block image of any modified block. In that fashion - if you get a fractured block - we are OK.

2) I do not understand the reference to dirty block here. But basically, the first time ANY BYTE is changed on a block in backup mode, the entire block is logged to redo.

3) yes

4) it can do tablespace level (file level really) checkpoints, Yes.

brown, March 12, 2010 - 9:47 am UTC

Please, how do I configure RMAN?

Block change tracking

A reader, June 06, 2011 - 10:24 am UTC

We have a 10.2.0.4 database in production. We run full backups over the weekend and incremental backups over the course of the week. We had block change tracking enabled, or at least had it on until 4 weeks ago, when we did disaster recovery to the physical standby over a weekend and switched back. After we switched back we forgot to turn block change tracking back on. We just realized that today. So I went back and checked the backups during the past four weeks, and the backup has been running as a full over the weekend and only incrementals during the week. I checked the backup file sizes and they are smaller during the week. I would have expected that with block change tracking disabled, full backups would have run every day. Can you explain why that didn't happen?

Also, our full backups usually take about 30 minutes to complete, and yesterday it took 1.5 hours. Can you help us understand that as well? Do you need any more information? Is there any documentation to help determine the cause of the long backups?
Tom Kyte
June 06, 2011 - 10:53 am UTC

if you have changed block tracking on - we only read and backup the blocks that have been changed

if you have changed block tracking off - we read EVERY block and backup the blocks that have been changed.

changed block tracking simply reduces the amount of reading we have to do - if only 1% of your 10TB database has changed - we'd only read 1% of it, not 100% of it.

You should notice that your backups were taking longer due to this.

A reader, June 06, 2011 - 11:06 am UTC

Ok, that makes sense... I checked the backup times. They were at least twice as long for the past 4 weeks.

Could disabling block change tracking also slow down the full backup? Are there other reasons for the full backup slowing down?
Tom Kyte
June 06, 2011 - 12:07 pm UTC

the full backup should not have been affected by the lack of a change tracking file.

When you did the switchover - was it a true switchover or a failover - did you reconstitute the primary. If so, perhaps it was something you did differently in the building of the new primary. If it was a switchover and switchback - then nothing should have really changed if you didn't change anything else.

A reader, June 06, 2011 - 12:13 pm UTC

It wasn't a failover. It was a switchover and switch back.
Tom Kyte
June 06, 2011 - 12:35 pm UTC

then the full backup time should not have been affected. I'd be looking at what else might have changed at the OS level when the switchover took place. Physical changes that might have occurred as part of your testing plans.

A reader, February 27, 2013 - 8:13 am UTC

Hi Tom - I have a question on the effect of block change tracking on RMAN incremental backups. We have a production database that does not have block change tracking enabled (it was missed). I was under the impression that in that case the backup would read the entire database regardless of the level (level 0 or level 1). But when I look at the backup sizes, I see that after a weekend level 0 backup, the level 1 backup had a smaller size, and it increased every day, with the size being greatest during the level 0 backup. Can you explain our situation and clarify how level 0 / level 1 works with block change tracking?

Thanks.
Tom Kyte
February 27, 2013 - 9:50 am UTC

with block change tracking, rman only reads the blocks it has to from the datafiles to perform an incremental.

without block change tracking, rman has to read the entire database to find the blocks needed for the incrementals.


The size of the incrementals would be the same in both cases.

The time to do the backup could be radically different - since the non-block change tracking database will have to read the entire database.
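One way to see this in your own backup history is the control file view v$backup_datafile, which records both the blocks read and the blocks actually backed up, plus whether change tracking was used; a sketch:

SELECT file#, incremental_level, used_change_tracking,
       blocks_read, blocks, datafile_blocks
  FROM v$backup_datafile
 ORDER BY completion_time DESC;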

A reader, February 27, 2013 - 11:10 am UTC

Ok, that makes sense. Thanks for your explanation Tom.

RMAN archive log backup

preet, March 19, 2013 - 8:23 pm UTC

Hi Tom,

I couldn't find any documentation on

1) What does Oracle do behind the scenes when RMAN backs up archive logs vs. regular datafiles? Is it the same?
2) What about the load on the database during a datafile backup vs. an archive log backup?
3) I think when RMAN backs up datafiles (let's say incremental, without block change tracking, and using the control file to store the RMAN catalog information) it uses more resources in the database compared to backing up archive logs?

Sorry for my English. I hope I didn't confuse you.

Thanks Tom
Tom Kyte
March 25, 2013 - 10:25 am UTC

1) reads them, interprets them (removes stuff we don't need anymore) and writes them; no one is writing to them. Datafiles can be written to while rman reads them - so special code may be involved to ensure we can get a consistent block image...

so both read - process - write.


2) what about it? do you have a specific question? We will be doing IO against both - at about the same rate. We read - process the data - write.

3) Maybe more, maybe the same, maybe much less.

They both read - process all of the data. Archives will always write the vast majority of the data read, whereas a datafile incremental backup might write very very little data or the entire datafile - depending on how much was modified.




so, as usual, "it depends" :)

incremental backup restore on 11g from 10g backup

aman, September 17, 2014 - 3:59 pm UTC

Hi, not sure if this is the correct place to ask this. We are migrating our database from 10gR2 to 11gR2. After migration, there is a need to keep the 10g database open for users for some time before migrating all the users to the 11g database. During this period, we want to update the 11g database daily with the changes made by users in the 10g database.
Is RMAN incremental backup and restore a solution for this? If not, then what is the best way to do this?

Restore database

Harry, March 15, 2019 - 6:22 pm UTC

I have an RMAN backup policy which takes a full backup on the 1st, 11th, and 21st of each month.
On the other days it takes a level 0 incremental backup.

But I want to restore the database as of the 10th of the month. How can I do that?
Connor McDonald
March 18, 2019 - 9:49 am UTC

An 'incremental level 0' is pretty much the same as a full backup. The difference is that it can be used in conjunction with other incremental levels to perform a restore.

But as long as RMAN

a) knows where the backups are, and
b) they are still available on disk/tape

then you just restore/recover in the normal way using 'set until' with your database in mount state:

RUN { 
  SET UNTIL TIME ...
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}


A reader, April 18, 2019 - 12:42 pm UTC

How do I restore an incremental backup to a given specific time?
Connor McDonald
April 26, 2019 - 12:04 am UTC

That's *why* we have RMAN. You don't need to worry about *which* backup (just as long as you have them available).

You just nominate a time to restore to, and RMAN will work out which full backup to start with, and which incremental(s) are best to apply on top of it.
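A sketch, using an example timestamp:

RMAN> STARTUP MOUNT;
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2019-04-10 08:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}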
