RMAN backup performance Tuning for Very Large Database
Tom Kyte

Thanks for the question, Sandeep.

Asked: December 06, 2014 - 9:00 pm UTC

Last updated: December 10, 2014 - 11:08 am UTC

Version: 11.2.0

Viewed 1000+ times

You Asked

Hi Tom,

As always, I find your inputs very helpful.

Could you please help me figure out how to make RMAN backups faster when the database grows to a very large size, like 10 TB? So far I have found features like block change tracking, parallelism, fast compression (considering using a faster algorithm), and incremental differential backups. Is there any other way or method out there to speed up RMAN?

If this question of mine is an obvious one and I have missed a published paper on it, the URL to such an article would also be helpful.

Thanks in Advance,
Regards.

and Tom said...

Add "using disk-based backups" - that would be another. Back up to a device that is fast.

Incrementals with block change tracking can be used to keep the disk-based backup up to date - so you never take another full backup again. Incremental + disk-based backup + periodic copy to tape would be the thing to consider. The fastest way to back up 10 TB is to NOT back up 10 TB - just back up the changes...
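In RMAN this is the documented "incrementally updated backup" pattern. A minimal sketch - the tag name 'incr_upd' and the tracking-file location are arbitrary examples; the first statement is SQL, run once:

```
-- enable block change tracking once, so level 1 backups read
-- only the blocks that changed (the file location is an example)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.f';

-- run this script on a schedule (e.g. nightly):
RUN {
  -- take a level 1 incremental, marked for rolling into the image copy
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'incr_upd'
    DATABASE;
  -- apply the incremental to the on-disk image copy of the database
  RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
}
```

On the very first run no image copy with that tag exists yet, so RMAN creates one - that is the only "full" backup you ever take. Every later run reads just the changed blocks and rolls the prior incremental forward into the copy.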



Another new option is an engineered system: http://www.oracle.com/us/corporate/press/2020722

Here your database is pretty much constantly backed up as changes occur.


Comments

incrementally updated backups ?

Sokrates, December 08, 2014 - 12:23 pm UTC


Incrementals with block change tracking can be used to keep the disk based backup up to date - so you never take another full backup again.

You meant to say "Incrementally updated backups with block change tracking ... " ?
Tom Kyte
December 08, 2014 - 3:36 pm UTC

I'm not sure how what you say and what I said are materially different.

We can use block change tracking to keep the disk based backup up to date.


Incremental backups applied to the disk based backup keep it up to date.

Incrementally updated backups


say the same thing to me. In fact, the documentation says it both ways in the very beginning of describing it:

<quote>

For use in a strategy based on incrementally updated backups, where these incremental backups are used to periodically roll forward an image copy of the database

</quote>

"where these incremental backups are used to periodically roll forward an image copy of the database" = "incrementals (made with) block change tracking can be used to keep the disk based backup up to date - so you never take another full backup again"


Sandeep, December 08, 2014 - 5:31 pm UTC

Hi Tom,

Many Thanks, for the response.
Considering the implementation of "block change tracking + incremental differential + disk based": if my understanding is correct, then to recover from a complete database corruption I would have to restore, say, the complete backup I took one year back and then apply all the incremental backups taken since then.

But if I had taken the complete backup one week back (say, when the size was 9.5 TB), then restoring it and applying just a few incremental backup sets would make my database ready again.

I may be wrong, but since the DB size is the same either way, the OS-level disk writing would take about the same time; the maintainability, however, would be much better in the second case, with the least number of backups to track.

Kindly let me know your thoughts.
Regards
Tom Kyte
December 09, 2014 - 3:03 pm UTC

no, the disk based backup is constantly being caught up

Incrementals with block change tracking can be used to keep the disk based backup up to date - so you never take another full backup again.


you will have on disk what looks to be a full backup that was just taken.

the incrementals are constantly applied to the disk-based backup, so it is always caught up. To recover, you just restore those backup datafiles to production and apply a little bit of archived and online redo log.
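A sketch of what that recovery could look like in RMAN, assuming the up-to-date image copies sit on storage the instance can use directly (SWITCH avoids copying 10 TB back at all):

```
STARTUP MOUNT;
-- repoint the control file at the image copies - no data movement
SWITCH DATABASE TO COPY;
-- apply the remaining archived and online redo
RECOVER DATABASE;
ALTER DATABASE OPEN;
```

If the copies are not on storage you can run the database from, you would use RESTORE DATABASE; RECOVER DATABASE; instead - the restore then copies the files back to their production locations first.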




verify ?

Sokrates, December 09, 2014 - 4:48 pm UTC

that's why I emphasized updated

Can we verify the correctness of the resulting backup in some way ? Or do we have to trust it ?
Tom Kyte
December 09, 2014 - 6:28 pm UTC

rman does that all of the time - as it reads blocks from the source, it is checking them all out - unlike an OS copy, which would not.

You can run various validate commands from within rman as well.

https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmvalid.htm#BRADV90063
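For example, a few of the validate variants available in 11.2 (standard RMAN syntax):

```
-- read and check every datafile block, without writing a backup
BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- 11g: check the datafiles directly, no backup pieces involved
VALIDATE DATABASE;

-- confirm the backups needed for a restore exist and are usable
RESTORE DATABASE VALIDATE;
```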

RMAN backup performance Tuning for Very Large Database

A reader, December 10, 2014 - 8:59 am UTC

Hi Tom
Thanks for the reply. It is a common practice to take a level 0 backup and then keep taking level 1 backups. In the case of incremental differential level 1 backups, each one backs up only the changes. So for a 10 TB database with complete database corruption, I would have to restore from the first complete backup taken on day one, and the remaining 363 incremental backups would also need to be applied to make it what it should be on day 364. I would be much glad to be corrected here.

Regards
Tom Kyte
December 10, 2014 - 11:08 am UTC

see above, we are keeping the disk based backup up to date.

The disk-based backup has the incrementals applied to it - you have a full, CURRENT backup on disk. If you suffer corruption, you only have to restore the datafiles and apply some online/archived redo log. The incrementals will already have been applied.


Sandeep, December 12, 2014 - 3:15 am UTC

Hi Tom,

Thanks for the clarification.

Regards
