Migration of a very large warehouse database

Chris Saxon

Thanks for the question, Dan.

Asked: March 20, 2019 - 4:13 pm UTC

Last updated: July 13, 2021 - 10:23 am UTC

Version: 11.2.0.4

Viewed 10K+ times

You Asked

This question is more of a solicitation for advice than an actual question.

I've been tasked with migrating a large warehouse database from one set of hardware in one datacenter to a new set of hardware in a different datacenter. Excluding temp, undo, system, sysaux, and index-only tablespaces, the instance is about 6.5TB in size.

Some related topics/constraints/parameters:

-- The network connectivity between the datacenters is limited, which rules out several approaches. We can't use the network_link feature of datapump to initiate the migration with an impdp on the receiving side. It will just be too slow. The lack of network bandwidth also rules out the luxury of using Golden Gate.

-- The lack of network bandwidth has led us down the path of using a removable storage device, and physically shipping the data from one datacenter to the other. The device we have chosen has the capacity to handle the entire database instance. There is not enough SAN storage capacity available to the source system to create export files locally - the removable storage gets us past that issue too.

-- The host hardware (specifically the chipset and endianness) in the two datacenters differs. The host in the source datacenter is big-endian, while the host in the new datacenter is little-endian. This points us to datapump as an overall solution, since datapump will handle the endian conversion (a quick data-dictionary check for this is sketched just after this list).

-- One schema in the database accounts for roughly 50% of the data to be migrated, and one partitioned table in that schema is over 1TB in size.

-- The tablespaces in the source system are not transportable tablespaces. Plus, in the source system, disk is presented to the database as raw partitions, not in filesystems, so copying files isn't an option anyway.

-- There are 33 tablespaces in the source instance which contain data to be migrated (index-only tablespaces are omitted because for indexes, only the metadata is exported).
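For reference, the endian format on each side can be confirmed from the data dictionary; a minimal check, run on both hosts, might look like this:

    -- Platform of the current database
    SELECT platform_name FROM v$database;

    -- Endian format of every platform Oracle supports
    SELECT platform_name, endian_format
    FROM   v$transportable_platform
    ORDER  BY platform_name;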


So, that gives you a high-level overview of the migration effort we're facing. Based on the information provided above, I would very much appreciate your thoughts and input on how best to approach performing the migration using datapump exports and imports.

So, I guess my question(s) relate to your confidence in datapump for handling very large export files. My initial thoughts are that performing a single, full database export rather than breaking the migration into pieces - perhaps by tablespace - might be a big ask. However, doing so would remove many hurdles and complexities, such as dealing with inter-tablespace dependencies, having to import user accounts prior to importing the data tablespaces, and then having to handle all of the other factors like privileges, roles, materialized views, triggers, sequences, etc.

The thought of performing a full database export makes me a bit nervous, in that it amounts to putting all your eggs in one basket. If something were to go wrong during the export (or even worse, during the import), starting over would waste a lot of precious time.

So, first and foremost, do you feel that, given enough space, datapump can export the entire database in a single full database export?

Secondly, if you have misgivings about utilizing a single full database export, how would you go about breaking the migration into pieces? I see two options here: 1) schema-by-schema (or multiple groups of schemas), and 2) tablespace-by-tablespace.

Recall though that a single schema in the database instance accounts for roughly half of the data, so its export file would be very big in its own right. Also note that several of the tablespaces exceed 1TB in size (in fact, one of the 1TB+ tablespaces contains only the partitions of a single table).
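For concreteness, the two shapes I'm weighing look roughly like this; directory, schema, and file names are placeholders, and the FILESIZE/PARALLEL values are only illustrative:

    # Option A: one full-database export, split across many dump files
    expdp system@SRCDB full=y directory=MIG_DIR \
          dumpfile=full_%U.dmp filesize=100GB parallel=8 \
          logfile=full_exp.log

    # Option B: one export per schema (or group of schemas)
    expdp system@SRCDB schemas=BIG_SCHEMA directory=MIG_DIR \
          dumpfile=big_schema_%U.dmp filesize=100GB parallel=8 \
          logfile=big_schema_exp.log

Either way, the %U substitution keeps individual dump files on the removable device to a manageable size.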

Any advice and/or thoughts would be greatly appreciated.

Thank you.




and Chris said...

Looks like you've investigated your options for migrating the data well.

I would opt to split the migration into pieces if possible.

A single migration is likely to require an extended period of downtime for the application. Doing it piecemeal may mean more outages. But each should be smaller and more contained.

It'll also make things much less painful should you find you need to revert back to your current server for any reason. If, after a day or two on the new system, you want to move all 6.5 TB back... that's going to hurt!

Assuming there are no cross-schema dependencies, schema-by-schema is a good approach.

Starting with the smallest/least impact schemas should enable you to verify everything's running correctly on the new system quickly.
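If you go schema by schema, the import side could be a sketch like this, assuming a directory object on the target that points at the mounted removable storage (schema, directory, and file names are placeholders):

    impdp system@NEWDB schemas=SMALL_SCHEMA directory=MIG_DIR \
          dumpfile=small_schema_%U.dmp parallel=4 \
          logfile=small_schema_imp.log

Running one of these per schema, smallest first, also gives you a natural checkpoint after each import.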

You've also mentioned you've partitioned your tables. Assuming that:

They're date partitioned
AND
Old partitions are read-only (in practice, if not fully enforced)

There's another way you could go about it:

Copy over all the old, unchanging partitions separately. Once these are all loaded in the new system, hopefully you only have a handful left to copy over. So when you come to the live cut-over, there's a much smaller data set to transfer.

This allows you to get the bulk of the data transfer done while the current system is still live. Giving you plenty of time to re-run the imports should anything go wrong. And you'll need a smaller outage window to complete the migration.
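As a rough illustration of the partition-at-a-time idea (table, partition, and file names below are invented), exporting and importing a single historic partition could look like:

    # Source: export one old partition of the big table on its own
    expdp system@SRCDB tables=BIG_SCHEMA.SALES_FACT:SALES_2017 \
          directory=MIG_DIR dumpfile=sales_2017.dmp logfile=sales_2017_exp.log

    # Target: append it into the already-created partitioned table
    impdp system@NEWDB tables=BIG_SCHEMA.SALES_FACT:SALES_2017 \
          directory=MIG_DIR dumpfile=sales_2017.dmp \
          table_exists_action=append logfile=sales_2017_imp.log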

Good luck!

====

Addenda:

Check out cross-platform tablespace migration as well. It sounds like a perfect match for your needs.

MOS Note: 2471245.1 – V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
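The basic cross-platform transportable flow (without the incremental-backup piece the note adds to shrink the read-only window) looks roughly like the sketch below. Tablespace, platform, and path names are placeholders, and the platform string should be whatever v$transportable_platform reports for the source; getting a usable copy of the datafiles off the raw devices is what the RMAN backups in the note take care of.

    -- On the source: make the tablespace read-only
    ALTER TABLESPACE hist_data READ ONLY;

    # On the source: export the tablespace metadata
    expdp system@SRCDB transport_tablespaces=HIST_DATA transport_full_check=y \
          directory=MIG_DIR dumpfile=hist_data_meta.dmp logfile=hist_data_exp.log

    # On the target (RMAN): convert the staged datafile copy from the big-endian platform
    CONVERT DATAFILE '/stage/hist_data01.dbf'
      FROM PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/u01/oradata/NEWDB/hist_data01.dbf';

    # On the target: plug the tablespace in
    impdp system@NEWDB transport_datafiles=/u01/oradata/NEWDB/hist_data01.dbf \
          directory=MIG_DIR dumpfile=hist_data_meta.dmp logfile=hist_data_imp.log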


Comments

Options to accelerate datapump

David D., March 22, 2019 - 3:22 pm UTC


Hello,

Your database is very big. There are many options to accelerate Datapump, but maybe they are not applicable in your case:

EXPORT
- Parallelism
- ...

IMPORT
- Parallelism
- NOLOGGING
- Integrity constraints disabled
- Triggers disabled
- ...
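For example, an import combining a few of these could look like the sketch below (schema and file names are placeholders, and TRANSFORM=DISABLE_ARCHIVE_LOGGING only exists in 12c and later, so it would not apply to an 11.2.0.4 target):

    impdp system@NEWDB schemas=BIG_SCHEMA directory=MIG_DIR \
          dumpfile=big_schema_%U.dmp parallel=8 \
          exclude=constraint exclude=trigger \
          transform=disable_archive_logging:y \
          logfile=big_schema_imp.log

Constraints and triggers excluded this way have to be created and enabled afterwards, once the data is in.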


If you have to do migrations regularly, this is a very, very good book: "Oracle Database Upgrade, Migration & Transformation Tips & Techniques" by Edward Whalen and Jim Czuprynski.

https://www.amazon.com/Database-Upgrade-Migration-Transformation-Techniques-ebook/dp/B00ZSZBHAE/ref=sr_1_3?keywords=oracle+migration&qid=1553268004&s=gateway&sr=8-3


David D.

It's big to you, but is it really BIG?

Jeff W., September 11, 2020 - 5:05 pm UTC

Size is obviously relative. Having managed databases from MB to multi-GB, with our ERP at 10 TB, DW at 39 TB and IOT DW at 75+ TB currently, I don't even consider these to be "large". From readings and discussions with others, a DW in the Petabyte range, now that to me is really LARGE.

We are now in a similar situation. We are exiting a datacenter over a 1Gb VPN tunnel, the fastest the source would give us. We have Exadata and ZDLRA on both sides. ZDLRA replication was too slow. What worked for us is a direct backup from the source Exadata to the target ZDLRA using 16 channels over the VPN tunnel. We then used the RMAN virtual level 0 to build a Data Guard standby database and shipped archive log (AL) files from source to target over the tunnel. In some cases AL shipping was too slow - we generate over 1 TB of AL per day on average for each of these databases - so we had to take the AL backups to the target ZDLRA and then apply them manually.
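For what it's worth, the generic RMAN shape of that approach is roughly the command-file sketch below; the ZDLRA channel PARMS (SBT library and wallet settings) are site-specific, so they are deliberately left elided rather than guessed at:

    # ZDLRA channels: 16-way parallelism over the tunnel
    CONFIGURE CHANNEL DEVICE TYPE sbt PARMS '<ZDLRA library and wallet settings>';
    CONFIGURE DEVICE TYPE sbt PARALLELISM 16;

    # One level 0 seeds the appliance, then incrementals keep it current
    BACKUP INCREMENTAL LEVEL 0 DATABASE;
    BACKUP INCREMENTAL LEVEL 1 DATABASE;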

This worked for our 10 TB ERP, several smaller 3-4 TB databases and also our 39 TB DW.

We now have to move our 75+ TB IOT DW. This is the one that has us concerned, and we will look more closely at the read-only option for older partitions and doing it in chunks using transportable tablespaces.

Thanks.

Jeff

Connor McDonald
September 12, 2020 - 3:59 am UTC

Nice input.

In particular, it shows that each case should be treated on its own merits, and appropriate solutions devised *and tested*.

Large DB migrate to Cloud

Shuja, July 13, 2021 - 4:28 am UTC

Hi,
What if we're talking about moving, say, a 20TB database from on-premises to the OCI cloud?
Chris Saxon
July 13, 2021 - 10:23 am UTC

There are many options depending on which release of Oracle Database you're using, whether you're using CDB vs non-CDB, and how much downtime you can accept.

The OCI docs discuss the options in detail:

https://docs.oracle.com/en-us/iaas/Content/Database/Tasks/migrating.htm

More to Explore

Data Pump

All of the database utilities including Data Pump are explained in the Utilities guide.