This question is more of a solicitation for advice than an actual question.
I've been tasked with migrating a large warehouse database from one set of hardware in one datacenter to a new set of hardware in a different datacenter. Excluding temp, undo, system, sysaux, and index-only tablespaces, the instance is about 6.5TB in size.
Some related topics/constraints/parameters:
-- The network connectivity between the datacenters is limited, which rules out several approaches. We can't use the network_link feature of datapump to initiate the migration with an impdp on the receiving side; it would simply be too slow. The lack of network bandwidth also rules out the luxury of using GoldenGate.
-- The lack of network bandwidth has led us down the path of using a removable storage device and physically shipping the data from one datacenter to the other. The device we have chosen has the capacity to handle the entire database instance. There is not enough SAN storage capacity available to the source system to create export files locally - the removable storage gets us past that issue too.
-- The host hardware (specifically the chipset and endianness) in the two datacenters differs. The host in the source datacenter is big-endian, while the host in the new datacenter is little-endian. This points us to datapump as an overall solution, since datapump will handle the endian conversion.
-- One schema in the database accounts for roughly 50% of the data to be migrated, and one partitioned table in that schema is over 1TB in size.
-- The tablespaces in the source system are not transportable tablespaces. Plus, in the source system, disk is presented to the database as raw partitions, not in filesystems, so copying files isn't an option anyway.
-- There are 33 tablespaces in the source instance which contain data to be migrated (index-only tablespaces are omitted because for indexes, only the metadata is exported).
So, that gives you a high-level overview of the migration effort we're facing. Based on the information provided above, I would very much appreciate your thoughts and input on how best to approach performing the migration using datapump exports and imports.
So, I guess my question(s) relate to your confidence in datapump for handling very large export files. My initial thought is that performing a single, full database export (rather than breaking the migration into pieces, perhaps by tablespace) might be a big ask. However, doing so would remove many hurdles and complexities, such as dealing with inter-tablespace dependencies and having to import user accounts prior to importing the data tablespaces (and then having to deal with all of the other factors: privileges, roles, materialized views, triggers, sequences, etc.).
The thought of performing a full database export makes me a bit nervous, in that it amounts to putting all your eggs in one basket. If something were to go wrong during the export (or even worse, during the import), starting over will have wasted a lot of precious time.
So, first and foremost: do you feel that, given enough space, datapump can export the entire database in a single full database export?
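For reference, what I have in mind for the single-export option is something like the parameter file below. This is only a sketch: the directory object name (migr_dir, which would point at the removable device's mount point), the file size cap, and the parallel degree are all assumptions to be tuned for our hardware.

```text
# full_exp.par -- sketch of a single full-database export
# migr_dir is a hypothetical directory object created on the
# removable storage device's mount point
FULL=Y
DIRECTORY=migr_dir
DUMPFILE=full_%U.dmp     # %U spreads the dump across numbered files
FILESIZE=100G            # caps each piece so no single file is unwieldy
PARALLEL=8               # to be matched to source CPU/IO capacity
LOGFILE=full_exp.log
```

Run with something like: expdp system PARFILE=full_exp.par. The FILESIZE/%U combination at least means "one big export" doesn't have to mean "one enormous file."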
Secondly, if you have misgivings about utilizing a single full database export, how would you go about breaking the migration into pieces? I see two options here: 1) schema-by-schema (or multiple groups of schemas), and 2) tablespace-by-tablespace.
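For the schema-by-schema option, each piece would be driven by a parameter file along these lines. Again a sketch only; the schema name WAREHOUSE_OWNER and the sizing values are placeholders.

```text
# schema_exp.par -- sketch of one per-schema (or schema-group) export
DIRECTORY=migr_dir
SCHEMAS=WAREHOUSE_OWNER    # hypothetical schema name
DUMPFILE=warehouse_%U.dmp
FILESIZE=100G
PARALLEL=4
LOGFILE=warehouse_exp.log
```

Users, roles, and grants could be carried over ahead of the data with a separate metadata-only pass (CONTENT=METADATA_ONLY on a full export), though that is exactly the kind of extra moving part I was hoping to avoid.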
Recall though that a single schema in the database instance accounts for roughly half of the data, so its export file would be very big in its own right. Also note that several of the tablespaces exceed 1TB in size (in fact, one of the 1TB+ tablespaces contains only the partitions for a single table).
Any advice and/or thoughts would be greatly appreciated.
and we said...
It looks like you've investigated your options for migrating the data well.
I would opt to split the migration into pieces if possible.
A single migration is likely to require an extended period of downtime for the application. Doing it piecemeal may mean more outages, but each should be smaller and more contained.
It'll also make things much less painful should you find you need to revert to your current server for any reason. If, after a day or two on the new system, you want to move all 6.5 TB back... that's going to hurt!
Assuming there are no cross-schema dependencies, schema-by-schema is a good approach.
Starting with the smallest/least impact schemas should enable you to verify everything's running correctly on the new system quickly.
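To pick that ordering, you could size up each schema ahead of time. A quick query against DBA_SEGMENTS (run as a DBA; the numbers are approximate allocated space, not export size) along these lines would do:

```sql
-- Approximate allocated space per schema, smallest first
SELECT owner,
       ROUND ( SUM ( bytes ) / 1024 / 1024 / 1024, 1 ) AS gb
FROM   dba_segments
GROUP  BY owner
ORDER  BY gb;
```

You'd want to exclude the Oracle-maintained schemas from the migration list, of course.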
You've also mentioned you've partitioned your tables. Assuming that:
-- They're date partitioned
-- Old partitions are read-only (in practice, if not fully enforced)
There's another way you could go about it:
Copy over all the old, unchanging partitions separately. Once these are all loaded in the new system, hopefully you only have a handful left to copy over. So when you come to the live cut-over, there's a much smaller data set to transfer.
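Data Pump can export at the partition level using TABLES=owner.table:partition_name, so the closed partitions could be shipped in batches. A sketch, with hypothetical table and partition names (and note that, depending on your database version, you may need to keep each job to partitions of a single table):

```text
# old_parts.par -- sketch of exporting closed (effectively read-only) partitions
DIRECTORY=migr_dir
TABLES=WAREHOUSE_OWNER.BIG_FACT:P2010,WAREHOUSE_OWNER.BIG_FACT:P2011
DUMPFILE=big_fact_old_%U.dmp
FILESIZE=100G
LOGFILE=big_fact_old_exp.log
```

On the target side, once the table exists, subsequent batches can be loaded with impdp using TABLE_EXISTS_ACTION=APPEND.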
This allows you to get the bulk of the data transfer done while the current system is still live, giving you plenty of time to re-run the imports should anything go wrong. You'll also need a smaller outage window to complete the migration.
Check out cross-platform tablespace migration as well. It sounds like a perfect match for your needs:
MOS Note: 2471245.1 – V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup