Hello,
Hope you are doing well.
We are working on a database migration exercise where our source Oracle database is on 11g (11.2.0) and the target Oracle database is 12c (12.1.0).
We are using expdp/impdp to export and import data between the two databases. Some of the tables contain LOB data and are over 450 GB in size.
An expdp on one of these tables ran for more than 12 hours and finally failed with a "snapshot too old" (ORA-01555) error.
We would like to know: is there an efficient way to migrate this LOB data from the source to the target database?
Our application cannot have more than 5 hours of downtime, so we want to optimize the data migration as much as we can.
Below are some additional details I gathered through test runs on lower environments, where table sizes are smaller, and on a performance environment, where table sizes mimic production.
1. Source database version: 11g (11.2.0.2).
2. Target database version: 12c (12.1.0.2).
3. The character sets on the two databases are different:
a. Source database charset - WE8ISO8859P1
b. Target database charset - AL32UTF8
4. Since the databases have different character sets, we cannot use transportable tablespaces; both databases must be on the same character set to use them (https://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#ADMIN11396).
5. On a lower environment where the table sizes are smaller, exporting a table with 18 GB of LOB data took 31 minutes with COMPRESSION=ALL set in the par file.
6. The same table, exported from the same database without the compression parameter, took 3 minutes.
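For reference, the compression test run used a par file along these lines (the directory, dump file, schema, and table names below are illustrative, not our real ones):

```shell
# expdp with Data Pump compression enabled (illustrative names)
expdp system/password@SRCDB \
  TABLES=APP_OWNER.LOB_TABLE \
  DIRECTORY=DP_DIR \
  DUMPFILE=lob_table_comp.dmp \
  LOGFILE=lob_table_comp.log \
  COMPRESSION=ALL
```

The non-compressed run was identical except that the COMPRESSION line was removed. (As I understand it, COMPRESSION=ALL on 11g also requires the Advanced Compression option to be licensed.)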
7. The tables holding LOB data are not partitioned. They do contain historic data that can be pre-migrated before the actual go-live.
8. The box on which the old database resides has 125 GB of RAM; however, the maximum SGA size is set to 15 GB.
Queries:
1. Does the COMPRESSION parameter actually make the expdp process slower for LOB data?
2. We are thinking of exporting and importing (expdp/impdp) the historic data in phases beforehand, using the QUERY parameter in the par file, since that data does not change. Is this a good option? The table has close to 75 million rows, so we don't want the QUERY parameter itself to add to the expdp slowness. Any thoughts on other ways of picking up the data in chunks beforehand?
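The phased approach we have in mind would look roughly like this, splitting on a date column (the column name, cutoff date, and all object names are illustrative):

```shell
# Phase 1: pre-migrate the unchanging historic rows well before go-live
expdp system/password@SRCDB \
  TABLES=APP_OWNER.LOB_TABLE \
  DIRECTORY=DP_DIR \
  DUMPFILE=lob_hist_%U.dmp \
  LOGFILE=lob_hist.log \
  FILESIZE=10G \
  QUERY=APP_OWNER.LOB_TABLE:\"WHERE created_date < DATE '2017-01-01'\"

# Phase 2: during the downtime window, export only the recent delta
expdp system/password@SRCDB \
  TABLES=APP_OWNER.LOB_TABLE \
  DIRECTORY=DP_DIR \
  DUMPFILE=lob_delta_%U.dmp \
  LOGFILE=lob_delta.log \
  FILESIZE=10G \
  QUERY=APP_OWNER.LOB_TABLE:\"WHERE created_date >= DATE '2017-01-01'\"
```

Our worry is whether the WHERE clause forces a full scan of all 75 million rows on each run and slows the export further.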
3. The two databases are not in the same data centre and are geographically far apart. In such a case, would an expdp/impdp over a network link (NETWORK_LINK) really benefit us? (We haven't tried that approach yet.)
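My understanding is that a network-link import runs on the target and pulls rows directly over a database link, skipping the dump file and its copy across the WAN entirely. Something like the following is what we would test (the database link name and other names are illustrative):

```shell
# Run on the 12c target; data is pulled over a DB link, no dump file is written.
# DIRECTORY is still needed for the log file.
impdp system/password@TGTDB \
  TABLES=APP_OWNER.LOB_TABLE \
  NETWORK_LINK=SRC_11G_LINK \
  DIRECTORY=DP_DIR \
  LOGFILE=lob_netlink.log
```

Whether this beats a dump-file transfer presumably depends on the bandwidth and latency between the two sites.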
4. Apart from transportable tablespaces, is there any other feature we can use to migrate this data efficiently and in less time?
5. Does PARALLEL work for expdp with LOB data? I have read that even if we specify a parallel degree greater than 1, Oracle uses only one process to export a table containing LOB data. Please correct me if I am wrong here. If parallel does not work, what prevents Oracle from running it in parallel?
6. Is there any other way to parallelize the expdp of LOB data?
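One workaround we are evaluating ourselves: run several concurrent expdp jobs, each covering a disjoint primary-key range of the table, since Data Pump's own PARALLEL reportedly does not split a single non-partitioned LOB table. A rough sketch (the ID ranges and all names are illustrative):

```shell
# Manual parallelism: disjoint ID ranges exported as separate concurrent jobs
expdp system/password@SRCDB TABLES=APP_OWNER.LOB_TABLE DIRECTORY=DP_DIR \
  DUMPFILE=lob_p1_%U.dmp LOGFILE=lob_p1.log \
  QUERY=APP_OWNER.LOB_TABLE:\"WHERE id <  25000000\" &

expdp system/password@SRCDB TABLES=APP_OWNER.LOB_TABLE DIRECTORY=DP_DIR \
  DUMPFILE=lob_p2_%U.dmp LOGFILE=lob_p2.log \
  QUERY=APP_OWNER.LOB_TABLE:\"WHERE id >= 25000000 AND id < 50000000\" &

expdp system/password@SRCDB TABLES=APP_OWNER.LOB_TABLE DIRECTORY=DP_DIR \
  DUMPFILE=lob_p3_%U.dmp LOGFILE=lob_p3.log \
  QUERY=APP_OWNER.LOB_TABLE:\"WHERE id >= 50000000\" &

wait   # block until all three background jobs finish
```

Is this a sensible pattern, or does it just trade Data Pump overhead for undo/read-consistency pressure (the ORA-01555 we already hit)?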
7. Would increasing the SGA size help the expdp process run faster?