Finally, I was able to convince my clients to switch from the old "exp" utility to the newer Data Pump "expdp" utility for exporting a dump file that we use to move data into a test DB.
We used to export the main schema of our application using the following script:
D:\Oracle\11g\Server\BIN\exp user/pwd@SID owner=schema_name buffer=65536 file=F:\DBDumps\exp_schema.DMP log=F:\DBDumps\exp_schema.log
Now I am using the following script with the Data Pump utility:
D:\Oracle\11g\Server\BIN\expdp user/pwd@SID schemas=schema_name directory=EXPDP_DB_DMPS dumpfile=expdp_schema.dmp logfile=expdp_schema.log reuse_dumpfiles=y
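Side note: if it matters, the same options can also go into a parameter file via PARFILE, which I may switch to since these command lines are getting long; the file name and path below are just placeholders:

schemas=schema_name
directory=EXPDP_DB_DMPS
dumpfile=expdp_schema.dmp
logfile=expdp_schema.log
reuse_dumpfiles=y

D:\Oracle\11g\Server\BIN\expdp user/pwd@SID parfile=F:\DBDumps\exp_schema.par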
It took over 3 hours in the estimate phase before it started exporting the data, and then just 1 hour to actually export around 125 GB of data.
By default it used BLOCKS for estimating the export size.
We run a gather-statistics script every week, so I also tried ESTIMATE=STATISTICS for the export size, using the following script:
D:\Oracle\11g\Server\BIN\expdp user/pwd@SID schemas=schema_name directory=EXPDP_DB_DMPS dumpfile=expdp_schema.dmp logfile=expdp_schema.log estimate=STATISTICS reuse_dumpfiles=y
The result was not much different; the estimate phase still took just as long.
Then I used the following script:
D:\Oracle\11g\Server\BIN\expdp user/pwd@SID schemas=schema_name directory=EXPDP_DB_DMPS dumpfile=expdp_schema.dmp logfile=expdp_schema.log reuse_dumpfiles=y exclude=STATISTICS
I was hoping it would skip the estimate step, but it did not.
The actual export is not taking long; it is the estimate phase that takes around 3 hours.
Can we speed up the estimate phase, or can we skip it altogether, since we already know the space requirements?
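For reference, the documentation mentions an ESTIMATE_ONLY parameter; if I read it correctly, something like the following would run just the estimate phase without writing a dump file (DUMPFILE is apparently not allowed with it), which would at least let me time the slow part in isolation:

D:\Oracle\11g\Server\BIN\expdp user/pwd@SID schemas=schema_name directory=EXPDP_DB_DMPS logfile=expdp_estimate.log estimate_only=y

But that is the opposite of what I need; I want the export without the estimate.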
I have tried searching on Google and also on Ask Tom, but have not seen anything regarding this.
I am also thinking of using the PARALLEL parameter, but being new to expdp, I am not sure whether it will help.
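From what I have read, if PARALLEL is set above 1 then the DUMPFILE value should include the %U substitution variable so each worker can write to its own file (and parallelism beyond 1 apparently needs Enterprise Edition), so I would try something like this, with parallel=4 being just a guess on my part:

D:\Oracle\11g\Server\BIN\expdp user/pwd@SID schemas=schema_name directory=EXPDP_DB_DMPS dumpfile=expdp_schema_%U.dmp logfile=expdp_schema.log parallel=4 reuse_dumpfiles=y

Would that speed up the estimate phase as well, or only the actual data unload?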