Can you please comment on this: http://www.orafaq.com/wiki/Import_Export_FAQ
How can one improve Import/ Export performance?
Set the BUFFER parameter to a high value (e.g. 2Mb -- entered as an integer "2000000")
Set the RECORDLENGTH parameter to a high value (e.g. 64Kb -- entered as an integer "64000")
Use DIRECT=yes (direct mode export)
Stop unnecessary applications to free up resources for your job.
If you run multiple export sessions, ensure they write to different physical disks.
DO NOT export to an NFS mounted filesystem. It will take forever.
Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
Place the file to be imported on a separate physical disk from the oracle data files
Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
Set the LOG_BUFFER to a big value and restart oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
Use COMMIT=N in the import parameter file if you can afford it
Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics
Remember to run the indexfile previously created
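Taken together, the FAQ's tips translate into parameter files along these lines. This is only a sketch: the file names, schema, and sizes are made-up values, and (as the answer below notes) several of these settings deserve scepticism before you copy them.

```
# export.par -- run as: exp userid=scott/tiger parfile=export.par
# (scott/tiger and the file names are illustrative, not recommendations)
FILE=full_export.dmp
BUFFER=2000000
RECORDLENGTH=65535
DIRECT=y
LOG=export.log

# import_ixf.par -- first pass: write index DDL only, load no data
# imp userid=scott/tiger parfile=import_ixf.par
FILE=full_export.dmp
FULL=y
INDEXFILE=create_indexes.sql

# import.par -- second pass: load the data, skip index creation
# imp userid=scott/tiger parfile=import.par
FILE=full_export.dmp
FULL=y
INDEXES=n
COMMIT=n
STATISTICS=NONE
LOG=import.log
```

After the data load you would edit create_indexes.sql (supplying the passwords on the CONNECT statements, as the tip above says) and run it to build the indexes.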
and we said...
The fastest way to run export/import is.....
Not to run them, see if you can avoid it. If you are doing export/import for an upgrade - STOP, do not do that, just upgrade. If you are using export/import for backup - STOP, you cannot use them for backup, they do not create backups - only backups do.
So, if you are upgrading or backing up, you can make them run infinitely fast - by not running them at all.
If you are using them for some other reason, STOP for a minute and ask yourself - do I really need to? Can I use a tablespace transport instead (just copy datafiles, do not unload/reload)? Can I use a database link? Do I really need a copy of this data?
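The tablespace-transport alternative mentioned above looks roughly like this. A sketch only, assuming a tablespace I'll call USERS_DATA and 9i-era exp/imp syntax; check the platform, character-set, and self-containment restrictions before relying on it:

```sql
-- On the source database: make the tablespace read-only, then export
-- only its metadata -- no row data is unloaded at all.
ALTER TABLESPACE users_data READ ONLY;
-- $ exp userid='system/manager' transport_tablespace=y \
--       tablespaces=users_data file=users_data_meta.dmp

-- Copy the tablespace's datafiles to the target with OS tools, then
-- plug the metadata in on the target database:
-- $ imp userid='system/manager' transport_tablespace=y \
--       datafiles='/u02/oradata/users_data01.dbf' file=users_data_meta.dmp

-- When both sides are done, make the tablespace writable again:
ALTER TABLESPACE users_data READ WRITE;
```

The point is that the datafiles move as-is; you never pay the cost of unloading and reloading the rows.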
And if you do, then, the export recommendations look ok (but 64Kb is not 64000 of course, it is 65,536). The NFS statement is not necessarily true - it might have been in the early 1990's, but it is not in the 21st century in general.
The import suggestions are less so. "Place the file to be imported on a separate physical disk..." - that probably doesn't matter - since the writes to the table are done in the buffer cache and DBWR writes in the background. The indexes would do direct IO, but while an index is being created - you aren't reading the DMP file anymore, so no contention would be there. Just make sure you don't have 'hot' disks - we all use striping pretty much so in the year 2010, it is unlikely you have physical disks to deal with anyway.
Increasing the db_cache_size might make it:
o run faster
o run slower
o not affect it at all
dbwr does the writes in the background, as long as it can keep up - with your existing cache - you are just fine. Increasing the cache size might actually slow things down as you can cache more - increasing the amount of work that DBWR would be called upon to do at some point. So you might "go fast inserting - then stop (wait for dbwr) - go fast again - stop go fast - stop and so on" with a large cache. The smaller cache would keep dbwr busy constantly so you get a "smoother" load.
Setting the log buffer larger helps only if it was too small to begin with - it would probably not do much for you unless it was tiny.
Turning off archivelog mode is dangerous if it was on. It breaks the redo chain and invalidates your existing backups. You need to think long and hard.
We don't use rollback segments in the 21st century, so ignore that. (And the undo for an insert is just "delete+rowid" - it would be nowhere near the size of the table.)
Commit=n is OK if you UNDERSTAND what it means - not merely if you can "afford it".
Using statistics = none will make the import faster- but only because it does less work. If you need the statistics gathered, you'll be doing that anyway.
To recap: the fastest way to export/import is to NOT DO IT AT ALL. That is my preferred route. If you really have to do it, and you have a lot to do, consider implementing "do it yourself parallelism". Export really big tables by themselves (as a single export dmp file). Export schemas by themselves (a schema in a file) and export them simultaneously (as many at a time as your machine is capable of doing). Then, import them simultaneously...
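The "do it yourself parallelism" described above can be as simple as backgrounding several exp sessions at once. A sketch; the schema and table names are invented, the credentials are placeholders, and the degree of parallelism should match what the machine can actually sustain:

```
# Export each very big table into its own dump file, and each
# remaining schema into its own file, all running concurrently.
exp userid=system/manager tables=app.big_orders file=big_orders.dmp log=big_orders.log &
exp userid=system/manager tables=app.big_events file=big_events.dmp log=big_events.log &
exp userid=system/manager owner=hr      file=hr.dmp      log=hr.log &
exp userid=system/manager owner=finance file=finance.dmp log=finance.log &
wait   # block until every background export has finished

# Import the files the same way -- several imp sessions side by side.
imp userid=system/manager file=hr.dmp      full=y log=imp_hr.log &
imp userid=system/manager file=finance.dmp full=y log=imp_finance.log &
wait
```

Each session is a separate process, so the operating system spreads them over the available CPUs and disks - the parallelism is yours to manage, not Oracle's.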