
Question and Answer

Tom Kyte

Thanks for the question.

Asked: March 02, 2010 - 5:16 pm UTC

Last updated: March 09, 2010 - 11:03 am UTC

Version: 9.2.0.8


You Asked

Can you please comment on this:


http://www.orafaq.com/wiki/Import_Export_FAQ

How can one improve Import/Export performance?
EXPORT:

Set the BUFFER parameter to a high value (e.g. 2Mb -- entered as an integer "2000000")
Set the RECORDLENGTH parameter to a high value (e.g. 64Kb -- entered as an integer "64000")
Use DIRECT=yes (direct mode export)
Stop unnecessary applications to free-up resources for your job.
If you run multiple export sessions, ensure they write to different physical disks.
DO NOT export to an NFS mounted filesystem. It will take forever.
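
For illustration only, here is a sketch of exp command lines using those parameters (the username/password and file names are hypothetical; note that BUFFER applies to conventional path exports, while RECORDLENGTH applies to direct path):

  # conventional path: BUFFER sizes the row-fetch array
  exp system/manager FULL=y FILE=full.dmp LOG=full.log BUFFER=2000000

  # direct path: RECORDLENGTH sizes the export I/O buffer
  exp system/manager FULL=y FILE=full.dmp LOG=full.log DIRECT=y RECORDLENGTH=65535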

IMPORT:

Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
Place the file to be imported on a separate physical disk from the oracle data files
Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
Set the LOG_BUFFER to a big value and restart oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
Use COMMIT=N in the import parameter file if you can afford it
Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics
Remember to run the indexfile previously created
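
Purely as a sketch, the two-pass import described above might look like this (the schema, password, and file names are hypothetical):

  # pass 1: write the index DDL to a script; no data is imported
  imp system/manager FULL=y FILE=full.dmp INDEXFILE=indexes.sql

  # pass 2: import the data, skipping index creation and statistics
  imp system/manager FULL=y FILE=full.dmp INDEXES=n COMMIT=n STATISTICS=none

  # then edit indexes.sql (supply the CONNECT passwords) and run it
  # in SQL*Plus to build the indexes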

and Tom said...

The fastest way to run export/import is.....

Not to run them, see if you can avoid it. If you are doing export/import for an upgrade - STOP, do not do that, just upgrade. If you are using export/import for backup - STOP, you cannot use them for backup, they do not create backups - only backups do.

So, if you are upgrading or backing up, you can make them run infinitely fast - by not running them at all.

If you are using them for some other reason, STOP for a minute and ask yourself - do I really need to? Can I use a tablespace transport instead (just copy datafiles, do not unload/reload)? Can I use a database link? Do I really need a copy of this data?
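
As a rough sketch of the tablespace transport alternative (9i-era syntax; the tablespace, file, and path names are hypothetical, and the usual transport restrictions apply):

  SQL> alter tablespace users read only;

  $ exp transport_tablespace=y tablespaces=users file=users_meta.dmp
    (connect as SYSDBA when prompted)
  $ # copy the datafile(s) and users_meta.dmp to the target system
  $ imp transport_tablespace=y file=users_meta.dmp datafiles='/u01/oradata/users01.dbf'
    (again connecting as SYSDBA)

  SQL> alter tablespace users read write;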


And if you do, then the export recommendations look ok (but 64Kb is not 64000 of course, it is 65,536). The NFS statement is not necessarily true - it might have been in the early 1990's, but not in the 22nd century in general.

The import suggestions are less so. "Place the file to be imported on a separate physical disk..." - that probably doesn't matter, since the writes to the table are done in the buffer cache and DBWR writes in the background. The indexes would do direct IO, but while an index is being created you aren't reading the DMP file anymore, so there would be no contention. Just make sure you don't have 'hot' disks - we all pretty much use striping in the year 2010, so it is unlikely you have individual physical disks to deal with anyway.

Increasing the db_cache_size might make it
o run faster
o run slower
o not affect it at all.

dbwr does the writes in the background, as long as it can keep up - with your existing cache - you are just fine. Increasing the cache size might actually slow things down as you can cache more - increasing the amount of work that DBWR would be called upon to do at some point. So you might "go fast inserting - then stop (wait for dbwr) - go fast again - stop go fast - stop and so on" with a large cache. The smaller cache would keep dbwr busy constantly so you get a "smoother" load.


Setting log buffer larger - only if it was too small to begin with - again, it would probably not do much for you unless it was tiny to begin with.

turning off archivelog mode is dangerous if it was on. it breaks the redo chain and invalidates your existing backups. You need to think long and hard.


We don't use rollback segments in the 22nd century, so ignore that. (And the undo for an imported row is just "delete+rowid" - the rollback generated would not be anywhere near the size of the table.)


Commit=n is OK if you UNDERSTAND what it means - not merely if you can "afford it".


Using statistics = none will make the import faster - but only because it does less work. If you need the statistics gathered, you'll be doing that work anyway.




To recap: the fastest way to export/import is to NOT DO IT AT ALL. That is my preferred route. If you really have to do it, and you have a lot to do, consider implementing "do it yourself parallelism". Export really big tables by themselves (as a single export dmp file). Export schemas by themselves (a schema in a file) and export them simultaneously (as many at a time as your machine is capable of doing). Then, import them simultaneously...
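
A minimal sketch of that "do it yourself parallelism", assuming a Unix shell and hypothetical schema/table names:

  # one dmp file per schema, one per very big table - all running at once
  exp system/manager OWNER=hr FILE=hr.dmp LOG=hr.log &
  exp system/manager OWNER=sales FILE=sales.dmp LOG=sales.log &
  exp system/manager TABLES=sales.big_orders FILE=big_orders.dmp LOG=big_orders.log &
  wait   # block until every background export has finished

The imports would then be kicked off the same way - as many concurrent imp sessions as the machine can comfortably handle.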


Comments

Alexander, March 03, 2010 - 9:56 am UTC

"turning off archivelog mode is dangerous if it was on. it breaks the redo chain and obviates your existing backups. You need to think long and hard. "

Can you clarify something for me: does turning off archiving actually speed things up at all? I was under the impression that since Oracle writes the changes to the online redo logs regardless, and the archiver process writes the archive logs in the background, there's very little performance to be gained by switching to noarchivelog mode.
Tom Kyte
March 03, 2010 - 11:00 am UTC

In a properly configured system, archiving should add (most of the time) approximately 0% overhead.

That is because it happens in the background - assuming you have the CPU to spare and the IO capacity (you want to make sure that archiving does not contend with the rest of the system), it shouldn't affect MOST of your operations.

HOWEVER - it does cause redo to be generated for some operations that would not generate redo otherwise.

For example, create table as select - in NOARCHIVELOG, there is no redo generated for the table. In ARCHIVELOG - there would be (this does not affect import/export, however; they do not do create table as select).

For example, create index - in NOARCHIVELOG, there is no redo generated for the index. In ARCHIVELOG - there would be. This is why you MIGHT (stress MIGHT) want to use the indexfile option: then you could use NOLOGGING for the creates (and make sure you are either going to back up the files right after, OR be willing to recreate the indexes in the event of media failure).
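
To make that concrete (a sketch only, with a hypothetical index): after generating the script with the INDEXFILE option, you would edit each statement before running it:

  -- as generated by imp:
  CREATE INDEX emp_idx ON emp (ename) TABLESPACE users;

  -- edited to skip redo generation (back up afterwards, or be willing
  -- to rebuild the index after a media failure):
  CREATE INDEX emp_idx ON emp (ename) TABLESPACE users NOLOGGING;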


Not able to execute the below Stored procedure

RAJ, March 07, 2010 - 12:25 pm UTC

I am not able to execute the stored procedure below; please provide the solution:

  1  create or replace procedure
  2  p1(i_ename in varchar2,
  3     o_sal out number)
  4  as
  5  l_num number(10);
  6  o_num number(10);
  7  begin
  8  select sal
  9  into l_num
 10  from emp
 11  where ename = i_ename;
 12  if l_num > 1000 then
 13  o_num := 0;
 14  else
 15  o_num := 1;
 16  end if;
 17  o_sal := l_num;
 18  dbms_output.put_line(o_sal);
 19* end p1;
SQL> /
create or replace procedure
*
ERROR at line 1:
ORA-01031: insufficient privileges

Tom Kyte
March 09, 2010 - 11:03 am UTC

no, you are not allowed to CREATE that procedure; if you could create it, you could execute it.

get yourself granted CREATE PROCEDURE.
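
For example, a suitably privileged user could do (the username here is hypothetical):

  SQL> grant create procedure to raj;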

A reader, March 08, 2010 - 5:16 am UTC

I always knew that Tom was ahead of his time, but I didn't know he was a whole century ahead. ;-)
Tom Kyte
March 09, 2010 - 11:03 am UTC

:) funny, i didn't catch that!

legacy export conventional path vs direct path

Chakra, June 14, 2012 - 2:10 am UTC

I have a few points of confusion regarding legacy export in Oracle, mentioned below.
Please see the Metalink article ID 76329.1, in which a diagram is given. I have the following doubts about it, as well as some of my own.

1) How is a table exported from the database?
2) What layers (OS filesystem memory, Oracle SGA, and so on) does this export touch?
3) In this article [ID 76329.1], I am not clear about the terminology. Could someone please explain:

a) row-major order conversion
b) column-major order conversion
c) data layer
d) export output buffer

In the diagram I am not sure whether they mean the SGA's buffer cache or the OS filesystem buffer cache.
In this picture, under "buffer cache management" - are they pointing to the OS filesystem buffer cache?

Could you please read this article and explain it more simply, so that people like me can understand the concepts? (An explanation of the internal workings would be great.)

Conventional & Direct Path Export: Oracle Release 7.3 [ID 76329.1]

(If the filesystem is a block device, then data read from the filesystem is kept in the filesystem buffer cache in the system's RAM, and is then taken into the database buffer cache for database use - could you please relate the article to this as well?)

Regards,
Chakra



chakra, July 18, 2012 - 11:18 pm UTC

Any update on above one Tom.