Import with CLOB column is very slow

Question and Answer
Tom Kyte

Thanks for the question.

Asked: June 16, 2005 - 2:45 pm UTC

Last updated: June 16, 2005 - 3:55 pm UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

Hi
I am importing a table with a CLOB column and it is very slow. I tried a few of the suggestions given, like setting the table to NOLOGGING and setting imp parameters like buffer, commit=y etc.
It is still taking 1 1/2 hrs to import 37K records.
Is there any way I can improve the performance? Are there any init.ora parameters I can set to make it faster?
These are the import parameters
buffer=90000000
FILE=../data/kdd_break1.dmp
IGNORE=y
GRANTS=n
INDEXES=n
ROWS=y
LOG=../logs/kdd_break_imp.log
COMMIT=y
CONSTRAINTS=n
full=y

Please help.


Thanks
Manish Mendiratta

and Tom said...

Setting the table as nologging will "do nothing", nothing whatsoever, for an import.

commit=y is the way to make imp run SLOWER.
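For comparison, here is the same parameter file with commit=y removed (a sketch based on the parameters posted in the question) so imp commits once at the end of the table instead of after every buffer-full of rows; the rollback segments must be sized to hold the whole load:

```
buffer=90000000
file=../data/kdd_break1.dmp
ignore=y
grants=n
indexes=n
rows=y
log=../logs/kdd_break_imp.log
constraints=n
full=y
```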


The importing of LOBs and LONGs is done slow by slow (row by row), and that is the major cause of the slow performance. It must be done that way, though, because the CLOBs could literally be HUGE.

if you are going to be doing this over and over, I would suggest you exp slices of the table and import them in parallel, much like I just suggested here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:42796274859286
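The slicing approach could be sketched with exp parameter files like these; the table name, key column, and range boundaries here are all hypothetical and would come from your own data:

```
# slice1.par -- export the first half of the rows (hypothetical key range)
tables=kdd_break
query="where id <= 18500"
file=../data/kdd_break_slice1.dmp

# slice2.par -- export the remaining rows
tables=kdd_break
query="where id > 18500"
file=../data/kdd_break_slice2.dmp
```

You would then run one imp session per dump file at the same time (with ignore=y against the precreated table), so the row-by-row LOB inserts proceed in parallel rather than serially.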

The other thing to look at is how the lob is defined.

For the import, it would be nice if the LOB was "cached" so import didn't have to wait for the direct I/O to disk to complete on each row.

But you might not want it cached after that; that would be up to you. Since you can alter it between cache and nocache, you might consider precreating the table, caching the LOB, doing the load, and then putting it back to nocache if that is what you want.
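That sequence might look like the following; the table and LOB column names are placeholders for your own:

```sql
-- Precreate the table (or alter the existing one) with the LOB cached:
ALTER TABLE kdd_break MODIFY LOB (clob_col) (CACHE);

-- ... run imp with ignore=y so the rows load into this precreated table ...

-- Afterwards, return the LOB to nocache if that is what you want:
ALTER TABLE kdd_break MODIFY LOB (clob_col) (NOCACHE);
```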


