Import question: COMMIT=Y: How big is the array insert?
Tom Kyte

Thanks for the question, Patrick.

Asked: May 29, 2003 - 6:59 pm UTC

Last updated: September 18, 2007 - 1:48 pm UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

I would like a better understanding of the IMPORT parameter COMMIT=Y.

The Oracle manual says that if you use the parameter COMMIT=Y the import will commit after each array insert.
What I want to know is: how big is each array insert?

I dug around on the net and found a white paper by Michael Abbey entitled 'Export/Import - The DBA's Best Friend':
http://www.dbatoolbox.com/WP2002_06/dba_imp_exp_friend.pdf
At the bottom of page 8 he explains:
COMMIT=Y will cause the import to commit after each buffer. The buffer size is controlled by the import BUFFER parameter.

I would like to confirm this.
Is the 'array insert size' that COMMIT=Y uses controlled by the IMPORT parameter BUFFER ?

I.e., if I use COMMIT=Y BUFFER=64000, will I get a commit after every 64K of data?

and Tom said...

it'll be roughly 64k -- there is overhead.

For example, a NUMBER will always consume 44 bytes in the buffer. But the numbers you are inserting will most likely NOT consume 44 bytes -- 44 bytes is the max.

Think about this -- you have a table with 10 varchar2(4000) columns. The maxed-out space will be 40,000 bytes (plus overhead). Hence, using a 64k buffer, you'll get at most 1 record per insert. Even if you insert a record with 9 NULLs and the string "hello world" -- one record (since the buffer is a maxed-out representation of the row).

so, you'll get a commit not after each 64k of data -- but after you've inserted the number of rows that, when you compute the WORST CASE rowsize, fit into that 64k buffer (eg: if your worst case rowsize is 32k -- 1 row per insert; 20k -- 3 rows; 10k -- 6 rows, and so on).
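If you want to ballpark this yourself before running an import, you can estimate the worst-case rowsize from the data dictionary. This is only a sketch -- the per-column cost imp actually charges varies by datatype and version (a NUMBER, for instance, is charged at 44 bytes even though its DATA_LENGTH is 22), so treat the result as approximate:

select table_name,
       sum(data_length) worst_case_bytes,
       trunc(65536 / sum(data_length)) approx_rows_per_insert
  from user_tab_columns
 where table_name = 'T_5K'
 group by table_name;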


You can observe this behaviour fairly easily. Consider a script like this:


create table t_1byte ( x char(1) );
create table t_1k ( x varchar2(1000) );
create table t_5k ( a varchar2(4000), b varchar2(1000) );
create table t_10k
( a varchar2(4000), b varchar2(4000), c varchar2(2000) );
create table t_20k
( a varchar2(4000), b varchar2(4000), c varchar2(4000),
d varchar2(4000), e varchar2(4000) );
create table t_40k
( a varchar2(4000), b varchar2(4000), c varchar2(4000),
d varchar2(4000), e varchar2(4000) ,
f varchar2(4000), g varchar2(4000), h varchar2(4000),
i varchar2(4000), j varchar2(4000) );

insert into t_1byte select 1 from all_objects;
insert into t_1k select 1 from all_objects;
insert into t_5k(a) select 1 from all_objects;
insert into t_10k(a) select 1 from all_objects;
insert into t_20k(a) select 1 from all_objects;
insert into t_40k(a) select 1 from all_objects;
commit;

!exp userid=/ 'tables=(t_1byte,t_1k,t_5k,t_10k,t_20k,t_40k)'

so, we have tables of various sizes -- various MAX sizes -- but they all have the same *data* in them. Then we set up a logon trigger to turn on sql_trace, drop the tables, and import:


create or replace trigger tkyte_logon
after logon on schema
begin
execute immediate 'alter session set sql_trace=true';
end;
/

drop table t_1byte;
drop table t_1k;
drop table t_5k;
drop table t_10k;
drop table t_20k;
drop table t_40k;

!imp userid=/ commit=y buffer=65367 full=y



the tkprof is very telling as to the array size used for each:

INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_1BYTE" ("X")
VALUES
(:1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2      0.09       0.15          0         83        783       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.09       0.16          0         83        783       30563


that shows we did two inserts here. A CHAR(1) takes two bytes (leading length plus the data) so that makes sense -- with the overhead, we got most of the rows on the first insert and a couple of stragglers on the second. The math is easier with 1k:

INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_1K" ("X")
VALUES
(:1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    478      0.29       0.27          0         90       1612       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      479      0.30       0.27          0         90       1612       30563

ops$tkyte@ORA920> select 30563/478 from dual;

30563/478
----------
63.9393305


about 64 rows/insert, using a 64k buffer -- that makes sense (65,367 / 64 is roughly 1,021 bytes per row: 1,000 bytes of worst-case data plus per-row overhead)


INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_5K" ("A", "B")
VALUES
(:1, :2)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2547      1.10       1.14          0         34       5628       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2548      1.11       1.15          0         34       5628       30563

there, 5k MAX rowsize

ops$tkyte@ORA920> select 30563/2547 from dual;

30563/2547
----------
11.9996074

about 12 (12*5 = 60 with 4k leftover)....


INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_10K" ("A", "B", "C")
VALUES
(:1, :2, :3)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   5094      2.93       2.94          0         34      36500       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     5095      2.94       2.95          0         34      36500       30563

ops$tkyte@ORA920> select 30563/5094 from dual;

30563/5094
----------
5.99980369

about 6 rows per insert (but see how the time is going UP UP UP -- same data, shows the efficiency of array processing)



INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_20K" ("A", "B", "C", "D", "E")
VALUES
(:1, :2, :3, :4, :5)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10188      5.17       5.25          0         34      41613       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10189      5.17       5.25          0         34      41613       30563


ops$tkyte@ORA920> select 30563/10188 from dual;

30563/10188
-----------
2.99990185


3 rows per insert, slower still...



INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T_40K" ("A", "B", "C", "D", "E", "F",
"G", "H", "I", "J")
VALUES
(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  30563     12.69      14.31          0         34      62097       30563
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    30564     12.69      14.31          0         34      62097       30563

1 row per insert -- even though it was one byte big!


And believe me, that 40k insert took a heck of a lot longer than 14 seconds -- it was tens of minutes from IMP, due to the back and forth and the waiting for commits to complete (lots of log file syncs going on there).


You want to set buffer really really really big on import to achieve "faster=true".

Use commit=y at your own risk (tons of log file syncs will be in your future)
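In practice, that points at something like the following (a sketch -- the dump file name is illustrative, and the 10MB buffer is just a starting point, to be sized against your free memory):

imp userid=/ file=expdat.dmp full=y commit=n buffer=10485760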


Comments

commit=no-no

Martin Guillen, May 30, 2003 - 11:04 am UTC

Hi Tom:
Does COMMIT=N mean committing at the end of imp?
Thank You,
Martin.

Tom Kyte
May 30, 2003 - 11:59 am UTC

no, the commit happens after each table is loaded, table by table (and they throw create indexes and such in there, which autocommit, as well)

Import question : COMMIT=Y : How big is the array insert

patrick howe, May 30, 2003 - 12:11 pm UTC

Very detailed explanation of my question!
Thanks for laying out a good explanation of the problem.

tkprof

Reader, May 30, 2003 - 12:16 pm UTC

I tried your example, but when I ran tkprof I got the following. What am I doing wrong here? Kindly help. Thanks.

user19 ksh > tkprof u19_ora_3214.trc a.tkp explain=system/manager sys=no

TKPROF: Release 9.2.0.1.0 - Production on Fri May 30 08:55:46 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


user19 ksh > more a.tkp

TKPROF: Release 9.2.0.1.0 - Production on Fri May 30 08:55:46 2003

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: u19_ora_3214.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: u19_ora_3214.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
0 user SQL statements in trace file.
0 internal SQL statements in trace file.
0 SQL statements in trace file.
0 unique SQL statements in trace file.
22 lines in trace file.


Tom Kyte
May 30, 2003 - 1:49 pm UTC

that trace file had nothing in it -- well, it had 22 lines, and if I had to guess, the 22 lines included the message:


** SESSION ID:(9.3307) 2003-05-30 13:50:13.802
Skipped error 604 during the execution of OPS$TKYTE.TKYTE_LOGON
*** 2003-05-30 13:50:13.803
ksedmp: internal or fatal error
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at line 2



grant yourself ALTER SESSION directly. 
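For example, as a suitably privileged user (the schema name here is illustrative):

grant alter session to ops$tkyte;

The grant has to be direct -- privileges received via a role are not visible to the definer's-rights code in the trigger, which is why the logon trigger above failed with ORA-01031.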

7.3.4 big table reorg - some questions

A reader, July 08, 2004 - 2:40 am UTC

Tom,
What's the best way to accomplish a BIG table reorg (to reclaim space) on 7.3.4 with MINIMAL downtime? Assume the BIG table has LONG cols.

1) Is an exp->truncate->imp the best way to go? (Of course, by dropping indexes and recreating them after the table import.)

2) While the exp direct=y and truncate (with a reasonable # of extents) will fly, what does one do about the import?! An "imp commit=N.." would be ideal, but that'll require configuring a massive amount of RBS -- hence the dilemma. (By the way, can I assume that I need approximately the same amount of rollback as the size of the imported table -- assuming no other activity on the database at the time?) Moreover, I'm kinda paranoid that should the "imp commit=N" fail for some reason, the transaction is going to be sitting there rolling back for a loooong time.

3) Should I be unable to allocate sufficient rollback, I guess I'll have to a) "imp commit=Y" AND b) allocate an outrageously large BUFFER number (AVG_ROW_LEN * 1000 rows) for my imp. Correct? While the advantage is that the RBS won't blow, the downside is that it'll take much longer.

Thanks for your time.

Tom Kyte
July 08, 2004 - 8:30 am UTC

http://asktom.oracle.com/Misc/MoveLongs.html

you might be able/want to use the sqlplus copy command. You could

a) prevent updates to the base table (revoke, put a trigger that always fails, whatever).

b) start N sqlplus copy sessions -- each using different ranges of rows to copy (where primary key between A and B...) into a new_table

c) index new_table using parallel/unrecoverable. grant on it, trigger it, constrain it, whatever you need to make it "the same"

d) rename old_table to hold, rename new_table to old_table

e) allow updates again.


this'll let you array fetch/insert (imp will be doing it a row at a time because of the LONG columns), in parallel, using multiple rollback segments (each session could have its own rbs)...
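One of those N sessions might look something like this (a sketch -- the connect string, table names and key range are illustrative; in the COPY command, a trailing '-' continues the line):

set arraysize 50
set copycommit 1000
set long 2000000
copy from scott/tiger@proddb -
insert new_table -
using select * from old_table where pk between 1 and 1000000;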



Thanks

A reader, July 08, 2004 - 12:42 pm UTC

Tom,
Thanks. I did explore the copy option as well. I suppose I'll have to reset the FREELISTS on the table to take advantage of the 'N' concurrent transactions. That'll mean dropping the table and recreating it :(.

A 'drop BIG table' (with many extents) doesn't perform consistently on my database. Sometimes, it comes back in a reasonable period of time..while others, it hangs and kills my CPU freeing up the extents.

I was hoping to avoid this hassle on production and play it safe by issuing a "truncate table junk reuse storage" and "alter table blah deallocate unused" in smaller chunks. Then issuing an "import with ignore=Y". Since the import with commit, performs poorly - I guess it goes out the door (?).

Incidentally, a PL/SQL loop performs just about as well as a sqlplus copy. I would've thought the additional logic, which keeps track of the counter before committing, ought to make it perform a tad worse. But apparently not. A PL/SQL loop that commits every 5000 rows versus a sqlplus copy with "arraysize 50, copycommit 1000, long 2000" performs just about the same (~40 minutes for 1.35 million rows). The former commits every 5000 rows, the latter every 50000.
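For reference, a bulk version of such a loop -- not possible on 7.3.4, since BULK COLLECT/FORALL only arrived in 8i; the table and column names here are hypothetical -- might look like:

declare
    cursor c is select pk, data from old_table;
    type num_t is table of old_table.pk%type index by binary_integer;
    type dat_t is table of old_table.data%type index by binary_integer;
    l_pk   num_t;
    l_data dat_t;
begin
    open c;
    loop
        fetch c bulk collect into l_pk, l_data limit 5000;
        exit when l_pk.count = 0;
        forall i in 1 .. l_pk.count
            insert into new_table (pk, data)
            values (l_pk(i), l_data(i));
        commit;   -- commit every 5000 rows, as described above
    end loop;
    close c;
end;
/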

I admit I haven't traced the processes to actually "see" and "compare". Hope to do it later in the day.

Tom Kyte
July 08, 2004 - 1:15 pm UTC

well, the new table is empty sooooo.... drop and recreate isn't a big deal.

you can do the truncate/alter if you like, just rename the table to something else, and rename the new table.


long 2000 -- is that big enough? why did you even use a long then?

Thanks - I misunderstood you.

A reader, July 08, 2004 - 1:16 pm UTC

Sorry. Nevermind my rambles about the freelists. I misunderstood your initial response. My fault. I re-read it and exactly understand what you mean.

A question though -
"c) index new_table using parallel/unrecoverable. grant on it, trigger it,
constrain it, whatever you need to make it "the same"

d) rename old_table to hold, rename new_table to old_table"

Why do I need to perform "c"?? Grants, constraints -- yes. But triggers, synonyms, procedures, views, functions should not need to be touched at all, correct? They'll all recompile by themselves when accessed. Basically, it should be practically seamless.

Tom Kyte
July 08, 2004 - 1:33 pm UTC

triggers travel with the table!

views, synonyms, functions -- all of those things I didn't mention, yes -- they'll be OK.

triggers - you need to recreate any.

ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2> create trigger t_trigger before insert on t
  2  begin
  3   dbms_output.put_line( 'hello, inserting into t' );
  4  end;
  5  /
 
Trigger created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1 );
hello, inserting into t
 
1 row created.
 
ops$tkyte@ORA9IR2> rename t to abc;
 
Table renamed.
 
ops$tkyte@ORA9IR2> insert into abc values ( 2 );
hello, inserting into t
 
1 row created.
 

Thank you for showing me the light!

A reader, July 08, 2004 - 1:58 pm UTC

Many thanks for illustrating the "trigger following the table" during a table-rename deal. I would never have guessed as much! Time to pore over the docs to see what else might be inadvertently happening (without my knowledge, that is) during a rename.

Great!! But i need help!

Sadaf, September 21, 2004 - 5:48 pm UTC

Hi! I have gone through your responses and I find them very useful. I am a computer studies student. I'm sorry for the long review, but I really don't know how to mail you my question, so here it is:
L, R and Data are one-dimensional arrays, each with 7 elements numbered from 1 to 7. The contents of the three arrays are shown below:
        L   R   Data
1.      3   2   email
2.      0   4   is
3.      0   7   a
4.      6   5   quick
5.      0   0   way
6.      0   0   of
7.      0   0   communicating

In these arrays, L(4) contains the number 6, R(3) contains the number 7, and Data(1) contains the word 'email'.
Read the following algorithm:
T = 1
S(T) = 1
WHILE T > 0 DO
    P = S(T)
    T = T - 1
    WHILE P <> 0 DO
        OUTPUT Data(P)
        IF R(P) <> 0 THEN
            T = T + 1
            S(T) = R(P)
        ENDIF
        P = L(P)
    END WHILE
END WHILE
EXIT

Complete the trace table below. It is only necessary to enter numbers in the table when they change value. You should work from left to right and top to bottom. The first two lines are completed for you.

T    P    S(1)    S(2)    OUTPUT
1         1
     1

That is the question. I would really be thankful if you could guide me on this as soon as possible, because it's very urgent. Thank you!

Tom Kyte
September 21, 2004 - 7:26 pm UTC

er? homework -- i haven't had to do homework in about 18 years.

got a question about the database -- i'll look at that.

Buffer size - how big?

Nitin, February 25, 2005 - 12:43 pm UTC

If we are importing a 10GB table with an average row length of 300 bytes, can we set BUFFER=2000000000 (2GB), given that we have enough free memory on the server? We will be performing the import on the server where the database resides.

Is there any value for BUFFER beyond which it will not provide huge benefits?

Thanks!

Tom Kyte
February 25, 2005 - 6:33 pm UTC

that would be a little overkill.

for the same reason you don't set array size to 100000000000


do a little work (imp reads and binds a little)
send it to the database
let the database do a little work
do a little more work (goto 1)

otherwise you flood things -- if you feed the database work a bit at a time, lgwr can flush the redo log buffer nicely in the background, dbwr can checkpoint blocks so you don't have free buffer waits.

If you save it all up, lgwr will make you wait while it flushes, dbwr will make you wait while it frees buffer blocks...

do a little, let the db do a little, do a little, let the db do a little.


2gig too big.

couple of meg, reasonable.

Not using buffer or commit

Jagjeet Singh, November 25, 2005 - 5:42 am UTC

Hi,

I am not using commit=y or buffer, but there are still multiple executions for a table import.


*******************************************************************************
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("TABLE_NAME", "COMMENTS")
VALUES
(:1, :2)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          1          0           0
Execute  11228      8.85       9.81         71       2049     105158       78592
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    11229      8.85       9.82         71       2050     105158       78592

********************************************************************************

Then I tried it with buffer=100m, but still got the same number of executions.

Tom Kyte
November 25, 2005 - 10:43 am UTC

You do not give a complete example, but here is mine again:

ops$tkyte@ORA10GR1> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA10GR1> !exp userid=/ tables=t

Export: Release 10.1.0.4.0 - Production on Fri Nov 25 11:25:56 2005

Copyright (c) 1982, 2004, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                              T      47991 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA10GR1> drop table t;

Table dropped.


ops$tkyte@ORA10GR1> set echo on
ops$tkyte@ORA10GR1> @tracetrigger
ops$tkyte@ORA10GR1> begin
  2          execute immediate 'grant alter session to ' || user;
  3  end;
  4  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> create or replace trigger trace_trigger
  2  after logon on schema
  3  begin
  4  execute immediate 'alter session set events ''10046 trace name context forever, level 12'' ';
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA10GR1> !imp userid=/ full=y buffer=1024

Import: Release 10.1.0.4.0 - Production on Fri Nov 25 11:26:27 2005

Copyright (c) 1982, 2004, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.01.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"      47991 rows imported
Import terminated successfully without warnings.

tkprof at this point shows this:


INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("OWNER", "OBJECT_NAME",
  "SUBOBJECT_NAME", "OBJECT_ID", "DATA_OBJECT_ID", "OBJECT_TYPE", "CREATED",
  "LAST_DDL_TIME", "TIMESTAMP", "STATUS", "TEMPORARY", "GENERATED",
  "SECONDARY")
VALUES
 (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1372      1.33       1.30          0       1290       7875       47991
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1373      1.33       1.30          0       1290       7875       47991


ops$tkyte@ORA10GR1> drop table t;

Table dropped.

ops$tkyte@ORA10GR1> !imp userid=/ full=y buffer=1048576

Import: Release 10.1.0.4.0 - Production on Fri Nov 25 11:27:07 2005

Copyright (c) 1982, 2004, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.01.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"      47991 rows imported
Import terminated successfully without warnings.

and tkprof at this time shows this, a definite change:

INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("OWNER", "OBJECT_NAME",
  "SUBOBJECT_NAME", "OBJECT_ID", "DATA_OBJECT_ID", "OBJECT_TYPE", "CREATED",
  "LAST_DDL_TIME", "TIMESTAMP", "STATUS", "TEMPORARY", "GENERATED",
  "SECONDARY")
VALUES
 (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     11      0.54       0.71          0       1511       6630       47991
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.54       0.71          0       1511       6630       47991


ops$tkyte@ORA10GR1> drop trigger trace_trigger;

Trigger dropped.
 

Re: Commit=Y

A reader, November 29, 2005 - 5:17 pm UTC

Does this mean we should always set BUFFER explicitly whenever we use imp for real, because it will improve performance?

Obviously, too BIG a buffer may cause slow performance.

If, because of my buffer size, my insert statement [for a single table] executes multiple times -- let's say 100 times -- and I use "commit=y" with that import, will import then issue a commit after each array insert, i.e. 100 times?

I have had experience with commit=y [slowness and partially imported data]. And I have also seen people use commit=y without the buffer parameter -- and since the default value is much smaller, that would be even worse, correct?

Regards,




Tom Kyte
November 30, 2005 - 11:08 am UTC

because it will MOST LIKELY improve performance, yes.


I would like to use commit=n with a buffer that is reasonable for the amount of free memory I have.

commit

neeraj, February 15, 2006 - 6:41 pm UTC

I am importing indexes, so should I use commit=y or n?

Tom Kyte
February 15, 2006 - 9:59 pm UTC

up to you -- it does not affect index creation at all.

It only affects whether each array insert into the table, prior to index creation, is committed -- or whether the entire table is committed at the end.

commit=n will likely result in faster import speeds.

Exp imp

neeraj, February 15, 2006 - 7:12 pm UTC

Sorry, a correction: I am not importing indexes.

copy command

mohammed, April 25, 2006 - 12:31 pm UTC

I'm trying to use the COPY command in Oracle like this:
copy from user1/user1@alais create emp1 using select from emp;
but it failed and I got this error:
ORA-01031 insufficient privilege
Would you please help me?

Tom Kyte
April 25, 2006 - 2:27 pm UTC

perhaps you are not allowed to create a table?

IOT Import using BUFFER

GJ, September 07, 2007 - 2:34 am UTC

I'm testing the import of 126 million records into an Index Organized table on 9.2.0.8 (AIX 5.3).

The average row length is 130 bytes. Then I perform an import with the following parameters

commit=Y buffer=67108864

And then traced it. The results:

VALUES
(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute   3754   1874.71    2969.20      68036    4174534   61264968   123005366
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------

That equates to 32,766 records per array, which works out to around 4MB per write: (32766 * 130 bytes) / 1024 / 1024.

What I'm trying to determine is what is restricting the insert array.

The number 32,766 is suspiciously near 32,768, but I can't determine whether there's a correlation between the number of records and "32k".
Tom Kyte
September 11, 2007 - 8:53 am UTC

max possible row size must be used, not the average observed row size.

lose the commit=y. just do the insert - it is not like you can just restart the import.

and import/export will be about the slowest way to do this - rather use an external table or sqlldr.
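For instance, a minimal external-table load might look like this (a sketch -- the directory path, file name and column layout are illustrative, and the target IOT is assumed to already exist):

create or replace directory load_dir as '/u01/load';

create table t_stage
( id   number,
  pad  varchar2(130)
)
organization external
( type oracle_loader
  default directory load_dir
  access parameters
  ( records delimited by newline
    fields terminated by ','
  )
  location ('t.dat')
);

insert into t_iot select * from t_stage;
commit;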

clarification

amit poddar, September 11, 2007 - 10:57 am UTC

So that means imp has been using bulk binding (multiple rows in one execution of an insert) since before bulk binding was available to us?
Tom Kyte
September 15, 2007 - 3:30 pm UTC

bulk binding has been available for you for a long long long time, not sure what you mean.


A reader, September 16, 2007 - 8:53 pm UTC

Bulk binding was not there in 8.0 -- but import has been doing bulk binding since then?
Tom Kyte
September 18, 2007 - 1:48 pm UTC

bulk binding was in oci, it was in pro*c, it was in most of the 3gl programmatic interfaces *forever*.

and actually, dbms_sql had the procedure(s) "bind_array" as early as 8.0 as well...

bulk binding (array interface) to Oracle has been available forever basically...
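For the curious, a minimal dbms_sql array-bind sketch (assuming a table t with a single NUMBER column x):

declare
    c    integer;
    vals dbms_sql.number_table;
    n    integer;
begin
    for i in 1 .. 100 loop
        vals(i) := i;
    end loop;
    c := dbms_sql.open_cursor;
    dbms_sql.parse( c, 'insert into t (x) values (:x)', dbms_sql.native );
    dbms_sql.bind_array( c, ':x', vals );
    n := dbms_sql.execute( c );   -- one execute call, 100 rows inserted
    dbms_sql.close_cursor( c );
end;
/

One parse, one execute, 100 rows -- the same shape you see in the imp tkprof output above.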
