Bulk insert of records, using something like a user defined record ???

Question and Answer

Tom Kyte

Thanks for the question, Sam.

Asked: October 09, 2001 - 4:32 pm UTC

Last updated: July 02, 2012 - 7:02 am UTC

Version: 8.1.7

Viewed 50K+ times

You Asked


We need to select 50 columns and insert them into another table.

Which is the quickest way?

The one which comes to my mind immediately is as follows:

declare
    type testarray is table of varchar2(3000) index by binary_integer;

    v_ename testarray;
    v_empno testarray;
    v_sal   testarray;

    cursor cu_cursor is select empno, ename, sal from emp;
begin
    open cu_cursor;

    loop
        fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
        LIMIT 1000;

        forall i in 1 .. v_empno.count
            insert into testemp_table( empno, ename, sal )
            values
            ( v_empno(i), v_ename(i), v_sal(i) );

        exit when cu_cursor%notfound;
    end loop;
    close cu_cursor;
end;


But this way I have to declare 50 variables, which is going to be very tedious, and I also know that I cannot declare a user-defined record and select the data into it and insert.

But I want a similar kind of functionality.

I want to select my records into something like a record (am I talking about a SQL type, and creating objects out of that SQL type? I am not very sure)

and then insert them into the table. But at the same time I want the bulk insert functionality, with the LIMIT clause.

How can I achieve this?

(I cannot use a direct insert, due to some computations, and also due to the fact that the table will be locked, and other processes wanting to insert or delete records will have to wait? (clarification needed here too))

Thank you


and Tom said...

The fastest way is:

insert into t select * from t2;

and perform the computations in SQL. There is not more (or less) locking with the direct insert.
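
For instance, if the per-row computation can be expressed in SQL, it folds straight into the statement. A minimal sketch against the EMP and TESTEMP_TABLE tables from the question (the CASE computation is only an assumed example; on 8i a searched CASE works in plain SQL, or DECODE would do the same):

insert into testemp_table( empno, ename, sal )
select empno,
       case when sal > 1500 then 'HIGHSAL' else ename end,  -- computation done in SQL
       sal
  from emp;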



What about this then (you still have to create the object type with all 50 columns, NO ESCAPING that)

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace type myScalarType as object
2 ( ename varchar2(30),
3 hiredate date,
4 sal number
5 )
6 /

Type created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace type myArrayType as table of myScalarType
2 /

Type created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select myScalarType( ename, hiredate, sal ) from emp;

MYSCALARTYPE(ENAME,HIREDATE,SAL)(ENAME, HIREDATE, SAL)
-----------------------------------------------------------------------------------------------------------------------------------
MYSCALARTYPE('A', '17-DEC-80', 800)
MYSCALARTYPE('ALLEN', '20-FEB-81', 1600)
MYSCALARTYPE('WARD', '22-FEB-81', 1250)
MYSCALARTYPE('JONES', '02-APR-81', 2975)
MYSCALARTYPE('MARTIN', '28-SEP-81', 1250)
MYSCALARTYPE('BLAKE', '01-MAY-81', 2850)
MYSCALARTYPE('CLARK', '09-JUN-81', 2450)
MYSCALARTYPE('SCOTT', '09-DEC-82', 3000)
MYSCALARTYPE('KING', '17-NOV-81', 5000)
MYSCALARTYPE('TURNER', '08-SEP-81', 1500)
MYSCALARTYPE('ADAMS', '12-JAN-83', 1100)
MYSCALARTYPE('JAMES', '03-DEC-81', 950)
MYSCALARTYPE('FORD', '03-DEC-81', 3000)
MYSCALARTYPE('MILLER', '23-JAN-82', 1300)

14 rows selected.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
2 cursor c is select myScalarType( ename, hiredate, sal ) from emp;
3 l_data myArrayType;
4 begin
5
6 open c;
7 loop
8 l_data := myArrayType();
9 fetch c bulk collect into l_data limit 6;
10 dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
11
12 for i in 1 .. l_data.count
13 loop
14 l_data(i).ename := 'X' || l_data(i).ename;
15 end loop;
16
17 insert into emp2 ( ename, hiredate, sal )
18 select * from TABLE( cast (l_data as myArrayType) );
19
20 exit when c%notfound;
21 end loop;
22 close c;
23 end;
24 /
Fetched 6 rows
Fetched 6 rows
Fetched 2 rows

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from emp2;

ENAME HIREDATE SAL
---------- --------- ----------
XA 17-DEC-80 800
XALLEN 20-FEB-81 1600
XWARD 22-FEB-81 1250
XJONES 02-APR-81 2975
XMARTIN 28-SEP-81 1250
XBLAKE 01-MAY-81 2850
XCLARK 09-JUN-81 2450
XSCOTT 09-DEC-82 3000
XKING 17-NOV-81 5000
XTURNER 08-SEP-81 1500
XADAMS 12-JAN-83 1100
XJAMES 03-DEC-81 950
XFORD 03-DEC-81 3000
XMILLER 23-JAN-82 1300

14 rows selected.



Comments

I thought you couldn't BULK COLLECT into tables of records/objects until 9i

Adrian, October 10, 2001 - 9:55 am UTC

Tom

I thought that you could only bulk collect into a "plsql table per column" in 8i? And that bulk collecting into a PL/SQL table of a record or multi-attribute object was only introduced in 9i?

I tried a similar example to the one you gave above way back (create object in SQL, use it in PLSQL) and it didn't work... I'm using 8.1.6...

Please respond to this as it has been one of my GREATEST disappointments with what I considered to be PL/SQL's saviour - BULK COLLECT. I want to BULK COLLECT 50+ columns...

Thanks

Tom Kyte
October 10, 2001 - 10:12 am UTC

It is not a plsql table of records I used.

I used an OBJECT TYPE -- a collection -- an extensible datatype. I didn't use any plsql specific datatypes.

The example I provided works in Oracle8i release 2 (816) and up.


Tom

A reader, October 10, 2001 - 10:31 am UTC

I am very sure that the solution you provided in your answer, using a SQL type, will be much slower than the solution provided by the questioner in the question.

Your comments please.

Tom Kyte
October 10, 2001 - 11:00 am UTC

they did not provide a solution -- hence my solution is faster.

It is true that using collections of objects will be slower than using 50 simple scalar arrays.

It is also true that both are slower than

insert into ... select ....


Scalar arrays are faster than collections of objects. But that did not answer the question.




A reader, October 10, 2001 - 10:31 am UTC

hi tom
do we need to initialize the collection when we are using
BULK COLLECT?

declare
2 cursor c is select myScalarType( ename, hiredate, sal ) from emp;
3 l_data myArrayType;
4 begin
5
6 open c;
7 loop
8 l_data := myArrayType(); <<<----- is initialization required?
9 fetch c bulk collect into l_data limit 6;
10 dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
11


Tom Kyte
October 10, 2001 - 11:07 am UTC

No, it's not necessary. The bulk collect resets the .count as well.
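
A quick sketch of that, reusing the myArrayType/myScalarType created above -- the collection is never initialized or extended by hand:

declare
    l_data myArrayType;   -- no myArrayType() initialization
begin
    select myScalarType( ename, hiredate, sal )
      bulk collect into l_data
      from emp;
    dbms_output.put_line( l_data.count || ' rows fetched' );  -- count set by the fetch
end;
/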

limit of LIMIT

Sam, October 10, 2001 - 10:34 am UTC

What is the limit, or the upper end, of LIMIT?
At what threshold of LIMIT will we get optimum performance?

i.e. what would you recommend the LIMIT to be: 1000, 2000, 3000, 5000 or 10,000?

Tom Kyte
October 10, 2001 - 11:10 am UTC

I'd suggest hundreds, not thousands. There are RAM concerns; you want to keep the interaction between you and the database going back and forth -- not "ok database, do tons of work, then I'll do tons of work and then I'll give you tons of work again". You want things to be flowing between you and the database -- not "bursting" between you and the db.

Consider this -- if you array insert 10k rows - you might have to wait for LGWR to make room in the redo buffer cache (sync writes to the file system). If you send him 100 or 500 rows -- you won't wait and LGWR will flush the buffer cache in the background whilst you are off doing the next 100/500 rows.

You are right , but

Sam, October 10, 2001 - 11:26 am UTC

"Consider this -- if you array insert 10k rows - you might have to wait for LGWR to make room in the redo buffer cache (sync writes to the file system). If you send him 100 or 500 rows -- you won't wait and LGWR will flush the buffer cache in the background whilst you are off doing the next 100/500 rows."

Firstly, YES, I will limit it to a figure between 100 and 500, but what can the max number be, out of academic interest?

Secondly, what if the redo buffer has a lot of memory? Then even if I set the limit at 5000, it should not slow down the process, am I right?

Thirdly, does setting the LIMIT to a smaller number also mean faster execution than when you set the LIMIT to a higher number?

Kindly expand on the below --
"if you array insert 10k rows - you might have to wait for LGWR to make room in the redo buffer cache (sync writes to the file system)."




Tom Kyte
October 10, 2001 - 12:24 pm UTC

Out of academic interest -- the limit is dictated by the amount of RAM you have (and 2 billion).

Secondly -- Yes, if you use 5,000 a small redo log buffer could hurt you.

Thirdly -- It might, it might not, you need to benchmark to find your "sweet spot". I find 100-500 the max.


I don't know how to expand on that last part. If you send over 10k rows -- and generate 5MB of log -- you might have to WAIT for log writer to make room. If you send over 100 rows and generate 0.05MB of log -- you probably WON'T have to wait for space. And then log writer will write that data out in the background. By having a nice steady flow -- lots of back and forth -- you'll get into a nice steady state with no log jams.




A reader, October 10, 2001 - 6:01 pm UTC

What do you mean by

"the limit is dictated by the amount of RAM you have
(and 2 billion)"

I understand that 2 billion is the max limit, but what do you mean by "the limit is dictated by the amount of RAM you have"? In what way is that related to the LIMIT we set for the bulk insert?

Please bear with me and explain.


Tom Kyte
October 10, 2001 - 6:40 pm UTC

If you use


limit 1000000

and don't have enough RAM to hold 1,000,000 array elements in memory, you will get an ORA-4031 or some related memory error. The array can only be as big as you can allocate memory for.

A reader, October 10, 2001 - 7:59 pm UTC

How can we know our memory settings?
When you say memory here, are you talking about the redo log buffer cache?

How can I know what my total memory is, and how it is distributed?

Typically, how much RAM is required to hold 1,000,000 records in an array?

Tom Kyte
October 11, 2001 - 7:01 am UTC

The log buffer is a fixed sized piece of memory in the SGA

SQL> show parameter log_buffer

will show you its size.

You can see the amount of log/ram you are using by querying v$mystat and v$statname.  I use a small script like:

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat redo
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&1')||'%'
  5  and b.value > 0
  6  /
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo')||'%'

NAME                                VALUE
------------------------------ ----------
redo synch writes                       1
redo synch time                         2
redo entries                           16
redo size                            4072

ops$tkyte@ORA817DEV.US.ORACLE.COM> update emp set ename = ename;

14 rows updated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit
  2  /      

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat redo
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&1')||'%'
  5  and b.value > 0
  6  /
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('redo')||'%'

NAME                                VALUE
------------------------------ ----------
redo synch writes                       2
redo synch time                         3
redo entries                           31
redo size                            7928

diff the redo sizes and you get a feeling for how much you are generating.  For memory, you look at pga and uga usage:

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat memory
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&1')||'%'
  5  and b.value > 0
  6  /
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('memory')||'%'

NAME                                VALUE
------------------------------ ----------
session uga memory                  55836
session uga memory max              68344
session pga memory                 228668
session pga memory max             228668
sorts (memory)                         48

ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
  2  x long default rpad('*',32000,'*');
  3  begin null; end;
  4  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> @mystat memory
ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&1')||'%'
  5  and b.value > 0
  6  /
old   4: and lower(a.name) like '%' || lower('&1')||'%'
new   4: and lower(a.name) like '%' || lower('memory')||'%'

NAME                                VALUE
------------------------------ ----------
session uga memory                  55836
session uga memory max              68344
session pga memory                 239256
session pga memory max             239256
sorts (memory)                         50
 

Kindly give your thoughts on the solution for this

Jane, October 17, 2001 - 12:52 pm UTC

I am encountering the following error:
"
[1]: (Error): ORA-06550: line 18, column 12: PLS-00103: Encountered the symbol "IF" when expecting one of the following: . ( * @ % & - + / at mod rem select update <an exponent (**)> delete insert || ORA-06550: line 19, column 22: PLS-00103: Encountered the symbol "=" when expecting one of the following: := . ( % ; The symbol ":= was inserted before "=" to continue. ORA-06550: line 27, column 5: PLS-00103: Encountered the symbol "CLOSE" when expecting one of the following: begin function package pragma procedure form ORA-06550: line 30, column 0: PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: begin function package pragma procedure subtype type use <an identifier> <a double-quoted delimited-identifier> cursor form current
"

when I am trying to set the ename after the FORALL.

1. Why?
2. How should I achieve what I am trying to do? I want to set the ename to 'HIGHSAL' before inserting into testemp_table, if the sal is more than 1500.

Note: I want to achieve this within the given structure.


declare
type testarray is table of varchar2(3000) index by binary_integer;

v_ename testarray;
v_empno testarray;
v_sal testarray;

cursor cu_cursor is select empno, ename, sal from emp;
begin
open cu_cursor;

loop
fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
LIMIT 1000;

forall i in 1 .. v_empno.count

if v_sal(i) >1500 then
v_ename(i)='HIGHSAL' ;
END IF;
insert into testemp_table( empno, ename, sal )
values
( v_empno(i), v_ename(i), v_sal(i) );

exit when cu_cursor%notfound;
end loop;
close cu_cursor;
end;



Tom Kyte
October 17, 2001 - 1:19 pm UTC

forall can only be followed by ONE dml statement. Its goal is to cut down on the back and forth between PLSQL and SQL -- if you embedded PLSQL in there -- it could not possibly do that!

you would code something like:

fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
LIMIT 1000;

for i in 1 .. v_empno.count
loop
if v_sal(i) > 1500 then
v_ename(i) := 'highsal';
end if;
end loop;

forall i in 1 .. v_empno.count
insert into testemp_table( empno, ename, sal )
values
( v_empno(i), v_ename(i), v_sal(i) );





Jane, October 17, 2001 - 1:05 pm UTC

Can you give the link to the Oracle objects chapter, and also the PL/SQL manual?

Tom Kyte
October 17, 2001 - 1:21 pm UTC

comparison

Nag, October 17, 2001 - 1:41 pm UTC


Tom, I want to know
how much time the following steps are taking. How can I do it in a convenient way?

1.
for i in 1 .. v_empno.count
loop
if v_sal(i) > 1500 then
v_ename(i) := 'highsal';
end if;
end loop
;

2.
forall i in 1 .. v_empno.count
insert into testemp_table( empno, ename, sal )
values
( v_empno(i), v_ename(i), v_sal(i) );


I guess this step:
for i in 1 .. v_empno.count
loop
if v_sal(i) > 1500 then
v_ename(i) := 'highsal';
end if;
end loop
;

should be very, very fast, irrespective of the number of values in the given arrays, as no context switches are required and everything is done in memory.


Tom Kyte
October 17, 2001 - 1:45 pm UTC

time it with dbms_utility.get_time

declare
    l_start number;
begin
    l_start := dbms_utility.get_time;

    ... code ....

    dbms_output.put_line( 'That took ' ||
        (dbms_utility.get_time-l_start) || ' hsecs' );
end;

Updating the whole of an array to one value

Nag, October 17, 2001 - 4:28 pm UTC

Tom

I have an interesting question

I have a number array which has 1000 values in it.

Now I want to update the array and set all the values to 25.

One way of doing it is to loop through the array and set each value.

But I don't want to loop; I want to update the whole array in a single go. Is there a way to do it?

How do I achieve this functionality?

Tom Kyte
October 17, 2001 - 6:51 pm UTC

No way to really do it in one step. You have to loop.
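
A minimal sketch of that loop (the array name and values are made up):

declare
    type numArray is table of number index by binary_integer;
    l_vals numArray;
begin
    for i in 1 .. 1000 loop          -- fill the array
        l_vals(i) := i;
    end loop;

    for i in 1 .. l_vals.count loop  -- set every element to 25
        l_vals(i) := 25;
    end loop;

    dbms_output.put_line( l_vals(500) );  -- prints 25
end;
/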



updating a pl/sql table

Nag, October 17, 2001 - 4:48 pm UTC


</code> http://info-it.umsystem.edu/oradocs/doc/server/doc/PLS23/ch4.htm#toc038 <code>

"
Using DELETE
This attribute has three forms. DELETE removes all elements from a PL/SQL table. DELETE(n) removes the nth element. If n is null, DELETE(n) does nothing. DELETE(m, n) removes all elements in the range m .. n. If m is larger than n or if m or n is null, DELETE(m, n) does nothing.
DELETE lets you free the resources held by a PL/SQL table. DELETE(n) and DELETE(m, n) let you prune a PL/SQL table. Consider the following examples:


ename_tab.DELETE(3); -- delete element 3
ename_tab.DELETE(5, 5); -- delete element 5
ename_tab.DELETE(20, 30); -- delete elements 20 through 30
ename_tab.DELETE(-15, 0); -- delete elements -15 through 0
ename_tab.DELETE; -- delete entire PL/SQL table
If an element to be deleted does not exist, DELETE simply skips it; no exception is raised.

Note: The amount of memory allocated to a PL/SQL table can increase or decrease dynamically. As you delete elements, memory is freed page by page. If you delete the entire PL/SQL table, all the memory is freed.
"

Tom

I am curious to know: when we are allowed to delete all the values in a PL/SQL table, why are we not allowed to update a PL/SQL table or insert new values into a PL/SQL table?

What are the reasons for Oracle choosing to do it this way?


Tom Kyte
October 17, 2001 - 6:56 pm UTC

Umm, you can update:

ename_tab(5).ename := 'hello';

and you can "insert"

ename_tab(ename_tab.count+1).ename := 'this be a new record';


it's an "array", not a real "table".


You are right , but

A reader, October 18, 2001 - 10:40 am UTC

"ename_tab.DELETE; -- delete entire PL/SQL table"


Tom

From your answer, yes, we can update a particular value in
an array, or insert a new one.

But my question, specifically, was how do I set all the values in an array to a single value; i.e. for the ename_tab array, I want to set all the values in it to 'hello', not the 5th one only.

When I tried to do it, I got the following error

ename_tab.ename:= 'hello';

[1]: (Error): ORA-06550: line 59, column 9: PLS-00302: component 'ENAME' must be declared ORA-06550: line 59, column 1: PL/SQL: Statement ignored



Tom Kyte
October 18, 2001 - 12:53 pm UTC

you have to update each component. As I said, it's really an array, not a database table.

Another situation - important and critical

A reader, October 18, 2001 - 4:45 pm UTC

Create or replace procedure pr_array(lv_comm number)

is

type testarray is table of varchar2(3000) index by binary_integer;

v_ename testarray;
v_empno testarray;
v_sal testarray;

cursor cu_cursor is select empno, ename, sal from emp;
begin
open cu_cursor;

loop
fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
LIMIT 5;


forall i in 1 .. v_empno.count
insert into testemp_table( empno, ename, sal,comm )values
( v_empno(i), v_ename(i), v_sal(i),lv_comm );

exit when cu_cursor%notfound;
end loop;
close cu_cursor;

commit;
end;

I execute the procedure as pr_array(20);

1. Now the procedure inserts the comm as 20 for all rows, but the question is: is it doing a bulk insert, or is it doing it row by row?

2. How can I confirm whether the bulk insert is in fact occurring or not?

3. If it is not, how can I make it happen in a scenario like the above?

Tom Kyte
October 18, 2001 - 7:10 pm UTC

</code> http://asktom.oracle.com/~tkyte/tkprof.html <code>

Use sql_trace and TKPROF to see what it is doing. Look at the executes and the rows -- it'll tell you (if you see 1 execute / 5 rows, it's doing it in bulk).
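
Something along these lines -- a sketch only; the trace file name and the user_dump_dest location vary by platform and version:

alter session set sql_trace = true;
exec pr_array(20)
alter session set sql_trace = false;

-- then format the trace file found in user_dump_dest, e.g.
-- $ tkprof ora_12345.trc pr_array.prf

In the resulting report, one EXECUTE with rows = 5 for the INSERT means the array insert happened in bulk; five executes of one row each means it went row by row.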

Nag

Nag, October 22, 2001 - 4:41 pm UTC

</code> http://download-east.oracle.com/docs/cd/F49540_01/DOC/server.815/a67842/04_colls.htm#20425 <code>
"The SQL statement can reference more than one collection. However, the PL/SQL engine bulk-binds only subscripted collections. So, in the following example, it does not bulk-bind the collection sals, which is passed to the function median:

FORALL i IN 1..20
INSERT INTO emp2 VALUES (enums(i), names(i), median(sals), ...);


The next example shows that the collection subscript cannot be an expression:

FORALL j IN mgrs.FIRST..mgrs.LAST
DELETE FROM emp WHERE mgr = mgrs(j+1); -- illegal subscript

"

Tom,

can you explain the phrases 'subscripted collections' and
'subscript cannot be an expression'?

(is mgrs(j+1) being referred to as an expression?)


Thank you






Tom Kyte
October 22, 2001 - 11:18 pm UTC

In the first example, enums(i) is a "subscripted collection" in the forall statement. It is subscripted by the "i" in the forall. median is not.

(j+1) is an expression, yes.

rollback during bulk insert -- how can we gather the details of rolled back records

Nag, October 22, 2001 - 5:05 pm UTC

"
Rollback Behavior
If a FORALL statement fails, database changes are rolled back to an implicit savepoint marked before each execution of the SQL statement. Changes made during previous executions are not rolled back. For example, suppose you create a database table that stores department numbers and job titles, as follows:

CREATE TABLE emp2 (deptno NUMBER(2), job VARCHAR2(15));


Next, you insert some rows into the table, as follows:

INSERT INTO emp2 VALUES(10, 'Clerk');
INSERT INTO emp2 VALUES(10, 'Clerk');
INSERT INTO emp2 VALUES(20, 'Bookkeeper'); -- 10-char job title
INSERT INTO emp2 VALUES(30, 'Analyst');
INSERT INTO emp2 VALUES(30, 'Analyst');


Then, you try to append the 7-character string ' (temp)' to certain job titles using the following UPDATE statement:

DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10, 20);
BEGIN
FORALL j IN depts.FIRST..depts.LAST
UPDATE emp2 SET job = job || ' (temp)'
WHERE deptno = depts(j);
-- raises a "value too large" exception
EXCEPTION
WHEN OTHERS THEN
COMMIT;
END;


The SQL engine executes the UPDATE statement twice, once for each index number in the specified range; that is, once for depts(10) and once for depts(20). The first execution succeeds, but the second execution fails because the string value 'Bookkeeper (temp)' is too large for the job column. In this case, only the second execution is rolled back.

"


Tom, in a scenario like the above example, what I want to do is collect the details of all the records which have been rolled back due to some error or other, so that I can process them later.

Is this kind of exception handling possible with bulk operations? If it is possible, how?

(If this kind of functionality is not available, I am afraid we cannot consider using bulk inserts or bulk updates, because there will be no way to know about the records which have not been inserted or deleted, unless and until we go back into the respective tables and investigate (which is going to be a very cumbersome process).)



Tom Kyte
October 22, 2001 - 11:28 pm UTC

You can use any combo of returning BULK COLLECT or sql%bulk_rowcount to determine this:

ops$tkyte@ORA717DEV.US.ORACLE.COM> DECLARE
  2     TYPE NumList IS TABLE OF NUMBER;
  3     depts NumList := NumList(10, 20, 30 );
  4     l_depts NumList;
  5  BEGIN
  6     FORALL j IN depts.FIRST..depts.LAST
  7        UPDATE emp2 SET job = job || ' (temp)'
  8           WHERE deptno = depts(j) returning deptno bulk collect into l_depts;
  9           -- raises a "value too large" exception
 10  EXCEPTION
 11     WHEN OTHERS THEN
 12            for i in 1 .. depts.count
 13            loop
 14                  dbms_output.put_line( sql%bulk_rowcount(i) );
 15            end loop;
 16            for i in 1 .. l_depts.count
 17            loop
 18                  dbms_output.put_line( 'updated ' || l_depts(i) );
 19            end loop;
 20        COMMIT;
 21  END;
 22  /
2
0
0
updated 10
updated 10

that shows the first entry updated 2 rows, the next two (20 and 30) got 0 rows, and the "updated 10" entries show the value of the depts() collection that "worked".
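
On 9i and up there is also FORALL ... SAVE EXCEPTIONS, which keeps going past failed iterations and records each one in sql%bulk_exceptions -- a minimal sketch against the same emp2 table (this option is not available in the 8i release the question was asked about):

declare
    type NumList is table of number;
    depts NumList := NumList(10, 20, 30);
    bulk_errors exception;
    pragma exception_init( bulk_errors, -24381 );  -- ORA-24381: error(s) in array DML
begin
    forall j in depts.first..depts.last save exceptions
        update emp2 set job = job || ' (temp)'
         where deptno = depts(j);
exception
    when bulk_errors then
        for i in 1 .. sql%bulk_exceptions.count
        loop
            dbms_output.put_line( 'iteration ' ||
                sql%bulk_exceptions(i).error_index || ' failed with ORA-' ||
                sql%bulk_exceptions(i).error_code );
        end loop;
end;
/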

A reader, October 23, 2001 - 8:17 am UTC

hi tom,
when I run your code it gives me an error:

  1  DECLARE
  2         TYPE NumList IS TABLE OF NUMBER;
  3         depts NumList := NumList(10, 20, 30 );
  4         l_depts NumList := NumList();
  5      BEGIN
  6         FORALL j IN depts.FIRST..depts.LAST
  7            UPDATE emp2 SET job = job || ' (temp)'
  8               WHERE deptno = depts(j) returning deptno bulk collect into l_depts;
  9               -- raises a "value too large" exception
 10     EXCEPTION
 11        WHEN OTHERS THEN
 12               for i in 1 .. depts.count
 13              loop
 14                 null;
 15                     dbms_output.put_line( sql%bulk_rowcount(i) );
 16               end loop;
 17               for i in 1 .. l_depts.count
 18               loop
 19                     dbms_output.put_line( 'updated ' || l_depts(i) );
 20               end loop;
 21           COMMIT;
 22*    END;
SQL> /
DECLARE
*
ERROR at line 1:
ORA-06531: Reference to uninitialized collection
ORA-06512: at line 15
ORA-01401: inserted value too large for column 

Tom Kyte
October 23, 2001 - 9:25 am UTC

and your version just happens to be?

A reader, October 23, 2001 - 9:46 am UTC

8.1.6

Help

Rahul, October 23, 2001 - 3:19 pm UTC

I want to pose a question; can anybody help me with how I should do it?

A reader, June 30, 2003 - 5:54 pm UTC

Tom,

I have a question. Let's say I gave this command:
sql> insert into t1 select * from t2;

Now I want to know how many records were inserted without doing a count(*). Is there something like sql%rowcount for SQL?


Tom Kyte
June 30, 2003 - 7:47 pm UTC

in a real programming language -- sure. sqlca.sqlerrd[2] in proc for example.

in sqlplus, well, once you add a real programming language -- sure

begin
insert into t1 select * from t2;
dbms_output.put_line( sql%rowcount );
end;
/

remember -- sqlplus is NOT sql -- sql is sql, there are programming environments like PLSQL, C, C++, Java and even VB that give you procedural things -- a simple thing like sqlplus - it is just a very basic reporting tool.

A reader, July 01, 2003 - 10:34 am UTC

Thanks

why bulk collect

Jean Boulé, July 16, 2003 - 11:13 am UTC

Hi Tom,

Going back to the initial question: why declare a cursor and use bulk collect? I think it would be simpler to use cast/multiset, like this:

declare
l_data myArrayType;
begin
select cast ( multiset ( select ename, hiredate, sal
from emp
)
as myArrayType
)
into l_data
from dual;
for i in 1 .. l_data.count
loop
l_data(i).ename := 'X' || l_data(i).ename;
end loop;
insert into emp2 ( ename, hiredate, sal )
select * from TABLE( cast (l_data as myArrayType) );
end;
/

It gives the same result with less code.

Am I right or am I wrong?


Tom Kyte
July 16, 2003 - 11:57 am UTC

time for a benchmark eh?  first, I'm not sure it is "easier"

ops$tkyte@ORA920LAP> create type myScalarType as object (
  2   OWNER               VARCHAR2(30),
  3   OBJECT_NAME         VARCHAR2(30),
  4   SUBOBJECT_NAME      VARCHAR2(30),
  5   OBJECT_ID           NUMBER,
  6   DATA_OBJECT_ID      NUMBER,
  7   OBJECT_TYPE         VARCHAR2(18),
  8   CREATED             DATE,
  9   LAST_DDL_TIME       DATE,
 10   TIMESTAMP           VARCHAR2(19),
 11   STATUS              VARCHAR2(7),
 12   TEMPORARY           VARCHAR2(1),
 13   GENERATED           VARCHAR2(1),
 14   SECONDARY           VARCHAR2(1)
 15  )
 16  /

Type created.

ops$tkyte@ORA920LAP> create type myTableType as table of myScalarType
  2  /

Type created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create table t1 as select * from all_objects where 1=0;

Table created.


ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create or replace procedure object_way
  2  as
  3          l_data myTableType;
  4  begin
  5          select cast( multiset( select myScalarType(
  6                                    OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
  7                                    DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  8                                    TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY)
  9                               from all_objects
 10                              where rownum <= 5000 ) as myTabletype )
 11        into l_data
 12        from dual;
 13
 14          insert into t1
 15      select * from TABLE( cast( l_data as myTableType ) );
 16  end;
 17  /

Procedure created.


versus (it would be a bit more code in 8i, yes -- you would have to declare an array per column -- but not any more tedious than declaring the object type above)...



ops$tkyte@ORA920LAP> create table t2 as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create or replace procedure table_way(p_n in number default 100)
  2  as
  3          type array is table of t2%rowtype;
  4          l_data array;
  5
  6          cursor c is select * from all_objects where rownum <= 5000;
  7  begin
  8          open c;
  9          loop
 10                  fetch c bulk collect into l_data LIMIT p_n;
 11
 12                  forall i in 1 .. l_data.count
 13                     insert into t2 values l_data(i);
 14
 15                  exit when c%notfound;
 16          end loop;
 17          close c;
 18  end;
 19  /

Procedure created.

and runstats says:

ops$tkyte@ORA920LAP> exec runstats_pkg.rs_start

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec begin for i in 1 .. 10 loop object_way; end loop; end;

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec runstats_pkg.rs_middle

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec begin for i in 1 .. 10 loop table_way(750); end loop; end;

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec runstats_pkg.rs_stop(100)
Run1 ran in 845 hsecs
Run2 ran in 568 hsecs
run 1 ran in 148.77% of the time

objects take longer by the wall clock

Name                                  Run1        Run2        Diff
STAT...buffer is not pinned co     200,605     200,710         105
LATCH.shared pool                      524         409        -115
STAT...recursive calls                 432         555         123
LATCH.simulator hash latch          14,649      14,518        -131
LATCH.row cache enqueue latch       10,714      10,468        -246
LATCH.row cache objects             15,954      15,708        -246
STAT...Elapsed Time                    851         590        -261
LATCH.library cache                  1,047         769        -278
STAT...CPU used by this sessio         819         504        -315
STAT...CPU used when call star         819         504        -315
STAT...recursive cpu usage             816         494        -322
LATCH.cache buffers lru chain        1,001       1,666         665
STAT...dirty buffers inspected           0         985         985
STAT...free buffer inspected             0         993         993
LATCH.checkpoint queue latch           782       1,857       1,075
LATCH.cache buffers chains         434,257     435,907       1,650
STAT...redo size                 5,913,284   5,926,764      13,480
STAT...session pga memory          -65,536           0      65,536

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
487,691     489,977       2,286     99.53%

PL/SQL procedure successfully completed.

and the other issues I see are:

o no limit clause on the select into.  for "big" tables, this is not good, you NEED limit to make this practical

o people don't like those objects "littering" their schema (for whatever reason; it doesn't bother me personally, that is what a data dictionary is for)

o you have two pieces of "code" to fix if you change the structure even a little bit -- the object type and the code itself

A reader, August 08, 2003 - 2:37 am UTC

Hi Tom,

Back to the original question (PL/SQL script duplicated below). If one of the columns to be inserted is a constant, say SYSDATE, do I need to create a PL/SQL table for that column and populate it with SYSDATEs?


declare
type testarray is table of varchar2(3000) index by binary_integer;

v_ename testarray;
v_empno testarray;
v_sal testarray;

cursor cu_cursor is select empno, ename, sal from emp;
begin
open cu_cursor;

loop
fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
LIMIT 1000;

forall i in 1 .. v_empno.count
insert into testemp_table( empno, ename, sal )
values
( v_empno(i), v_ename(i), v_sal(i) );

exit when cu_cursor%notfound;
end loop;
close cu_cursor;
end;


Tom Kyte
August 10, 2003 - 11:29 am UTC

do you want a constant sysdate for all rows inserted? If so, YES, you need to get SYSDATE once and then use it over and over.

Else, each insert will get sysdate as of right then and there. Every BULK insert (every set of 1000 rows) could have its own unique sysdate if it takes a second to fetch/insert 1000 rows.
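
A sketch of the "constant" case, capturing SYSDATE once before the loop (the LOAD_DATE column on TESTEMP_TABLE is assumed for illustration):

declare
    type testarray is table of varchar2(3000) index by binary_integer;
    v_ename testarray;
    v_empno testarray;
    v_sal   testarray;
    l_now   constant date := sysdate;  -- fetched once, reused for every row
    cursor cu_cursor is select empno, ename, sal from emp;
begin
    open cu_cursor;
    loop
        fetch cu_cursor bulk collect into v_empno, v_ename, v_sal limit 1000;

        forall i in 1 .. v_empno.count
            insert into testemp_table( empno, ename, sal, load_date )
            values ( v_empno(i), v_ename(i), v_sal(i), l_now );

        exit when cu_cursor%notfound;
    end loop;
    close cu_cursor;
end;
/

No extra PL/SQL table is needed for the constant -- a plain scalar is legal inside the FORALL insert, exactly like the lv_comm parameter in the pr_array example further up the page.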

rowid in the object

Rd, August 11, 2003 - 4:35 am UTC

Hi,

Can I get the rowid into the array along with empno?

I mean, can I have something like
"create or replace type myScalarType as object (rowid rowid, empno varchar2(20) )"

I have got to pass an array of (rowid, empno) to another procedure as an argument.

TIA


Tom Kyte
August 11, 2003 - 7:41 am UTC

a@ORA920> create type x as object ( rid rowid, empno varchar2(20) )
2 /

Warning: Type created with compilation errors.

a@ORA920> show err
Errors for TYPE X:

LINE/COL ERROR
-------- -----------------------------------------------------------------
1/24 PLS-00530: Illegal type used for object type attribute: 'ROWID'.



no, no rowids however -- you can use varchar2(18) just as easily. You would use rowidtochar and chartorowid in your SQL.
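
A sketch of that (the type names are made up):

create or replace type myRidType as object ( rid varchar2(18), empno varchar2(20) )
/
create or replace type myRidArray as table of myRidType
/
-- populate with rowidtochar, and convert back with chartorowid when you use it:
select myRidType( rowidtochar(rowid), empno ) from emp;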

Scalar type/PLSQL table

Rd, August 11, 2003 - 8:25 am UTC

Hi,

Thanks for that rowidtochar solution.
I was just wondering about this creating of objects.
Can we keep creating as many types as we want? What I mean to ask is, should we be careful and make only really useful ones?

If some other user by chance uses the same name for some other purpose by saying "create or replace .. " then the procedure can't run.
Aren't PL/SQL tables safer in such cases?

Also, can PL/SQL tables be sent as arguments to procedures?

TIA


Tom Kyte
August 11, 2003 - 8:37 am UTC

...
should we be careful and make only really useful ones.
......


that sounds "obvious" -- that would be true for anything and everything!



..
If some other user by chance uses the same name for some other purpose by saying
...

sounds like you need "source code control" or some tool to help you mediate these things? what if some other developer does "create or replace your_really_important_pkg"??? same thing. types are not "special" in this regard.


PLSQL table types are nice -- if all you need is plsql access to them. if you want to use SQL on them -- you need a top level type in SQL.

A reader, August 11, 2003 - 8:42 am UTC

Hi,

Sorry for the dumb questions.

Please tell me how I can pass PL/SQL tables as arguments to another procedure.

TIA

Tom Kyte
August 11, 2003 - 9:56 am UTC

just do it? like you pass a number? no different.
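
The one prerequisite is that the table type be declared somewhere both the caller and the callee can see it -- typically a package specification. A minimal sketch (package and procedure names are made up):

create or replace package demo_pkg
as
    type testarray is table of varchar2(3000) index by binary_integer;
    procedure print_all( p_vals in testarray );
end;
/
create or replace package body demo_pkg
as
    procedure print_all( p_vals in testarray )
    is
    begin
        for i in 1 .. p_vals.count
        loop
            dbms_output.put_line( p_vals(i) );
        end loop;
    end;
end;
/
declare
    l_vals demo_pkg.testarray;
begin
    l_vals(1) := 'hello';
    l_vals(2) := 'world';
    demo_pkg.print_all( l_vals );  -- passed like any other parameter
end;
/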

thanks

A reader, August 11, 2003 - 10:39 am UTC

Hi,

Thanks. I was trying to pass a PL/SQL table as an argument but it didn't work, so I thought the only way out was to use scalar types.

But now that I know it should work, I can make it work.
That is why I put some dumb Qs.
All I wanted to know is: when PL/SQL tables can do the job, why am I creating an object and a type, etc.?

If possible, could you suggest where the scalar types are more appropriate?

Thanks again


Problem in Forall clause

ARC, September 17, 2003 - 1:42 pm UTC

Hi Tom,
I am using the below code to do some inserts and updates.

type Carray is table of dwadmin.tb_sdw000_ge_gls_tmp.alt_customer_number%type index by binary_integer;
cust_tab Carray;

open cur_gen(v_fm, v_fy);
loop
fetch cur_gen bulk collect into cust_tab LIMIT 100;
v_rec_cnt := v_rec_cnt + 1;
forall i in 1 .. cust_tab.count
DBMS_OUTPUT.PUT_LINE(cust_tab.count);
--get_mtd_sales_amt(cust_tab).cust_num,p_bus_tier1,v_fm,v_fy,p_run_dt);
exit when cur_gen%notfound;
end loop;
close cur_gen;
get_mtd_sales_amt is a private procedure which does the inserts and updates.
I am getting this error:
ERROR at line 233:
ORA-06550: line 233, column 3:
PLS-00103: Encountered the symbol "DBMS_OUTPUT" when expecting one of the
following:
. ( * @ % & - + / at mod rem select update <an exponent (**)>
delete insert ||
The symbol "." was substituted for "DBMS_OUTPUT" to continue.
ORA-06550: line 233, column 39:
PLS-00103: Encountered the symbol ";" when expecting one of the following:
. ( * % & - + / at mod rem select update <an exponent (**)>
delete insert ||
ORA-06550: line 236, column 6:
PLS-00103: Encountered the symbol "END"

Please help me to overcome this. Thanks in advance.



Tom Kyte
September 17, 2003 - 5:27 pm UTC

forall cannot have any code inside of it except for the SINGLE dml statement.

it is not really a "loop", it is an array (bulk) processing directive.

so, the dbms_output is misplaced. it cannot go there, put it IN FRONT of the forall statement.

then, follow the forall statement with dml, not a procedure.

if what you want to do is call a procedure, that is coded using a traditional for loop

for i in 1 .. cust_tab.count
loop
get_mtd_sales_amt( .... );
end loop;


Can't bulk fetch with explicit cursor into user defined record

A reader, October 08, 2003 - 11:03 am UTC

Tom can you explain this...

CAN'T bulk fetch with explicit cursor into user defined record...

SQL> ed
Wrote file afiedt.buf

  1  declare
  2     cursor c is select t.table_name, i.index_name
  3                   from all_tables t, all_indexes i
  4                  where t.table_name = i.table_name
  5                    and t.owner = i.table_owner;
  6     type t_rec is record (
  7        table_name all_tables.table_name%type,
  8        index_name all_indexes.index_name%type);
  9     type t_tab is table of t_rec;
 10     l_tab t_tab;
 11  begin
 12     open c;
 13     fetch c bulk collect into l_tab;
 14     close c;
 15* end;
SQL> /
   fetch c bulk collect into l_tab;
                             *
ERROR at line 13:
ORA-06550: line 13, column 30:
PLS-00597: expression 'L_TAB' in the INTO list is of wrong type
ORA-06550: line 13, column 4:
PL/SQL: SQL Statement ignored

CAN bulk fetch with implicit cursor into user defined record...

SQL> ed
Wrote file afiedt.buf

  1  declare
  2     type t_rec is record (
  3        table_name all_tables.table_name%type,
  4        index_name all_indexes.index_name%type);
  5     type t_tab is table of t_rec;
  6     l_tab t_tab;
  7  begin
  8     select t.table_name, i.index_name
  9       bulk collect into l_tab
 10       from all_tables t, all_indexes i
 11      where t.table_name = i.table_name
 12        and t.owner = i.table_owner;
 13* end;
SQL> /

PL/SQL procedure successfully completed.


CAN bulk fetch with explicit cursor into rowtype record...

SQL> create view v as select t.table_name, i.index_name from all_tables t, all_i
ndexes i where t.table_name = i.table_name and t.owner = i.table_owner;

View created.

SQL> ed
Wrote file afiedt.buf

  1  declare
  2     cursor c is select t.table_name, i.index_name
  3                   from all_tables t, all_indexes i
  4                  where t.table_name = i.table_name
  5                    and t.owner = i.table_owner;
  6     type t_tab is table of v%rowtype;
  7     l_tab t_tab;
  8  begin
  9     open c;
 10     fetch c bulk collect into l_tab;
 11     close c;
 12* end;
SQL> /

PL/SQL procedure successfully completed.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production
PL/SQL Release 9.2.0.2.0 - Production
CORE    9.2.0.2.0       Production
TNS for HPUX: Version 9.2.0.2.0 - Production
NLSRTL Version 9.2.0.2.0 - Production


Why doesn't the first method work, but the next two do? 

I believe I need to use an explicit cursor because I need to limit the rows fetched (to 100 or so). Does this mean I need to type out each field of the record in the fetch? 


 

Tom Kyte
October 08, 2003 - 11:25 am UTC

ops$tkyte@ORA920LAP> declare
  2     cursor c is select t.table_name, i.index_name
  3                   from all_tables t, all_indexes i
  4                  where t.table_name = i.table_name
  5                    and t.owner = i.table_owner
  6                    and rownum < 10;
  7     type t_tab is table of c%rowtype;
  8     l_tab t_tab;
  9  begin
 10     open c;
 11     fetch c bulk collect into l_tab;
 12     close c;
 13  end;
 14  /

PL/SQL procedure successfully completed.



you made it more complicated than it needed to be!  just use c%rowtype

Excellent

A reader, October 08, 2003 - 11:58 am UTC

Thanks for the very quick reply.

It's still strange tho'. Exactly how does the parser decide whether record types match?

And I sorta wanted to have the types exposed in the package spec, and the cursor hidden in the package body... Perhaps I'm being fussy.

Having problems with your example

Rob, October 08, 2003 - 3:56 pm UTC

SQL> declare
  2  cursor c is select t.table_name, i.index_name
  3                       from all_tables t, all_indexes i
  4                      where t.table_name = i.table_name
  5                        and t.owner = i.table_owner
  6                        and rownum < 10;
  7         type t_tab is table of c%rowtype;
  8         l_tab t_tab;
  9      begin
 10        open c;
 11        fetch c bulk collect into l_tab;
 12        close c;
 13     end;
 14  /
      fetch c bulk collect into l_tab;
                                *
ERROR at line 11:
ORA-06550: line 11, column 33:
PLS-00597: expression 'L_TAB' in the INTO list is of wrong type
ORA-06550: line 11, column 7:
PL/SQL: SQL Statement ignored

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.2.0 - Production
PL/SQL Release 9.2.0.2.0 - Production
CORE    9.2.0.2.0       Production
TNS for Solaris: Version 9.2.0.2.0 - Production
NLSRTL Version 9.2.0.2.0 - Production

Am I missing something?

Thanks 

Tom Kyte
October 09, 2003 - 3:46 pm UTC

they seem to have "made it work" in 9iR2 patch 9203 and above.

Can this be used for the following?

Vinnie, December 05, 2003 - 5:34 pm UTC

Tom,

I have a similar situation with a table containing the following attributes:
id
db_table_name
db_col_name

This contains a list of many tables, each having a varying number of columns.
I need to be able to query this table for a given id, then retrieve the data in the actual table based on the columns in the master table, then build inserts from the results into another empty table.

I.e
select id,db_table_name, db_col_name from master_tab
where id = 'EMP';

result set:
EMP EMP emp_no
EMP EMP emp_fname
EMP EMP2 emp_name
...
...

grab data:
select emp_no,emp_fname into ??? from emp;
select emp_name into ??? from emp2;

inserts:
insert into result_table (identifier, table_name, value)
values ('EMP','EMP',1);
insert into result_table (identifier, table_name, value)
values ('EMP','EMP','VS');
insert into result_table (identifier, table_name, value)
values ('EMP','EMP2','SA');

I would like to dynamically build these selects and inserts.
There may be many attributes per table.
What would be the best way to accomplish this?

Thanks

Tom Kyte
December 06, 2003 - 8:57 am UTC

you are going to write code. I would not write code. I would produce a flat file that can be loaded using any load utility (see </code> http://asktom.oracle.com/~tkyte/flat <code>

Unknown column name?

Vinnie, December 08, 2003 - 10:07 am UTC

How can I use this if I do not know the column name
I wish to insert into?



Table Name

Vinnie, December 08, 2003 - 2:30 pm UTC

Tom,

I figured out how to get the col_name using:
l_descTbl(i).col_name.

Is there any way to get the actual table that was used in the query?

i.e.

Select * from emp;

Can I get EMP somehow?

Tom Kyte
December 09, 2003 - 5:45 am UTC

no, because a query doesn't normally use a "single table"

select * from emp, dept....



Collection of objects vs Scalar arrays

Su, December 18, 2003 - 4:58 pm UTC

Tom,

If I use more than one scalar array to bulk collect some
values, and I would like to use these arrays in a subquery,
how can I correlate the values from the different arrays? Similar to column_value, is there any pseudo column (such as column_index) to correlate the various arrays?

INSERT INTO tab1
(SELECT c1.column_value, c2.column_value
FROM table(cast(arr1 as ScalarArray)) c1,
table(cast(arr2 as ScalarArray)) c2
WHERE c1.column_index = c2.column_index);

I am trying to find out if a collection of objects is my only
other option.

Thanks in advance,
-Su



Tom Kyte
December 18, 2003 - 6:16 pm UTC

it would just look like this?

loop
fetch cu_cursor bulk collect into v_empno, v_ename, v_sal
LIMIT 1000;

forall i in 1 .. v_empno.count
insert into testemp_table( empno, ename, sal )
values
( v_empno(i), v_ename(i), v_sal(i) );

not sure why you would try to join them?

How can I use multiple insert in FORALL

Suhail, December 22, 2003 - 4:47 pm UTC

Tom,

I want to use multiple INSERTs for a master-detail insert. I am reading data from a temp table and inserting into three different tables; here is my code:

declare
type testarray is table of varchar2(3000) index by binary_integer;
v_lname testarray;
v_fname testarray;
v_address testarray;
v_city testarray;
v_phone1 testarray;
v_address_flag testarray;
v_phone_flag testarray;
v_fileid testarray;
cursor c is
select fileid,last_name, first_name, address,city,phone1,address_flag
phone_flag
from emp_temp;
begin
open c;
loop
fetch c bulk collect into v_fileid,v_lname,v_fname,v_address,v_city,v_phone1,
v_address_flag,v_phone_flag
LIMIT 10;
FORALL i in 1..v_fileid.count
if v_address_flag(i) = 'Y' and v_phone_flag(i) = 'Y' then
--insert address and phone tables
insert into emp
values(SEQ_EMP.nextval,v_lname(i),v_fname(i));
insert into emp_addr(address,city,emp_id)
values(v_address(i),v_city(i),seq_emp.currval);
insert into emp_ph(phone,emp_id,type)
values(v_phone1(i),seq_emp.currval,'1');
elsif v_address_flag(i) = 'Y' and v_phone_flag(i) = 'N' then
insert into emp
values(SEQ_EMP.nextval,v_lname(i),v_fname(i));
insert into emp_addr(address,city,emp_id)
values(v_address(i),v_city(i),seq_emp.currval);
elsif v_address_flag(i) = 'N' and v_phone_flag(i) = 'N' then
insert into emp
values(SEQ_EMP.nextval,v_lname(i),v_fname(i));
end if;
exit when cr%notfound;
end loop;
close c;
end;



This code does not work. How can I use a conditional IF inside FORALL?

Thanks for your help

Tom Kyte
December 22, 2003 - 6:32 pm UTC

ops$tkyte@ORA9IR2> create table t ( fileid int, lname varchar2(10), fname varchar2(10),
  2                   address varchar2(10), city varchar2(10),
  3                   phone1 varchar2(10), address_flag varchar2(1), phone_flag varchar2(1) );
 
Table created.
 
ops$tkyte@ORA9IR2> create sequence s;
 
Sequence created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 1, 'a1', 'b1', 'aa1', 'cc1', 'pp1', 'Y', 'Y' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1, 'a2', 'b2', 'aa2', 'cc2', 'pp2', 'Y', 'N' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1, 'a2', 'b2', 'aa2', 'cc2', 'pp2', 'N', 'N' );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert
  2    when (1=1) then
  3    into emp (empno,lname,fname) values ( s.nextval, lname, fname )
  4    when (address_flag='Y') then
  5    into emp_addr( address, city, empno ) values ( address, city, s.currval )
  6    when (phone_flag='Y' and address_flag = 'Y') then
  7    into emp_ph( phone, type, empno ) values ( phone1, '1', s.currval )
  8  select * from t;
 
6 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from emp;
 
     EMPNO LNAME      FNAME
---------- ---------- --------------------
         1 a1         b1
         2 a2         b2
         3 a2         b2
 
ops$tkyte@ORA9IR2> select * from emp_addr;
 
ADDRESS    CITY            EMPNO
---------- ---------- ----------
aa1        cc1                 1
aa2        cc2                 2
 
ops$tkyte@ORA9IR2> select * from emp_ph;
 
PHONE      TYPE            EMPNO
---------- ---------- ----------
pp1        1                   1


it is a single insert into statement. 

I cannot use it

suhail, December 22, 2003 - 8:14 pm UTC

Tom,
I cannot use this. I tried a similar approach as indicated by you in one of the threads, but I am hitting bug# 3334077 in 9i - 9.0.1.4.0. As of Dec 31, 2003, server version 9.0.1.4 is desupported (according to Metalink), meaning there will be no Error Correction Support (no bug fixes), so I am bound to use bulk collection. However, I cannot use multiple DML or an IF condition in FORALL. Is there a way that I can still use FORALL, or do I have to use the same old row-by-row style?

Thanks for your input.

Suhail


Tom Kyte
December 23, 2003 - 9:42 am UTC

it seems you filed that bug, it goes beyond the simple example here (you have triggers and such which are not part of this example and they are the underlying cause)

the bug report also states that a backport of the fix may be possible.

I cannot reproduce your issue, however, the bug seems to state it has to do with multi-row inserts only so perhaps:

declare
type testarray is table of varchar2(4000) index by binary_integer;
l_fileid testarray;
l_lname testarray;
l_fname testarray;
l_address testarray;
l_city testarray;
l_phone testarray;
l_address_flag testarray;
l_phone_flag testarray;
begin
select * bulk collect into l_fileid, l_lname, l_fname, l_address, l_city, l_phone, l_address_flag, l_phone_flag
from t;

forall i in 1 .. l_fileid.count
insert
when (1=1) then
into emp (empno,lname,fname) values ( s.nextval, lname, fname )
when (address_flag='Y') then
into emp_addr( address, city, empno ) values ( address, city, s.currval )
when (phone_flag='Y' and address_flag = 'Y') then
into emp_ph( phone, type, empno ) values ( phone1, '1', s.currval )
select l_lname(i) lname, l_fname(i) fname, l_address(i) address, l_city(i) city, l_phone(i) phone1,
l_address_flag(i) address_flag, l_phone_flag(i) phone_flag
from dual;
end;
/


will work. try it out. otherwise, you either

o do it row by row
o request the backport for the trigger fix.

When clause does not work

Suhail, December 23, 2003 - 10:43 am UTC

Tom,

I tried this on my end; I created new_emp instead of the t table and found the following.

  1  declare
  2      type testarray is table of varchar2(4000) index by binary_integer;
  3      l_fileid testarray;
  4      l_lname testarray;
  5      l_fname testarray;
  6      l_address testarray;
  7      l_city testarray;
  8      l_phone testarray;
  9      l_address_flag testarray;
 10      l_phone_flag testarray;
 11  begin
 12      select * bulk collect into l_fileid, l_lname, l_fname, l_address, l_city,
 13  l_phone, l_address_flag, l_phone_flag
 14        from new_emp;
 15      forall i in 1 .. l_fileid.count
 16         insert
 17           when (1=1) then
 18           into emp (empno,lname,fname) values ( SEQ_EMP.nextval, lname, fname )
 19           when (address_flag='Y') then
 20           into emp_addr( address, city, empno ) values ( address, city, SEQ_EMP.currval
 21  )
 22           when (phone_flag='Y' and address_flag = 'Y') then
 23           into emp_ph( phone, type, empno ) values ( phone1, '1', SEQ_EMP.currval )
 24         select l_lname(i) lname, l_fname(i) fname, l_address(i) address,
 25  l_city(i) city, l_phone(i) phone1,
 26                l_address_flag(i) address_flag, l_phone_flag(i) phone_flag
 27           from dual;
 28* end;
SQL> /
         when (address_flag='Y') then
                               *
ERROR at line 19:
ORA-06550: line 19, column 32:
PL/SQL: ORA-00904: invalid column name
ORA-06550: line 17, column 65530:
PL/SQL: SQL Statement ignored

It seems the when clause does not recognise the address_flag column.
Here is desc of new_emp
SQL> desc new_emp
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------------
 FILEID                                             NUMBER(38)
 LNAME                                              VARCHAR2(10)
 FNAME                                              VARCHAR2(10)
 ADDRESS                                            VARCHAR2(10)
 CITY                                               VARCHAR2(10)
 PHONE1                                             VARCHAR2(10)
 ADDRESS_FLAG                                       VARCHAR2(1)
 PHONE_FLAG                                         VARCHAR2(1)
 

Tom Kyte
December 23, 2003 - 11:57 am UTC

the address_flag there is the one coming from dual -- not new_emp.

show us your v$version.  code ran for me

ops$tkyte@ORA9I> declare
  2          type testarray is table of varchar2(4000) index by binary_integer;
  3          l_fileid testarray;
  4          l_lname testarray;
  5          l_fname testarray;
  6      l_address testarray;
  7          l_city testarray;
  8      l_phone testarray;
  9          l_address_flag testarray;
 10          l_phone_flag testarray;
 11  begin
 12          select * bulk collect into l_fileid, l_lname, l_fname, l_address, l_city, l_phone, l_address_flag, l_phone_flag
 13            from t;
 14
 15          forall i in 1 .. l_fileid.count
 16         insert
 17           when (1=1) then
 18           into emp (empno,lname,fname) values ( s.nextval, lname, fname )
 19           when (address_flag='Y') then
 20           into emp_addr( address, city, empno ) values ( address, city, s.currval )
 21           when (phone_flag='Y' and address_flag = 'Y') then
 22           into emp_ph( phone, type, empno ) values ( phone1, '1', s.currval )
 23         select l_lname(i) lname, l_fname(i) fname, l_address(i) address, l_city(i) city, l_phone(i) phone1,
 24                    l_address_flag(i) address_flag, l_phone_flag(i) phone_flag
 25           from dual;
 26  end;
 27  /
 
PL/SQL procedure successfully completed.

ops$tkyte@ORA9I>
ops$tkyte@ORA9I> select * from v$version;
 
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.0.1.2.0 - Production
PL/SQL Release 9.0.1.2.0 - Production
CORE    9.0.1.2.0       Production
TNS for Solaris: Version 9.0.1.2.0 - Production
NLSRTL Version 9.0.1.2.0 - Production
 
 

my version

A reader, December 23, 2003 - 12:01 pm UTC

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
PL/SQL Release 9.0.1.4.0 - Production
CORE    9.0.1.2.0       Production
TNS for Solaris: Version 9.0.1.4.0 - Production
NLSRTL Version 9.0.1.4.0 - Production
 

I am sorry, I was referencing the wrong column names

A reader, December 23, 2003 - 12:27 pm UTC

On my end my table's columns are different, i.e. first_name for fname, emp_id for empno, etc. It was a copy-paste problem. I apologise for that...

BULK Inserts with Indexes

Ivan, January 22, 2004 - 2:19 pm UTC

Hi Tom,

I have a table (9 million rows) that I have to insert into and delete from (roughly 50,000 records each) every night. I also have 29 indexes on that table (used for matching purposes using 39 different algorithms). This update has to be done overnight and be ready for the next morning.

My initial design when the table only had 1 million rows and took 3 hours was:
1) Truncate the table
2) Mark the indexes unusable
3) skip index
4) insert records
5) rebuild indexes
6) analyze the tables

However, because of the volume of an additional 8 million records, it now takes 11 hours to complete, which means it barely makes the morning deadline.

How would BULK/FORALL be effected by the indexes?

What would your stretegy to this problem be?

Thanks,
Ivan

Tom Kyte
January 22, 2004 - 7:57 pm UTC

those six steps took 3 hours?????

you'll have to expand on that. I can do that in MINUTES on my laptop.


So, which of the 6 takes the most time and HOW do you do it?

re BULK Inserts with Indexes

Ivan, January 23, 2004 - 9:59 am UTC

Hi Tom,

Come on - minutes....

Here is my code and roughly the time it takes to process the 1 million records. I do not have the exact timing as the database has grown, but if you need the exact timing I can get the databases to the right size and run it with timing.

1) Truncate the table ...... seconds
2) Mark the indexes unusable .... minutes
3) skip index ...... seconds
4) insert records (1 million) ....... 1 hour
5) rebuild indexes (29 multi field indexes) ....... 1 hour
6) analyze the tables (including histograms) ......1 hour



PROCEDURE prep_global
IS

FUNCTION trunc_table
(p_table_name IN VARCHAR2)
RETURN BOOLEAN IS

v_check_date DATE;
v_check BOOLEAN DEFAULT FALSE;
v_name VARCHAR2(100);
v_table VARCHAR2(200);

BEGIN

BEGIN
SELECT TRUNC(last_update_date)
INTO v_check_date
FROM match_co_2
WHERE ROWNUM =1;

EXCEPTION
WHEN no_data_found THEN
v_check_date := SYSDATE - 2;
END;

IF v_check_date <> TRUNC(SYSDATE)
THEN
v_name := p_table_name;
v_table := ('TRUNCATE TABLE ' || v_name);
-- truncate the table
EXECUTE IMMEDIATE (v_table);

-- delete stats
v_table := ('ANALYZE TABLE '|| v_name || ' DELETE STATISTICS');
EXECUTE IMMEDIATE (v_table);

v_check := TRUE;
END IF;
RETURN v_check;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20058,'E', 'Error in trunc_table function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20058,
'Error in trunc_table function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;

END trunc_table;


FUNCTION unusable_index
(p_table_name user_tables.table_name%type)
RETURN BOOLEAN IS
CURSOR idx_c
IS
SELECT index_name
FROM user_indexes
WHERE table_name = p_table_name
AND uniqueness <>'UNIQUE';
idx_rec idx_c%ROWTYPE;
v_name VARCHAR2(100);
v_index VARCHAR2(200);
BEGIN
OPEN idx_c;
LOOP
FETCH idx_c INTO idx_rec;
EXIT WHEN idx_c%NOTFOUND;
v_name := idx_rec.index_name;
v_index := ('ALTER INDEX ' || v_name || ' UNUSABLE');
EXECUTE IMMEDIATE (v_index);

END LOOP;
CLOSE idx_c;
RETURN TRUE;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20059,'E', 'Error in unusable_index function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20059,
'Error in unusable_index function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;

END unusable_index;

FUNCTION skip_index
RETURN BOOLEAN IS
v_statement VARCHAR2(200);
BEGIN
-- alter session to skip the unusable indexes
v_statement := ('ALTER SESSION SET skip_unusable_indexes=TRUE');
EXECUTE IMMEDIATE (v_statement);
RETURN TRUE;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20060,'E', 'Error in skip_index function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20060,
'Error in skip_index function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;
END skip_index;

FUNCTION insert_match_companies
RETURN BOOLEAN IS

CURSOR comp
IS
SELECT company_id
FROM companies;

CURSOR combo (p_company_id companies.company_id%type)
IS
SELECT mc.company_id,
mca.name AS name_alias,
osfun.space(TRANSLATE(UPPER(mc.name),'-(),',' ')) AS name,
mc.address_1,
mc.address_2,
mc.city,
mc.district,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS telephone_number,
mc.postal_code,
LTRIM(RTRIM(osfun.word(mc.postal_code,1))) as postal_1_word,
mc.parent_id_usage,
mc.country_id,
UPPER(SUBSTR(mc.address_1,1,10)) AS address_1_10,
UPPER(SUBSTR(mc.address_1,1,8)) AS address_1_8,
UPPER(SUBSTR(mc.address_1,1,5)) AS address_1_5,
UPPER(SUBSTR(mc.address_2,1,10)) AS address_2_10,
UPPER(SUBSTR(mc.address_2,1,5)) AS address_2_5,
UPPER(SUBSTR(mc.city,1,10)) AS city_10,
UPPER(SUBSTR(mc.city,1,5)) AS city_5,
UPPER(SUBSTR(mc.city,1,3)) AS city_3,
UPPER(SUBSTR(mc.district,1,3)) AS district_3,
UPPER(SUBSTR(mc.district,1,2)) AS district_2,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,15) AS phone_15,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS phone_12,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,40)),'-(),',' ')) AS name_40,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,30)),'-(),',' ')) AS name_30,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,20)),'-(),',' ')) AS name_20,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,13)),'-(),',' ')) AS name_13,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,10)),'-(),',' ')) AS name_10,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,7)),'-(),',' ')) AS name_7,
osfun.space(TRANSLATE(UPPER(SUBSTR(mca.name,1,3)),'-(),',' ')) AS name_3
FROM companies mc,
company_aliases mca
WHERE mc.company_id = mca.company_id
AND mc.company_id = p_company_id
UNION
SELECT mc.company_id,
mc.name AS name_alias,
osfun.space(TRANSLATE(UPPER(mc.name),'-(),',' ')) AS name,
mc.address_1,
mc.address_2,
mc.city,
mc.district,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS telephone_number,
mc.postal_code,
LTRIM(RTRIM(osfun.word(mc.postal_code,1))) as postal_1_word,
mc.parent_id_usage,
mc.country_id,
UPPER(SUBSTR(mc.address_1,1,10)) AS address_1_10,
UPPER(SUBSTR(mc.address_1,1,8)) AS address_1_8,
UPPER(SUBSTR(mc.address_1,1,5)) AS address_1_5,
UPPER(SUBSTR(mc.address_2,1,10)) AS address_2_10,
UPPER(SUBSTR(mc.address_2,1,5)) AS address_2_5,
UPPER(SUBSTR(mc.city,1,10)) AS city_10,
UPPER(SUBSTR(mc.city,1,5)) AS city_5,
UPPER(SUBSTR(mc.city,1,3)) AS city_3,
UPPER(SUBSTR(mc.district,1,3)) AS district_3,
UPPER(SUBSTR(mc.district,1,2)) AS district_2,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,15) AS phone_15,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS phone_12,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,40)),'-(),',' ')) AS name_40,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,30)),'-(),',' ')) AS name_30,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,20)),'-(),',' ')) AS name_20,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,13)),'-(),',' ')) AS name_13,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,10)),'-(),',' ')) AS name_10,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,7)),'-(),',' ')) AS name_7,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,3)),'-(),',' ')) AS name_3
FROM companies mc
WHERE mc.company_id = p_company_id
AND mc.company_id NOT IN (SELECT /*+ HASH_AJ */ company_id
FROM company_aliases
WHERE UPPER(name) = UPPER(mc.name));


icounter INTEGER;

BEGIN

icounter := 0;
FOR recco IN comp LOOP

FOR rec IN combo (recco.company_id) loop

INSERT /*+ append */ INTO match_co_2
VALUES (rec.company_id,
rec.name_alias,
rec.name,
rec.address_1,
rec.address_2,
rec.city,
rec.district,
rec.telephone_number,
rec.postal_code,
rec.parent_id_usage,
rec.country_id,
rec.address_1_10,
rec.address_1_8,
rec.address_1_5,
rec.address_2_10,
rec.address_2_5,
rec.city_10,
rec.city_5,
rec.city_3,
rec.district_3,
rec.district_2,
rec.phone_15,
rec.phone_12,
rec.name_40,
rec.name_20,
rec.name_13,
rec.name_7,
rec.name_3,
SYSDATE,
rec.postal_1_word,
rec.name_30,
rec.name_10);

IF icounter > 1000 THEN
COMMIT;
icounter := 0;
ELSE
icounter := icounter + 1;
END IF;

END LOOP;
END LOOP;

COMMIT;
RETURN TRUE;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20061,'E', 'Error in insert_match_companies function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20061,
'Error in insert_match_companies function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;
END insert_match_companies;

FUNCTION rebuild_idx
(p_table_name user_tables.table_name%type)
RETURN BOOLEAN IS
CURSOR idx_c
IS
SELECT index_name
FROM user_indexes
WHERE table_name = p_table_name
AND uniqueness <>'UNIQUE';
v_name VARCHAR2(100);
v_index VARCHAR2(200);
BEGIN
FOR recname in idx_c LOOP
v_name := recname.index_name;
v_index := ('ALTER INDEX ' || v_name || ' REBUILD NOLOGGING');
EXECUTE IMMEDIATE (v_index);
END LOOP;
RETURN TRUE;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20061,'E', 'Error in rebuild_idx function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20063,
'Error in rebuild_idx function ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;

END rebuild_idx;

BEGIN

-- analyze the tables

-- truncate the table
IF trunc_table('MATCH_CO_2')
THEN
-- make the indexes unusable
IF unusable_index('MATCH_CO_2')
THEN
-- alter the system to skip the unusable indexes
IF skip_index
THEN
-- populate the temp match tables
IF insert_match_companies
THEN
-- rebuild the indexes
IF rebuild_idx('MATCH_CO_2')
THEN
-- analyze the tables
IF analyze_table_stats('MATCH_CO_2')
THEN
NULL;
END IF;
END IF;
END IF;
END IF;
END IF;
END IF;

EXCEPTION
WHEN OTHERS
THEN
/*
|| Anonymous block inside the exception handler lets you
|| declare local variables to hold the error code info
*/
DECLARE
error_code NUMBER := SQLCODE;
error_msg VARCHAR2(300) := SQLERRM;
BEGIN
mev_audit.log_audit(-20061,'E', 'Error in prep_global ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
RAISE_APPLICATION_ERROR(-20063,
'Error in prep_global ' || ' error_code: ' || error_code ||' error_msg occurred: ' || error_msg);
END;

END prep_global;





Tom Kyte
January 23, 2004 - 6:25 pm UTC

come on -- YES MINUTES -- MINUTES


your big mistakes are:

a) insert /*+ APPEND */ only works with SELECT

b) you are doing slow by slow (oops, gotta stop doing that -- i meant "row by row") processing


change the code to be

insert /*+ APPEND */ into table SELECT......


lose the procedural code.
lose the commits.

and you too can be "minutes".

since you have NO indexes being maintained
AND
you are using append (and letting append work by doing insert as select)

you will generate no undo for the load
HENCE
the incremental commits which are seriously slowing you down in addition to the slow by slow processing (doh, did it again) will go away!!!!!
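For illustration, the whole load step above could collapse into one statement shaped like this (a sketch only -- cut down to three columns, with the column names as stand-ins; the real statement would carry the full select list of the combo cursor. UNION ALL plus NOT EXISTS replaces the UNION / NOT IN anti-join, so no distinct sort is done):

insert /*+ APPEND */ into match_co_2 ( company_id, name_alias, name )
select mc.company_id,
       mca.name,
       osfun.space( translate( upper(mc.name), '-(),', ' ' ) )
  from companies mc, company_aliases mca
 where mc.company_id = mca.company_id
union all
select mc.company_id,
       mc.name,
       osfun.space( translate( upper(mc.name), '-(),', ' ' ) )
  from companies mc
 where not exists ( select null
                      from company_aliases mca
                     where mca.company_id = mc.company_id
                       and upper(mca.name) = upper(mc.name) );
commit;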



re BULK Inserts with Indexes

Ivan, January 23, 2004 - 10:21 am UTC

Hi Tom,

Before you KILL me on the inserting one company at a time above, I just wanted you to know that the companies and company_aliases tables that I am retrieving the data from are changing constantly. This caused "snapshot too old" (ORA-01555) errors when I tried to gather the information at the beginning in one cursor.

Thanks,
Ivan

Tom Kyte
January 23, 2004 - 6:28 pm UTC

and how does doing it row by row -- which is slower than insert as select -- help with that? It just makes the entire thing run SLOWER (and your commits don't help the 1555 at all!)...

nope, I don't get it. no way it would 1555 on the much faster insert as select if it doesn't on the painfully slow insert slow by slow.

Re bulk Insert

Ivan, January 28, 2004 - 10:18 am UTC

I tried it with your recommendations but I had to struggle with the DBA before it went through successfully.

The rollback segment had to be increased from 1G to 2G because the insert took up 650 meg of undo (6 million rows for this insert), which caused an "ORA-01650: unable to extend rollback segment R03 by 2560 in tablespace RBS". He claims that a transaction of 650 meg is much too big and would instead rather have me do it in smaller runs (your thoughts please).

As far as timing for the insert of the 6 million rows it took roughly 3 1/2 hours. How the heck do you do it in "minutes" like you stated in your earlier post?

Thanks,
Ivan

Tom Kyte
January 28, 2004 - 10:41 am UTC

My thoughts should be obvious - you need to size rbs for the size of your transactions -- for your workload.


ops$tkyte@ORA9IR2> set timing on
ops$tkyte@ORA9IR2> l
  1  insert /*+ APPEND */ into test select b.* from big_table.big_table b,
  2    ( select rownum from dual union all select rownum from dual union all
  3    select rownum from dual union all select rownum from dual union all select
  4*   rownum from dual union all select rownum from dual )
ops$tkyte@ORA9IR2> /
 
6000000 rows created.
 
Elapsed: 00:01:13.74
ops$tkyte@ORA9IR2>


like that?   have you run a tkprof to see where your low hanging fruit is?  (that was a tecra m1 laptop, nothing fancy) 

Re Bulk Inserts

Ivan, January 29, 2004 - 9:58 pm UTC

You are right about the timing -- I had left the primary key index on, which was causing the problem.

So if I read your statement correctly you agree with me in that the rbs should be able to handle a 650 meg transaction. If a 650 meg transaction is not too large then what is the breaking point where you would start creating smaller transactions?

What is the difference between rebuilding the index and dropping/recreating the index after the load? I read in one of your responses that you always rebuild instead of drop/create indexes.

Thanks,
Ivan



Tom Kyte
January 30, 2004 - 8:08 am UTC

I create transactions based on business requirements only, not based on size.


Now that you have no indexes -- you should be generating almost no undo.


I like "setting unusable" and then "rebuilding" since the index never "goes away".

Consider:

drop index i1;
drop index i2;
load table
create index i2;
<process fails here and no one noticed>


Now, index i1 is missing and system performance goes down the tubes and you spend a day trying to figure out "why".


Consider:

set unusable i1;
set unusable i2;
load table
rebuild i2;
<process fails here and no one noticed>

Now, index i1 is unusable and the end users tell you about it since they get an error message. You fix the problem immediately. system performance is not impacted, you wasted very very little time.
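In SQL, the pattern is simply (a sketch, with t, stage and i1 as stand-in names):

alter index i1 unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into t select * from stage;
commit;

alter index i1 rebuild;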



Bulk collects or insert select from

Duncan, February 03, 2004 - 6:12 am UTC

Tom

I have been tasked with trying to speed up the following procedure. We are using 8.1.7.4. Using tkprof I can see this procedure's running time is far too long
(approx 2 hours) and that it mainly consists of the count:

SELECT Count(*)
INTO V_Check
FROM Linked_Mortality_Data2
WHERE NHS_Number = rec_Cur.NHS_NO
AND YEAR = V_Quarter;


and then also the INSERT :


INSERT INTO MORTALITY_DATA_SET
SELECT NHS_NUMBER,
ANONYMISED_ID,
DECEASED_FIRST_FORENAME,
DECEASED_SECOND_FORENAME,
...
...
WHERE NHS_Number = rec_Cur.NHS_NO
AND YEAR = V_Quarter;


(The tables behind the insert query and the cursor are large, with lots of data; they have been indexed, but I'm sure this should be running faster.)

The count is there because the procedure must check that only 1 row will be inserted into the MDSET table, and if this is not the case then those rows are logged in an error table.

So, to speed up the procedure, I was wondering whether a BULK COLLECT into an array followed by a FORALL loop
to do the insert (logging failures as I went) would run quicker. Since I am on 8.1.7.4, would I have to BULK COLLECT into an object type rather than a
plsql table?

If this method would not be quicker than the INSERT SELECT FROM that is already used, then how could I speed up what I've got?
Joining the cursor to the INSERT SELECT FROM and maybe losing the count query would help, but then I would need to log which rows did not make it into the table -- how could I achieve this and still speed up the query?
(I can think of a decode statement which would calculate the V_Quarter field, but that wouldn't shave much time off.)


CREATE OR REPLACE PROCEDURE P_M4 ( PO_SUCCESS_OR_FAILURE OUT VARCHAR2,
P_Batch_ID IN MORT_BATCH_RECORDS.BATCH_ID%TYPE)


IS

V_NHS_NO MBR.NHS_NO%TYPE;
V_Batch_Record_ID MBR.BATCH_RECORD_ID%TYPE;
V_Field_ID FIELDS.FIELD_ID%TYPE;
V_Type MM.TYPE%TYPE;
V_ERROR_Count NUMBER(10) := 0;
V_Success_Count NUMBER(10) := 0;
V_Fail_Count NUMBER(10) := 0;
V_Check NUMBER(10);
VO_SUCCESS_OR_FAILURE VARCHAR2(1000);
V_Error VARCHAR2(1000);
V_APR_02 DATE := '01/04/2002';
V_Month CHAR(2);
V_Quarter DATE;


BEGIN

V_Type := 'E';

SELECT FIELD_ID
INTO V_Field_ID
FROM FIELDS
WHERE COLUMN_NAME = 'GP_CODE'
AND TABLE_NAME = 'LM_DATA'
AND SYSTEM = 'O';


FOR rec_Cur IN (SELECT mbr.NHS_NO,
mbr.Batch_Record_ID,
ons.Date_of_Death DOD
FROM MBR,
ONS_DATA_SET ons
WHERE mbr.NHS_NO = ons.NHS_Number
AND mbr.Batch_ID = P_Batch_ID
AND mbr.Stage_ID = 3 ) LOOP

V_Batch_Record_ID := rec_Cur.Batch_Record_ID;

V_ERROR_Count := 0;

IF rec_Cur.DOD IS NOT NULL THEN
IF rec_Cur.DOD < V_APR_02 THEN
V_Quarter := V_APR_02;

ELSE
V_Month := TO_CHAR(rec_Cur.DOD, 'MM');

IF V_Month IN ('01','02','03') THEN
V_Month := '01';
ELSIF V_Month IN ('04','05','06') THEN
V_Month := '04';
ELSIF V_Month IN ('07','08','09') THEN
V_Month := '07';
ELSIF V_Month IN ('10','11','12') THEN
V_Month := '10';
END IF;

V_Quarter := TO_DATE('01/'||V_Month||TO_CHAR(rec_Cur.DOD, 'RRRR'), 'DD/MM/RRRR');

END IF;

END IF; -- DOD null check

SELECT Count(*)
INTO V_Check
FROM Linked_Mortality_Data2
WHERE NHS_Number = rec_Cur.NHS_NO
AND YEAR = V_Quarter;

IF V_Check = 1 THEN

INSERT INTO MDSET
SELECT NHS_NUMBER,
ANONYMISED_ID,
DECEASED_FIRST_FORENAME,
DECEASED_SECOND_FORENAME,
DECEASED_THIRD_FORENAME,
DECEASED_SURNAME,
DECEASED_ALIASES,
DECEASED_MAIDEN_NAME,
DATE_OF_BIRTH,
DATE_OF_DEATH,
PLACE_OF_DEATH,
DECEASED_USUAL_ADDRESS,
DEC_USUAL_ADDRESS_POSTCODE,
CAUSE_OF_DEATH_LINE_1,
CAUSE_OF_DEATH_LINE_2,
CAUSE_OF_DEATH_LINE_3,
CAUSE_OF_DEATH_LINE_4,
CAUSE_OF_DEATH_LINE_5,
CAUSE_OF_DEATH_LINE_6,
CAUSE_OF_DEATH_LINE_7,
CAUSE_OF_DEATH_LINE_8,
UNDERLYING_CAUSE_OF_DEATH,
STHA_OF_RESIDENCE,
PCT_OF_RESIDENCE,
DATE_OF_REGISTRATION,
SEX,
CAUSE_OF_DEATH_ICD_CODE_1,
CAUSE_OF_DEATH_ICD_CODE_2,
CAUSE_OF_DEATH_ICD_CODE_3,
CAUSE_OF_DEATH_ICD_CODE_4,
CAUSE_OF_DEATH_ICD_CODE_5,
CAUSE_OF_DEATH_ICD_CODE_6,
CAUSE_OF_DEATH_ICD_CODE_7,
CAUSE_OF_DEATH_ICD_CODE_8,
CERTIFYING_GP,
PLACE_OF_DEATH_CODE,
GP_CODE,
GP_NAME,
GP_PRACTICE_CODE,
NULL,
PCT_CODE,
HA_CODE,
SYSDATE,
P_Batch_ID,
V_Batch_Record_ID
FROM Linked_Mortality_Data2
WHERE NHS_Number = rec_Cur.NHS_NO
AND YEAR = V_Quarter;


V_Success_Count := V_Success_Count + 1;

ELSIF V_Check = 0 THEN

INSERT INTO MM
(MESSAGE_ID,
BATCH_RECORD_ID,
STAGE_ID,
MESSAGE_TYPE_ID,
FIELD_ID,
TYPE,
DATESTAMP,
BATCH_ID)
VALUES (
SEQ_MESSAGES.NEXTVAL,
V_Batch_Record_ID,
4,
3003,
V_Field_ID,
V_Type,
SYSDATE,
P_BATCH_ID);

V_Fail_Count := V_Fail_Count + 1;

ELSIF V_Check > 1 THEN

INSERT INTO MM
(MESSAGE_ID,
BATCH_RECORD_ID,
STAGE_ID,
MESSAGE_TYPE_ID,
FIELD_ID,
TYPE,
DATESTAMP,
BATCH_ID)
VALUES (
SEQ_MESSAGES.NEXTVAL,
V_Batch_Record_ID,
4,
3004,
V_Field_ID,
V_Type,
SYSDATE,
P_BATCH_ID);

V_Fail_Count := V_Fail_Count + 1;

END IF;

END LOOP;


PO_SUCCESS_OR_FAILURE := 'SUCCESS';

END P_M4;
/


Please let me know how I can get this to run in a reasonable time.

Tom Kyte
February 03, 2004 - 8:26 am UTC

I see AT MOST three inserts and no standalone selects that should be executed here.

In fact, I believe 2 inserts at most. No selects. No procedural code at all.


I have not compiled this code.
I have not run this code.
However, you should be able to read this code and see what it does (your date conversion for example is a simple sql function!)

Pretty sure this is very close to what you want -- might need a tweak or two but this is "just a join" -- really.

In 9i -- we'd be able to get this to a SINGLE INSERT as SELECT with multi-table inserts!!!!


ps -- in 8i, you'll need to use execute immediate and bind in the v_/p_ parameters since plsql won't recognize the analytic functions in there.



CREATE OR REPLACE PROCEDURE P_M4
( PO_SUCCESS_OR_FAILURE OUT VARCHAR2,
P_Batch_ID IN MORT_BATCH_RECORDS.BATCH_ID%TYPE)
IS
V_Field_ID FIELDS.FIELD_ID%TYPE;
BEGIN

SELECT FIELD_ID
INTO V_Field_ID
FROM FIELDS
WHERE COLUMN_NAME = 'GP_CODE'
AND TABLE_NAME = 'LM_DATA'
AND SYSTEM = 'O';

INSERT INTO MDSET
select NHS_NUMBER, ANONYMISED_ID, DECEASED_FIRST_FORENAME,
DECEASED_SECOND_FORENAME, DECEASED_THIRD_FORENAME,
DECEASED_SURNAME, DECEASED_ALIASES, DECEASED_MAIDEN_NAME,
DATE_OF_BIRTH, DATE_OF_DEATH, PLACE_OF_DEATH,
DECEASED_USUAL_ADDRESS, DEC_USUAL_ADDRESS_POSTCODE,
CAUSE_OF_DEATH_LINE_1, CAUSE_OF_DEATH_LINE_2,
CAUSE_OF_DEATH_LINE_3, CAUSE_OF_DEATH_LINE_4,
CAUSE_OF_DEATH_LINE_5, CAUSE_OF_DEATH_LINE_6,
CAUSE_OF_DEATH_LINE_7, CAUSE_OF_DEATH_LINE_8,
UNDERLYING_CAUSE_OF_DEATH, STHA_OF_RESIDENCE, PCT_OF_RESIDENCE,
DATE_OF_REGISTRATION, SEX, CAUSE_OF_DEATH_ICD_CODE_1,
CAUSE_OF_DEATH_ICD_CODE_2,
CAUSE_OF_DEATH_ICD_CODE_3, CAUSE_OF_DEATH_ICD_CODE_4,
CAUSE_OF_DEATH_ICD_CODE_5, CAUSE_OF_DEATH_ICD_CODE_6,
CAUSE_OF_DEATH_ICD_CODE_7, CAUSE_OF_DEATH_ICD_CODE_8,
CERTIFYING_GP, PLACE_OF_DEATH_CODE, GP_CODE, GP_NAME,
GP_PRACTICE_CODE, NULL, PCT_CODE, HA_CODE,
SYSDATE, P_Batch_ID, Batch_Record_ID
from (
SELECT count(*) over (partition by nhs_number, year ) cnt,
NHS_NUMBER, ANONYMISED_ID, DECEASED_FIRST_FORENAME,
DECEASED_SECOND_FORENAME, DECEASED_THIRD_FORENAME,
DECEASED_SURNAME, DECEASED_ALIASES, DECEASED_MAIDEN_NAME,
DATE_OF_BIRTH, DATE_OF_DEATH, PLACE_OF_DEATH,
DECEASED_USUAL_ADDRESS, DEC_USUAL_ADDRESS_POSTCODE,
CAUSE_OF_DEATH_LINE_1, CAUSE_OF_DEATH_LINE_2,
CAUSE_OF_DEATH_LINE_3, CAUSE_OF_DEATH_LINE_4,
CAUSE_OF_DEATH_LINE_5, CAUSE_OF_DEATH_LINE_6,
CAUSE_OF_DEATH_LINE_7, CAUSE_OF_DEATH_LINE_8,
UNDERLYING_CAUSE_OF_DEATH, STHA_OF_RESIDENCE, PCT_OF_RESIDENCE,
DATE_OF_REGISTRATION, SEX, CAUSE_OF_DEATH_ICD_CODE_1,
CAUSE_OF_DEATH_ICD_CODE_2,
CAUSE_OF_DEATH_ICD_CODE_3, CAUSE_OF_DEATH_ICD_CODE_4,
CAUSE_OF_DEATH_ICD_CODE_5, CAUSE_OF_DEATH_ICD_CODE_6,
CAUSE_OF_DEATH_ICD_CODE_7, CAUSE_OF_DEATH_ICD_CODE_8,
CERTIFYING_GP, PLACE_OF_DEATH_CODE, GP_CODE, GP_NAME,
GP_PRACTICE_CODE, NULL, PCT_CODE, HA_CODE,
rec_cur.Batch_Record_ID
FROM Linked_Mortality_Data2,
(SELECT mbr.NHS_NO,
mbr.Batch_Record_ID,
greatest( trunc( dt, 'q' ) , to_date( '01-apr-2002',
'dd-mon-yyyy' ) ) DOD
FROM MBR,
ONS_DATA_SET ons
WHERE mbr.NHS_NO = ons.NHS_Number
AND mbr.Batch_ID = P_Batch_ID
AND mbr.Stage_ID = 3 ) rec_cur
WHERE NHS_Number = rec_Cur.NHS_NO
AND YEAR = rec_cur.DOD
)
where cnt = 1;

INSERT INTO MM
(MESSAGE_ID, BATCH_RECORD_ID, STAGE_ID, MESSAGE_TYPE_ID,
FIELD_ID, TYPE, DATESTAMP, BATCH_ID)
select seq_messages.nextval, batch_record_id, 4,
decode( cnt, 0, 3003, 3004 ), v_field_id,
'E', sysdate, p_batch_id
from (
SELECT count(NHS_Number) over (partition by nhs_no, dod ) cnt,
rec_cur.Batch_Record_ID
FROM Linked_Mortality_Data2,
(SELECT mbr.NHS_NO,
mbr.Batch_Record_ID,
greatest( trunc( dt, 'q' ) , to_date( '01-apr-2002',
'dd-mon-yyyy' ) ) DOD
FROM MBR,
ONS_DATA_SET ons
WHERE mbr.NHS_NO = ons.NHS_Number
AND mbr.Batch_ID = P_Batch_ID
AND mbr.Stage_ID = 3 ) rec_cur
WHERE NHS_Number(+) = rec_Cur.NHS_NO
AND YEAR(+) = rec_cur.DOD
)
where cnt <> 1;

PO_SUCCESS_OR_FAILURE := 'SUCCESS';

END P_M4;
/


re BULK Insert

Ivan, February 03, 2004 - 9:41 am UTC

Hi Tom,

I am still struggling even after removing the join of the tables.

Here is my TKPROF of the insert statement and as you can see it is taking hours. Your thoughts please on how I can speed this thing up.

TKPROF: Release 9.2.0.1.0 - Production on Tue Feb 3 09:27:12 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: dev05_ora_1204.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

ALTER SESSION SET SQL_TRACE = TRUE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.01 0.01 0 0 0 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 106 (MEV)
********************************************************************************

ALTER SESSION SET skip_unusable_indexes=TRUE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 106 (MEV)
********************************************************************************

INSERT /*+ APPEND */ INTO match_co_2
SELECT mc.company_id,
mc.name AS name_alias,
osfun.space(TRANSLATE(UPPER(mc.name),'-(),',' ')) AS name,
mc.address_1,
mc.address_2,
mc.city,
mc.district,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS telephone_number,
mc.postal_code,
mc.parent_id_usage,
mc.country_id,
UPPER(SUBSTR(mc.address_1,1,10)) AS address_1_10,
UPPER(SUBSTR(mc.address_1,1,8)) AS address_1_8,
UPPER(SUBSTR(mc.address_1,1,5)) AS address_1_5,
UPPER(SUBSTR(mc.address_2,1,10)) AS address_2_10,
UPPER(SUBSTR(mc.address_2,1,5)) AS address_2_5,
UPPER(SUBSTR(mc.city,1,10)) AS city_10,
UPPER(SUBSTR(mc.city,1,5)) AS city_5,
UPPER(SUBSTR(mc.city,1,3)) AS city_3,
UPPER(SUBSTR(mc.district,1,3)) AS district_3,
UPPER(SUBSTR(mc.district,1,2)) AS district_2,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,15) AS phone_15,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS phone_12,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,40)),'-(),',' ')) AS name_40,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,20)),'-(),',' ')) AS name_20,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,13)),'-(),',' ')) AS name_13,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,7)),'-(),',' ')) AS name_7,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,3)),'-(),',' ')) AS name_3,
sysdate,
LTRIM(RTRIM(osfun.word(mc.postal_code,1))) as postal_1_word,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,30)),'-(),',' ')) AS name_30,
osfun.space(TRANSLATE(UPPER(SUBSTR(mc.name,1,10)),'-(),',' ')) AS name_10
FROM companies mc

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.12 0.17 0 0 0 0
Execute 1 6886.45 8228.53 475289 1088093 18356 3714322
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6886.57 8228.70 475289 1088093 18356 3714322

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 106 (MEV)
error during execute of EXPLAIN PLAN statement
ORA-26028: index MEV.XIE9MC2_CTRY_N13_A210 initially in unusable state

parse error offset: 94
********************************************************************************

select file#
from
file$ where ts#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1350 0.23 0.50 0 0 0 0
Execute 1350 0.18 0.26 0 0 0 0
Fetch 2700 0.29 0.24 0 4050 0 1350
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5400 0.71 1.00 0 4050 0 1350

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID OBJ#(17) (cr=3 r=0 w=0 time=10249 us)
1 INDEX RANGE SCAN OBJ#(37) (cr=2 r=0 w=0 time=10195 us)(object id 37)

********************************************************************************

update tsq$ set blocks=:3,maxblocks=:4,grantor#=:5,priv1=:6,priv2=:7,priv3=:8
where
ts#=:1 and user#=:2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1350 0.25 0.38 0 0 0 0
Execute 1350 1.32 1.98 0 5400 1350 1350
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2700 1.57 2.36 0 5400 1350 1350

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE (cr=4 r=0 w=0 time=838 us)
1 TABLE ACCESS CLUSTER OBJ#(55) (cr=4 r=0 w=0 time=443 us)
1 INDEX UNIQUE SCAN OBJ#(11) (cr=1 r=0 w=0 time=46 us)(object id 11)

********************************************************************************

COMMIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.02 0 0 8 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.01 0.02 0 0 8 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 106 (MEV)
********************************************************************************

ALTER SESSION SET SQL_TRACE = FALSE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.02 0 0 0 0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 106 (MEV)



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.12 0.19 0 0 0 0
Execute 5 6886.48 8228.57 475289 1088093 18364 3714322
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 9 6886.60 8228.76 475289 1088093 18364 3714322

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2700 0.48 0.88 0 0 0 0
Execute 2700 1.51 2.24 0 5400 1350 1350
Fetch 2700 0.29 0.24 0 4050 0 1350
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 8100 2.29 3.37 0 9450 1350 2700

Misses in library cache during parse: 2

5 user SQL statements in session.
2700 internal SQL statements in session.
2705 SQL statements in session.
0 statements EXPLAINed in this session.
********************************************************************************
Trace file: dev05_ora_1204.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
5 user SQL statements in trace file.
2700 internal SQL statements in trace file.
2705 SQL statements in trace file.
7 unique SQL statements in trace file.
25794 lines in trace file.

Thanks,
Ivan




Tom Kyte
February 03, 2004 - 10:10 am UTC

I'd be suspicious of osfun.space.

calling a plsql function 7 times for each row and doing it some 3.7 MILLION times is going to "take a while"


say that function returns in 0.001 second (pretty good that would be).

well, that is over 7 hours at 3.7 MILLION rows times 7 calls at 0.001 second each....


So, let's see osfun.space and see why you think you need to call plsql from sql. We'll want to make that pure sql.

Thanks!

Duncan, February 04, 2004 - 4:08 am UTC

That's great thanks.
However can you just explain how the date conversion below works:

"..
greatest( trunc( dt, 'q' ) , to_date( '01-apr-2002',
'dd-mon-yyyy' ) ) DOD
..."

what is dt?? internal date?
what is q? a format code?

Thanks



Tom Kyte
February 04, 2004 - 7:34 am UTC

DT is the column (sorry, cut and pasted from a test -- dt in your case would be your ons.Date_of_Death)


trunc( ons.Date_of_Death, 'q' )

will truncate to the quarter, just as you are doing procedurally.

you have the additional requirement that if the trunc'ed date is before april 1 2002, it should be april 1 2002 and greatest does that for us....


yes, 'q' is a documented format code for TRUNC and dates in general.
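For example (results shown as comments):

select trunc( to_date('28-sep-2003','dd-mon-yyyy'), 'q' ) from dual;
-- 01-JUL-2003, the first day of that quarter

select greatest( trunc( to_date('15-jan-2001','dd-mon-yyyy'), 'q' ),
                 to_date('01-apr-2002','dd-mon-yyyy') ) from dual;
-- 01-APR-2002, floored at the april 2002 boundary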

Re- BULK INSERT

Ivan, February 04, 2004 - 9:21 am UTC

OSFUN.SPACE FUNCTION

FUNCTION space
************************************************************
-- Function:
-- SPACE(string[,[n][,pad]])
-- Replaces any strings of one or more inter-word blanks in string with n pad
-- characters and returns the result. The default value of n is 1 and the
-- default value of pad is a blank. Leading and trailing blanks in string are
-- removed.
--
-- Examples:
-- space('  edge   of  night  ') ==> 'edge of night'
-- space('  edge   of  night  ',0) ==> 'edgeofnight'
-- space('  edge   of  night  ',2,'-') ==> 'edge--of--night'
-- space(' ') ==> null
--
-- Note:
-- Same syntax and functionality as REXX function of the same name.
-- ***********************************************************
(
i_string IN VARCHAR2,
i_n IN NATURAL,
i_pad IN CHAR
)
RETURN VARCHAR2
IS
v_temp VARCHAR2(4000);
BEGIN
v_temp := LTRIM(RTRIM(i_string));
IF v_temp IS NULL THEN
RETURN NULL;
ELSIF i_n = 0 THEN
RETURN REPLACE(v_temp,' ',NULL);
ELSE
WHILE INSTR(v_temp,'  ') > 0 LOOP
v_temp := REPLACE(v_temp,'  ',' ');
END LOOP;
RETURN REPLACE(v_temp,' ',RPAD(i_pad,i_n,i_pad));
END IF;
END space;

Thanks,
Ivan

Tom Kyte
February 04, 2004 - 5:02 pm UTC

since you are just replacing multiple spaces with a single space, just use:


ltrim(rtrim( replace( replace( replace(  YOUR_DATA ,' ',' @'),'@ ',''),' @',' ')))

(the first replace turns every space into ' @'; within a run of spaces that creates '@ ' pairs, which the second replace deletes, leaving a single ' @' per run; the third replace turns that back into one space)

and stop calling the space function.  It'll be much much faster.

consider:


ops$tkyte@ORA10G> select '"'|| space( '  edge    of   night   ' ) || '"' from dual;
 
'"'||SPACE('EDGEOFNIGHT')||'"'
-----------------------------------------------------------------------------------------------------------------------------------
"edge of night"
 
Elapsed: 00:00:00.04
ops$tkyte@ORA10G> select '"'|| ltrim(rtrim( replace( replace( replace(
  2  '  edge    of   night   '
  3  ,' ',' @'),'@ ',''),' @',' '))) || '"' from
  4  dual;
 
'"'||LTRIM(RTRI
---------------
"edge of night"
 
Elapsed: 00:00:00.03
ops$tkyte@ORA10G> /*
ops$tkyte@ORA10G> drop table t;
ops$tkyte@ORA10G> create table t as select '    edge   of   night   ' x from all_objects;
ops$tkyte@ORA10G> */



select space( x )
from
 t
                                                                                                               
                                                                                                               
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     3142      1.37       1.27          0       3328          0       47107
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3144      1.39       1.29          0       3329          0       47107
                                                                                                               
Rows     Row Source Operation
-------  ---------------------------------------------------
  47107  TABLE ACCESS FULL T (cr=3328 pr=0 pw=0 time=47216 us)
                                                                                                               
                                                                                                               
select ltrim(rtrim( replace( replace( replace(  x ,' ',' @'),'@ ',''),' @',' ')))
from t
                                                                                                               
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.05          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     3142      0.22       0.24          0       3328          0       47107
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3144      0.24       0.30          0       3329          0       47107
 

Re BULK INSERT

Ivan, February 04, 2004 - 4:23 pm UTC

I changed the code to get away from calling the OSFUN.SPACE function as seen below and I then ran out of temp space. I watched as it filled the temp space up to 2.5 gigs and then it errored out.

ERROR at line 1:
ORA-01652: unable to extend temp segment by 1280 in tablespace TEMP01



INSERT /*+ APPEND */ INTO match_co_2
SELECT mc.company_id,
mc.name AS name_alias,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(mc.name),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name,
mc.address_1,
mc.address_2,
mc.city,
mc.district,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS telephone_number,
mc.postal_code,
mc.parent_id_usage,
mc.country_id,
UPPER(SUBSTR(mc.address_1,1,10)) AS address_1_10,
UPPER(SUBSTR(mc.address_1,1,8)) AS address_1_8,
UPPER(SUBSTR(mc.address_1,1,5)) AS address_1_5,
UPPER(SUBSTR(mc.address_2,1,10)) AS address_2_10,
UPPER(SUBSTR(mc.address_2,1,5)) AS address_2_5,
UPPER(SUBSTR(mc.city,1,10)) AS city_10,
UPPER(SUBSTR(mc.city,1,5)) AS city_5,
UPPER(SUBSTR(mc.city,1,3)) AS city_3,
UPPER(SUBSTR(mc.district,1,3)) AS district_3,
UPPER(SUBSTR(mc.district,1,2)) AS district_2,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,15) AS phone_15,
SUBSTR(TRANSLATE(mc.telephone_number,'0123456789- /\+()#','0123456789'),1,12) AS phone_12,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,40)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_40,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,20)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_20,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,13)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_13,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,7)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_7,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,3)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_3,
SYSDATE,
SUBSTR(LTRIM(postal_code || ' '),1,INSTR(postal_code || ' ',' ')) as postal_1_word,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,30)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_30,
REPLACE(REPLACE (REPLACE(TRANSLATE(UPPER(SUBSTR(mc.name,1,10)),'-(),',' '),' ',' ^'),'^ ',''),' ^',' ') AS name_10
FROM companies mc;

Thanks,
Ivan




Tom Kyte
February 04, 2004 - 6:05 pm UTC

any indexes on this table? we build the index segments for the appended data off to the side (in temporary segments) and then merge them in after the fact -- that is where your temp space is going.

sonali, March 11, 2004 - 2:09 pm UTC

I have a general design question..
say I am doing this insert

Insert into table t1
select * from table t2 where t2.entity in (1,2,3,4......n);

The entity values (1,2,3,4,...,n) are coming from an HTML page as constants, and the number of constants in this IN clause can be up to 1000 numbers... maybe more.

Is it right to do the insert this way? Is there a better way to do this? Should I create a temp table, put these constants in that table, and then read from there?
Will there be any locks?

What's your suggestion?
Thanks so much

Tom Kyte
March 11, 2004 - 2:17 pm UTC

I would put the constants into a global temp table, yes.

will there be any locks? sure, on unique key entries on table T1.
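A sketch of that approach (gtt_entities is a made-up name; the temporary table is created once at install time, not per call):

create global temporary table gtt_entities
( entity number )
on commit delete rows;

-- at runtime, bind-insert each constant from the page...
insert into gtt_entities values ( :x );

-- ...then do the whole thing in one statement
insert into t1
select * from t2
 where t2.entity in ( select entity from gtt_entities );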

A reader, March 11, 2004 - 5:30 pm UTC

Hi Tom, is there a way to optimize

insert into t select * from t2;

Every day the whole table must be inserted into a historic table,

something like insert into t bulk select * from t2;

without disabling logging or indexes, in Standard Edition 9i.

Please :)

Tom Kyte
March 12, 2004 - 9:28 am UTC

insert /*+ append */ into t select * from t2;


would avoid UNDO generation for the table (but not the indexes) and would minimize the redo generation for the indexes.

append does *not* bypass redo
append does maintain indexes
append minimizes undo (and therefore redo) generation for the table and indexes.

A reader, March 12, 2004 - 4:23 pm UTC

Thanks Tom,

Sorry, but I'm not understanding -- one more question.
My database is in archivelog mode.
If I do this

insert /*+ append */ into t select * from t2;

I think I'll have to do a full backup
because the information will not be saved in the archive log files.
Or am I wrong?

Thanks again.

Tom Kyte
March 12, 2004 - 7:51 pm UTC

that's wrong.

append only bypasses UNDO generation (which is 100% safe)...

if you flip the table into NOLOGGING mode, *then* and only then might you bypass redo generation as well.


measure the redo generated, check the unrecoverable time, you can see this easily.
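For example, from your own session -- run this before and after the insert and subtract:

select s.value redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';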


A reader, March 15, 2004 - 8:11 am UTC

Thanks Tom

OK

Ramanan, March 21, 2004 - 10:02 pm UTC

Dear Tom,
Do PL/SQL records have any built-in attribute to iterate
over each record? I tried the following and got errors.
How do I loop over each record?

SQL>  declare
  2    type t is record(enm emp.ename%type,salary emp.sal%type);
  3    r t;
  4   begin
  5    select ename,sal into r from emp;
  6     for i in 1..r.count loop
  7       dbms_output.put_line(r(i).enm||chr(9)||r(i).salary);
  8     end loop;
  9   end;
 10  /
   for i in 1..r.count loop
                 *
ERROR at line 6:
ORA-06550: line 6, column 18: 
PLS-00302: component 'COUNT' must be declared 
ORA-06550: line 6, column 4: 
PL/SQL: Statement ignored 

Please do reply.
 

Can APPEND and NOLOGGING be used here

A reader, May 06, 2004 - 9:08 am UTC

Hi Tom,

Please consider this:

declare
l_data myArrayType;
begin
select cast ( multiset ( select ename, hiredate, sal
from emp
)
as myArrayType
)
into l_data
from dual;
for i in 1 .. l_data.count
loop
l_data(i).ename := 'X' || l_data(i).ename;
end loop;

insert /*+ APPEND NOLOGGING PARALLEL(a,4) */
into emp2 a ( ename, hiredate, sal )
select /*+ PARALLEL(b,4) */
* from TABLE( cast (l_data as myArrayType) ) b;
end;
/

Will the hints APPEND, NOLOGGING, PARALLEL be effective here? I know that if there were a VALUES clause they would not have been.

Thanks and regards,

Tom Kyte
May 06, 2004 - 9:22 am UTC

I would never consider using an array big enough to use parallel, no.

APPEND would work -- write above the HWM.

NOLOGGING is *not a hint*. NOLOGGING is an attribute of a segment (alter table t nologging).


I would not be using nologging except in very special, well considered, coordinated with the DBA cases.

RETURNING CLAUSE with BULK COLLECT

Ik, August 10, 2004 - 12:22 pm UTC

Tom,

This is about one of the examples you provided earlier. I have a situation where I replaced procedural code (looping through a cursor and inserting records one by one) in PL/SQL with a single SQL statement (INSERT INTO table SELECT * FROM table2).

The old code has a counter which increments after every record. This count is tracked and is used later in the program.

With the single INSERT I should be able to capture the number of records inserted. Using a RETURNING clause with BULK COLLECT throws a compilation error.

CREATE TABLE tst ( t1 NUMBER);

DECLARE
type tabtyp IS TABLE OF NUMBER;
tab tabtyp;
cnt NUMBER := 0;
BEGIN
INSERT INTO tst SELECT ROWNUM FROM all_objects WHERE ROWNUM <= 5
RETURNING 1 BULK COLLECT INTO tab;
END;

ORA-06550: line 7, column 4:
PL/SQL: ORA-00933: SQL command not properly ended
ORA-06550: line 6, column 4:
PL/SQL: SQL Statement ignored


Thanks a ton for your time.

regards
ik

Tom Kyte
August 10, 2004 - 3:46 pm UTC

the bulk collect isn't valid there, you would bulk fetch and forall i insert. array.count tells you exactly how many "rows" there were.
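A minimal sketch of that pattern against the tst table above (note that after a plain INSERT ... SELECT, sql%rowcount reports the row count just as well):

declare
    type numtab is table of number;
    l_vals numtab;
begin
    select rownum
      bulk collect into l_vals
      from all_objects
     where rownum <= 5;

    forall i in 1 .. l_vals.count
        insert into tst ( t1 ) values ( l_vals(i) );

    dbms_output.put_line( l_vals.count || ' rows inserted' );
end;
/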

Populate to different tables from one big table.

Sean, August 20, 2004 - 4:52 pm UTC

Hi Tom,

I have a table with 6M records and a few hundred columns. There is an index on the update_time column. The table is analyzed. The default optimizer_mode is CHOOSE.

I want to populate from this table into different tables according to the year of update_time.

My source table is called Source; my target tables are called target_1999, target_2000, ....


When I explain this query, it uses the index and also indicates 300K records would be inserted.

------------------------------------------
insert into target_1999(c1)
select c1
from source
where trunc(update_time) < '1-JAN-2000';
------------------------------------------


But when I add all the columns for these two tables, it always uses a full table scan no matter what I do (I used the index hint and the rule hint).

----------------------------------------
insert into target_1999(c1, c2 ...)
select c1, c2 ...
from source
where trunc(update_time) < '1-JAN-2000';
----------------------------------------

(1) I don't know why the second query does not use the index though the two queries are quite the same.
(2) Do you have any suggestions to speed up the insert? I used the "parallel" hint and changed the target table to nologging.


Tom Kyte
August 21, 2004 - 11:24 am UTC

a table with a "couple of hundred" columns will be stored in rowpieces of 255 columns -- meaning each and every row is "chained" over multiple blocks in all probability.

getting column "c1" (the first column) is very easy -- it is on the head row piece. getting column 300 is "hard" -- not only would you index range scan, and then table access by index rowid (to get the head rowpiece), you would then do another access to get the second piece and maybe the 3rd, 4th, whatever.
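A rough way to check for that (CHAIN_CNT is filled in by ANALYZE, not dbms_stats, and the session statistic climbs every time a fetch has to follow a row piece into another block):

analyze table source estimate statistics;

select chain_cnt from user_tables where table_name = 'SOURCE';

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'table fetch continued row';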


If I was to split a table out into many other tables, i would be using a multi-table insert


insert
when (deptno=10) then
into emp_10 ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
values ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
when (deptno=20) then
into emp_20 ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
values ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
when (deptno=30) then
into emp_30 ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
values ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
else
into leftover ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
values ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
select * from external_table e
/


like that. one pass.

Bulk collect TKprof

Sadiq, September 11, 2004 - 8:44 am UTC

When we use BULK COLLECT with LIMIT, this is my observation.

Total records - 100,000
Limit set - 1000
Then Fetch in tkprof shows 10 (but if you don't use limit it is 1).
Can you confirm my observation?


Tom Kyte
September 11, 2004 - 10:45 am UTC

huh? you just confirmed your own observation.

Tell us -- if you use limit 1000, how many times did your code call fetch?

and if you didn't use limit (with 100,000 elements that would be "not recommended", you would want to use a reasonable limit of say between 100 and 500), how many times did you call fetch?

but I think you either used limit 10,000 OR you saw 100 fetches OR you had 10,000 rows..... (10 fetches of 1000 records would bring back 10,000 rows...)

mistake

Sadiq, September 23, 2004 - 9:37 am UTC

Typing mistake, yes. It showed 100 fetches (not 10).
If that is so, is my observation right?

Tom Kyte
September 24, 2004 - 8:36 am UTC

correct, tkprof will accurately reflect how many times the server was asked to fetch.
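Easy to verify with a small block -- the FETCH count in the trace will be roughly rows/limit (one extra, empty fetch when the row count is an exact multiple of the limit):

declare
    cursor c is select object_name from all_objects;
    type nametab is table of varchar2(30);
    l_names   nametab;
    l_fetches number := 0;
begin
    open c;
    loop
        fetch c bulk collect into l_names limit 1000;
        l_fetches := l_fetches + 1;
        exit when c%notfound;
    end loop;
    close c;
    dbms_output.put_line( l_fetches || ' fetch calls' );
end;
/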

Question about direct insert

Jen, November 08, 2004 - 11:03 am UTC

In theory, direct insert will be the fastest way. My SELECT query returns 17K rows with 20 columns by joining 9 tables. The SELECT query itself returns the whole set of data in 2 minutes. However, when I put the same query into the statement 'insert into ... select ... from', it takes forever to finish. The developer killed the SQL after three hours. I have used your suggestion in this thread to use the hint /*+ append */, but I didn't see a huge improvement in terms of running time. Are there any possible reasons for the slow insert that you can think of? Thanks in advance.




Tom Kyte
November 08, 2004 - 5:12 pm UTC

how much data is 17k * 20 columns?

are the plans the same -- when you run the select versus insert as select? (explain plan will tell)

Bulk insert of records , using something like a user defined record

Gunjeet Singh, November 18, 2004 - 6:44 pm UTC

Hi Tom,

I'm using 9.2.0.5. Database is a development DB in non-archivelog mode.

I have a table with a column of type mdsys.sdo_geometry.
I need to load very large amount of data (around 50 million records - maybe more) into this table. I can't use sqlldr in direct mode to load data into this table as sqlldr direct mode does not support the spatial object type.

So, I first load the data in a staging table (that does not have any object type columns) using sqlldr in direct mode. This is very fast. Then I use a stored procedure to transfer the data to the real table. The procedure has an insert that looks like :

insert /*+ append */ into <table> select ...

This has greatly helped the transfer routine [ no for loops, no commits within loops]. For 9 million records, the sqlldr in direct mode takes 2.5 mins and the transfer is taking 25 to 30 mins. I wanted to improve this further using parallel dml but the 9i docs indicate
"Parallel DML can be done on tables with object columns provided you are not touching the object columns."

I do need to populate the geometry column as part of this insert, so based on Oracle docs parallel dml is out.

Can you suggest any other performance improvements for this case ?

I'm currently experiementing with 9 million records but eventually I need to handle much larger volume (at least 50 million per table).

Please advise.

Thanks,

Gunjeet


Tom Kyte
November 19, 2004 - 10:14 am UTC

have you tried DIY parallelism? you would not insert /*+ APPEND */, but rather use conventional path inserts -- but perhaps the ability to do it in parallel will far outweigh any additional work the conventional path does.

using split.sql:

set echo off

set verify off
define TNAME=&1
define CHUNKS=&2


select grp,
dbms_rowid.rowid_create( 1, data_object_id, lo_fno, lo_block, 0 ) min_rid,
dbms_rowid.rowid_create( 1, data_object_id, hi_fno, hi_block, 10000 ) max_rid
from (
select distinct grp,
first_value(relative_fno) over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) lo_fno,
first_value(block_id ) over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) lo_block,
last_value(relative_fno) over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) hi_fno,
last_value(block_id+blocks-1) over (partition by grp order by relative_fno, block_id
rows between unbounded preceding and unbounded following) hi_block,
sum(blocks) over (partition by grp) sum_blocks
from (
select relative_fno,
block_id,
blocks,
trunc( (sum(blocks) over (order by relative_fno, block_id)-0.01) /
(sum(blocks) over ()/&CHUNKS) ) grp
from dba_extents
where segment_name = upper('&TNAME')
and owner = user order by block_id
)
),
(select data_object_id from user_objects where object_name = upper('&TNAME') )
/



you can get rowid ranges that cover your stage table:

big_table@ORA9IR2> @split big_table 4

GRP MIN_RID MAX_RID
---------- ------------------ ------------------
0 AAAHzcAALAAAAAJAAA AAAHzcAALAAAA8ICcQ
1 AAAHzcAALAAAA8JAAA AAAHzcAALAAAB4ICcQ
2 AAAHzcAALAAAB4JAAA AAAHzcAALAAACwICcQ
3 AAAHzcAALAAACwJAAA AAAHzcAALAAADwICcQ

big_table@ORA9IR2> @split big_table 8

GRP MIN_RID MAX_RID
---------- ------------------ ------------------
0 AAAHzcAALAAAAAJAAA AAAHzcAALAAAAeICcQ
1 AAAHzcAALAAAAeJAAA AAAHzcAALAAAA8ICcQ
2 AAAHzcAALAAAA8JAAA AAAHzcAALAAABaICcQ
3 AAAHzcAALAAABaJAAA AAAHzcAALAAAB4ICcQ
4 AAAHzcAALAAAB4JAAA AAAHzcAALAAACQICcQ
5 AAAHzcAALAAACQJAAA AAAHzcAALAAACwICcQ
6 AAAHzcAALAAACwJAAA AAAHzcAALAAADQICcQ
7 AAAHzcAALAAADQJAAA AAAHzcAALAAADwICcQ

8 rows selected.

big_table@ORA9IR2> @split big_table 16

GRP MIN_RID MAX_RID
---------- ------------------ ------------------
0 AAAHzcAALAAAAAJAAA AAAHzcAALAAAAOICcQ
1 AAAHzcAALAAAAOJAAA AAAHzcAALAAAAeICcQ
2 AAAHzcAALAAAAeJAAA AAAHzcAALAAAAsICcQ
3 AAAHzcAALAAAAsJAAA AAAHzcAALAAAA8ICcQ
4 AAAHzcAALAAAA8JAAA AAAHzcAALAAABKICcQ
5 AAAHzcAALAAABKJAAA AAAHzcAALAAABaICcQ
6 AAAHzcAALAAABaJAAA AAAHzcAALAAABoICcQ
7 AAAHzcAALAAABoJAAA AAAHzcAALAAAB4ICcQ
8 AAAHzcAALAAAB4JAAA AAAHzcAALAAACAICcQ
9 AAAHzcAALAAACAJAAA AAAHzcAALAAACQICcQ
10 AAAHzcAALAAACQJAAA AAAHzcAALAAACgICcQ
11 AAAHzcAALAAACgJAAA AAAHzcAALAAACwICcQ
12 AAAHzcAALAAACwJAAA AAAHzcAALAAADAICcQ
13 AAAHzcAALAAADAJAAA AAAHzcAALAAADQICcQ
14 AAAHzcAALAAADQJAAA AAAHzcAALAAADgICcQ
15 AAAHzcAALAAADgJAAA AAAHzcAALAAADwICcQ

16 rows selected.


and you can use that to do something like:


big_table@ORA9IR2> l
1 select 'dbms_job.submit( :n, ''insert into t select * from t2 where rowid between chartorowid('''
2 ||min_rid||''') and chartorowid(''' || max_rid || ''');'' )'
3 from (
4 select grp,
5 dbms_rowid.rowid_create( 1, data_object_id, lo_fno, lo_block, 0 ) min_rid,
6 dbms_rowid.rowid_create( 1, data_object_id, hi_fno, hi_block, 10000 ) max_rid
7 from (
8 select distinct grp,
9 first_value(relative_fno) over (partition by grp order by relative_fno, block_id
10 rows between unbounded preceding and unbounded following) lo_fno,
11 first_value(block_id ) over (partition by grp order by relative_fno, block_id
12 rows between unbounded preceding and unbounded following) lo_block,
13 last_value(relative_fno) over (partition by grp order by relative_fno, block_id
14 rows between unbounded preceding and unbounded following) hi_fno,
15 last_value(block_id+blocks-1) over (partition by grp order by relative_fno, block_id
16 rows between unbounded preceding and unbounded following) hi_block,
17 sum(blocks) over (partition by grp) sum_blocks
18 from (
19 select relative_fno,
20 block_id,
21 blocks,
22 trunc( (sum(blocks) over (order by relative_fno, block_id)-0.01) /
23 (sum(blocks) over ()/&CHUNKS) ) grp
24 from dba_extents
25 where segment_name = upper('&TNAME')
26 and owner = user order by block_id
27 )
28 ),
29 (select data_object_id from user_objects where object_name = upper('&TNAME') )
30* )
big_table@ORA9IR2> /

'DBMS_JOB.SUBMIT(:N,''INSERTINTOTSELECT*FROMT2WHEREROWIDBETWEENCHARTOROWID('''|
-------------------------------------------------------------------------------
dbms_job.submit( :n, 'insert into t select * from t2 where rowid between charto
rowid('AAAHzcAALAAAAAJAAA') and chartorowid('AAAHzcAALAAAA8ICcQ');' )

dbms_job.submit( :n, 'insert into t select * from t2 where rowid between charto
rowid('AAAHzcAALAAAA8JAAA') and chartorowid('AAAHzcAALAAAB4ICcQ');' )

dbms_job.submit( :n, 'insert into t select * from t2 where rowid between charto
rowid('AAAHzcAALAAAB4JAAA') and chartorowid('AAAHzcAALAAACwICcQ');' )

dbms_job.submit( :n, 'insert into t select * from t2 where rowid between charto
rowid('AAAHzcAALAAACwJAAA') and chartorowid('AAAHzcAALAAADwICcQ');' )




just have job_queue_processes set high enough (or spool inserts to scripts and run N copies of sqlplus, whatever)


Bulk insert of records

Gunjeet Singh, November 18, 2004 - 7:35 pm UTC


Hi Tom,

In my follow-up question, I forgot to mention that I'm inserting into a partitioned table (150 partitions).
No indexes are created on the table in which the insert
is happening.

Thanks,

Gunjeet

Tom Kyte
November 19, 2004 - 10:15 am UTC

oh, why not use a predicate then that selects all data for a given partition and run N of those with insert /*+ append */ concurrently?
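Roughly like this -- partition, table and column names are made up here, one session per partition:

-- session 1:
insert /*+ append */ into real_table partition ( part_2004_01 )
select * from stage where part_key = 1;
commit;

-- session 2, running at the same time:
insert /*+ append */ into real_table partition ( part_2004_02 )
select * from stage where part_key = 2;
commit;

-- ... and so on; naming the partition gets a partition-level lock,
-- so the direct path loads do not serialize on the table lock.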

A reader, November 28, 2004 - 8:39 pm UTC

Tom,

I have a 5 million row table in one schema. I would like to create a test table in a new schema (using CTAS). Could you please mention a few checks that should be planned before a bulk insert, delete, truncate, or index rebuild? I am looking for things like rollback segments, sort_area_size, etc.

Tom Kyte
November 29, 2004 - 8:01 am UTC

ctas and create index won't really use rollback.
insert /*+ append */ won't either (except if the table is indexed).

nothing "extraordinary" needs to be "checked", just the normal things. to copy and index 5 million rows will generate redo in archivelog mode -- that would be one thing to consider

How to make an insert by select statement?

Andrew, December 26, 2004 - 3:03 pm UTC

Hello, Tom!
I have a table:
create table t
(prod_no number(4),
nwpr_date date,
branch_no number(3),
qty number(7));

I want to insert data into this table from another table, like this:

INSERT INTO T
SELECT PROD_NO,NWPR_DATE,BRANCH_NO,SUM(X_QTY)
FROM T2
GROUP BY PROD_NO,NWPR_DATE,BRANCH_NO;
and eventually the data in table T will be:
PROD_NO NWPR_DATE BRANCH_NO QTY
======= ========= ========= ===
1 01/11/04 1 20
1 01/11/04 2 30
1 02/11/04 1 44
1 02/11/04 2 45
But the point is that I also have to insert into this table, per prod_no and nwpr_date, the total qty with branch_no = 0 -- and the value 0 doesn't exist in table T2. It will look like:

PROD_NO NWPR_DATE BRANCH_NO QTY
======= ========= ========= ===
1 01/11/04 1 20
1 01/11/04 2 30
1 01/11/04 0 50
1 02/11/04 1 44
1 02/11/04 2 45
1 02/11/04 0 89
How can I combine such an insert with the above select?
Thank you .
Andrew

Tom Kyte
December 26, 2004 - 5:49 pm UTC

how's about you give me create table/insert into table statements so I don't spend 1/2 my life turning:

PROD_NO NWPR_DATE BRANCH_NO QTY
======= ========= ========= ===
1 01/11/04 1 20
1 01/11/04 2 30
1 02/11/04 1 44
1 02/11/04 2 45


into insert into's and figuring out what columns I'd need...


(hint: the answer will include a group by rollup and/or grouping sets if you want to figure it out on your own)
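To make the hint concrete, a sketch of the rollup approach (using the column names from the question; grouping() marks the subtotal row so it can be mapped to branch_no = 0):

INSERT INTO T
SELECT PROD_NO, NWPR_DATE,
       DECODE( GROUPING(BRANCH_NO), 1, 0, BRANCH_NO ) BRANCH_NO,
       SUM(X_QTY)
  FROM T2
 GROUP BY PROD_NO, NWPR_DATE, ROLLUP(BRANCH_NO);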


Aalbert, January 11, 2005 - 11:42 am UTC

Tom, you said that "the bulk collect isn't valid there, you would bulk fetch and forall i insert. array.count tells you exactly how many 'rows' there were." in your response to the review from Ik from BG, India on August 10, 2004.

Can you explain why the bulk collect is not valid?

CREATE TABLE test_dummy (x NUMBER);

DECLARE
TYPE NumberList IS TABLE OF NUMBER;
m_x_list NumberList;
BEGIN
INSERT INTO test_dummy VALUES(10);
INSERT INTO test_dummy VALUES(3) RETURNING x BULK COLLECT INTO m_x_list;
UPDATE test_dummy SET x = 10 RETURNING x BULK COLLECT INTO m_x_list;
INSERT INTO test_dummy (SELECT 3 FROM dual) RETURNING x BULK COLLECT INTO m_x_list;
END;
/

I get ERROR at line 8:
ORA-06550: line 8, column 40: PL/SQL: ORA-00933: SQL command not properly ended
ORA-06550: line 8, column 2: PL/SQL: SQL Statement ignored

I don't understand why the SELECT makes so much of a difference, my feeling is that it shouldn't make a difference at all. Can you explain why this does not work?

Tom Kyte
January 11, 2005 - 1:39 pm UTC

it makes a difference because it is not a supported, valid construct.

it doesn't work because it is not supported syntax. I personally don't know the reasons "behind" every nuance, I just try to keep up on the nuances.
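For comparison, RETURNING ... BULK COLLECT is supported when the insert uses a VALUES clause -- including inside FORALL; it is only the INSERT ... SELECT form that rejects it. A minimal sketch against the test_dummy table from the question:

DECLARE
    TYPE NumberList IS TABLE OF NUMBER;
    m_in  NumberList := NumberList(3, 3, 3);
    m_out NumberList;
BEGIN
    FORALL i IN 1 .. m_in.COUNT
        INSERT INTO test_dummy (x) VALUES (m_in(i))
        RETURNING x BULK COLLECT INTO m_out;
    dbms_output.put_line(m_out.COUNT || ' values returned');
END;
/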

Aalbert, January 12, 2005 - 10:01 am UTC

I was hoping it was some kind of a syntax error, but at least it's clear to me now.
It seems to me that it would be a useful addition; especially when returning keys, newly created by a sequence-trigger combo, for further use. Do you know of any plans to incorporate such a construct?

As for the nuances, it seems to me you're doing a pretty good job keeping up with them and - more importantly - explaining them. Thank you.

Tom Kyte
January 12, 2005 - 10:05 am UTC

don't know if it is in the plans -- but you would file enhancement requests via support -- the only way to get what you want in there is to formally "ask for it" in the long run....

OK

Raju, February 03, 2005 - 12:31 pm UTC

Dear Tom,
I am not able to test for SQL%bulk_rowcount cursor attribute.
How to correct this?

SQL> create table t(x varchar2(3))
  2  /

Table created.

SQL> declare
  2  type typ is table of varchar2(20);
  3  vals typ := typ('dogs','cats','rats','hens');
  4  begin
  5  forall i in vals.first .. vals.last
  6   insert into t values(vals(i));
  7  if (sql%bulk_rowcount = 0) then
  8   dbms_output.put_line('No rows have been inserted');
  9  else
 10   dbms_output.put_line('All rows have been inserted');
 11  end if;
 12  end;
 13  /
if (sql%bulk_rowcount = 0) then
                      *
ERROR at line 7:
ORA-06550: line 7, column 23: 
PLS-00306: wrong number or types of arguments in call to '=' 
ORA-06550: line 7, column 1: 
PL/SQL: Statement ignored 



 

Tom Kyte
February 03, 2005 - 2:41 pm UTC

well, first of all, the check is *not needed* (think about it, only if you did "insert into T select ..." would you need it)

and secondly, sql%bulk_rowcount is an array.


for i in vals.first .. vals.last
loop
    dbms_output.put_line( 'the ' || i || '''th insert inserted ' ||
                          sql%bulk_rowcount(i) || ' rows...' );
end loop;



BulkInsertion

selva, February 07, 2005 - 1:13 am UTC

Hi tom,


Really, your service is very helpful for us.

OK, coming to my point: I have a table with 6 million rows that does not have any keys, but for business purposes I need to take 2 columns from this table and, using the substr function, merge with another table. My requirement is: if I find a substr match then I need to check some more conditions and finally insert the data into an archive table; if a match is not found, in that case also insert into the archive table. I wrote a query for this, but for 6 million rows it does not run fast.
Can you please suggest the best approach for this situation? Please see the query:

insert into tblsap_history(select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a,ds_onebill d
where
(ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%' and a.source_system = 1)
or
(
d.bill_profile='EXCLUDE'
and a.onebill_account=d.onebill_account
)
)
OR
(ts.css_database,ts.sap) not in (select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
)
and required_by <= (sysdate-180)
)
);

Thanks in advance.


Tom Kyte
February 07, 2005 - 4:35 am UTC

this is the third place you've put this. go look at the others where I've commented on this. and please - don't feel obligated to post it all over the place.

Bulk insertion

selva, February 07, 2005 - 4:15 am UTC

I think there is no way to avoid the substr function, because I want to find the account through it. For example:

table1.sap    table1.css_database
CM            1234578
EM            12341234
AM            12367890

Now I have another table with column values like:

table2.source_account
CM1234578
EM12341234
AM12367890

In this case, how can we replace the substr function?
Can you please look at this query for any possibilities?
insert into tblsap_history(select * from tblsap ts
where
(
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
where a.source_system = 1
and (ts.taig like 'IS%' or ts.taig like 'TAIG-No Change In Bill Group%')
)
OR
(ts.css_database,ts.sap) not in (select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a
)
OR
(ts.css_database,ts.sap) in ( select SUBSTR(source_account, 1, 2),SUBSTR(source_account, 3, 18)
from ds_account a ,ds_onebill d
where d.bill_profile='EXCLUDE'
and a.onebill_account=d.onebill_account
)
and required_by <= (sysdate-180)
)
);


Tom Kyte
February 07, 2005 - 5:00 am UTC

stop putting this all over the place -- really.

Bulk collect and memory usage

Praveen, February 07, 2005 - 9:40 am UTC

Hi Tom,

Ref: Question of the thread

"Using either scalar arrays or object collections to bulk insert a huge table with 50 million records with a 50+ columns is sure going eatup all the system memory and thus stalling other transactions, if not breaking up itself at one point when no more memory is available. SELECT...INSERT do the job as batches hence reliable."

Does that make any sense, Tom?

Praveen

Tom Kyte
February 07, 2005 - 10:41 am UTC

yes, if you fill an array up with 50,000,000 items, you are sort of going to eat up a considerable amount of memory.

how can we pass a record to a procedure or function

A reader, March 09, 2005 - 4:50 pm UTC

Hi

We have a code like this

declare
....
procedure prc_x (p1 number, p2 number, p3 number... until p50)
...
procedure prc_y (p_f1 varchar(10), p_f2 varchar(10))
....
procedure prc_z (p_f1 varchar(10), p_f2 varchar(10))
....
begin
for i in (select * from a, b where a.x = b.x)
loop
if i.y = 1 then prc_y;
else prc_z;
end if;
prc_x(p1, p2, p3, ... until p50);
end loop;
end;
/

Instead of using 50 variables for p1 - p50 I created a record, but I wonder: how do you pass a record to a procedure?

THX

Tom Kyte
March 09, 2005 - 6:29 pm UTC


ops$tkyte@ORA9IR2> declare
  2          cursor c is select * from dual;
  3
  4          procedure foo( l_data in c%rowtype )
  5          is
  6          begin
  7                  null;
  8          end;
  9  begin
 10          for x in c
 11          loop
 12                  foo(x);
 13          end loop;
 14  end;
 15  /

PL/SQL procedure successfully completed.
 

insert records

A reader, March 11, 2005 - 7:57 am UTC

Hi

There is an example in the documentation about how to insert with records

Chapter 5 Using PL/SQL Collections and Records

it does not work

DECLARE
dept_info dept%ROWTYPE;
BEGIN
dept_info.deptno := 70;
dept_info.dname := 'PERSONNEL';
dept_info.loc := 'DALLAS';
INSERT INTO dept VALUES dept_info;
END;
/

ERROR at line 7:
ORA-06550: line 7, column 28:
PLS-00518: This INSERT statement requires VALUES clause containing a parenthesised list of values
ORA-06550: line 7, column 4:
PL/SQL: SQL Statement ignored


DECLARE
dept_info dept%ROWTYPE;
BEGIN
dept_info.deptno := 70;
dept_info.dname := 'PERSONNEL';
dept_info.loc := 'DALLAS';
INSERT INTO dept VALUES (dept_info);
END;
/

ERROR at line 7:
ORA-06550: line 7, column 29:
PLS-00382: expression is of wrong type
ORA-06550: line 7, column 4:
PL/SQL: SQL Statement ignored

How can we insert using records? Is the documentation wrong?





why this is not supported

A reader, March 14, 2005 - 3:58 am UTC

Hi

I am curious why Oracle supports bulk collect into record but not the normal fetch


declare
cursor c_emp is select ename, sal from emp;
type r_emp is table of c_emp%rowtype;
record_emp r_emp;
begin
open c_emp;
loop
fetch c_emp into record_emp;
exit when c_emp%notfound;
dbms_output.put_line(record_emp.ename);
end loop;
close c_emp;
end;
/ -- DOES NOT WORK, PLS-00597: expression 'RECORD_EMP' in the INTO list is of wrong type


declare
cursor c_emp is select ename, sal from emp;
type r_emp is record (ename varchar2(10), sal number);
record_emp r_emp;
begin
open c_emp;
loop
fetch c_emp into record_emp ;
exit when c_emp%notfound;
dbms_output.put_line(record_emp.ename);
end loop;
close c_emp;
end;
/ -- WORKS

declare
cursor c_emp is select ename, sal from emp;
type r_emp is table of c_emp%rowtype;
record_emp r_emp;
begin
open c_emp;
loop
fetch c_emp bulk collect into record_emp;
for i in 1..record_emp.count
loop
dbms_output.put_line(record_emp(i).ename);
end loop;
exit when c_emp%notfound;
end loop;
close c_emp;
end;
/ -- WORKS

HOW SO?!

Tom Kyte
March 14, 2005 - 8:05 am UTC

how not?

I mean -- why would you expect the first one to work? You are fetching "A ROW" into "AN ARRAY"

when you fetch "A ROW" into "A RECORD" it works

when you bulk collect many "ROWS" into an "ARRAY" it works

when you fetch "A ROW" into an "ARRAY" -- it fails since a row is not an array.

returning bulk collect, possible problem

A reader, March 15, 2005 - 10:33 am UTC

Hi

I would like to know how we can limit the number of rows being returned into a collection, something similar to LIMIT for fetch operations.

For example if we do

delete X
returning ... bulk collect into .....

if X has 1 million rows we might get ORA-04030. Is there any sort of workaround to avoid this, or should we code something like

delete X
where rownum <= SIZE_ARRAY
returning ... bulk collect into .....

Tom Kyte
March 15, 2005 - 10:40 am UTC

ops$tkyte@ORA9IR2> create table emp as select empno from scott.emp;
Table created.
 
ops$tkyte@ORA9IR2> declare
  2          type array is varray(10) of number;
  3          l_data array;
  4  begin
  5          delete from emp where rownum <= 5 returning empno bulk collect into l_data;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> rollback;
 
Rollback complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          type array is varray(10) of number;
  3          l_data array;
  4  begin
  5          delete from emp returning empno bulk collect into l_data;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-22160: element at index [10] does not exist
ORA-06512: at line 5
 



I would not use "where rownum", that changes the entire semantics of the query 

what to do if we get ORA-22160

A reader, March 15, 2005 - 10:48 am UTC

If we get ORA-22160, what can we do? How would we handle it? I think we must split the delete into several small delete statements (dynamically), no?

Tom Kyte
March 15, 2005 - 11:49 am UTC

no - it means that your fundamental assumption that 10 or fewer rows would be deleted has been violated, and you need to change your algorithm accordingly.

Give me a real world use case to work with -- whereby you would delete hundreds or thousands of rows and need the values you just deleted and tell us what you would do with them.

Then I would turn the approach around and say you probably want to process the records before you delete them in SETS using nice bulk operations and then delete them.

what to do if we get ORA-22160

A reader, March 16, 2005 - 2:56 am UTC

I am ashamed to tell why I need to store the before-delete rows.

I am updating rows in a partitioned table; however, I will be updating the partition keys, so I had to tell the DBAs here (I am a DBA myself, by the way) that they need to enable row movement on the table. The answer surprised me: they won't enable it, because moving many rows from partition to partition will "SCREW UP" the table (they won't say why). I asked for a technical reason, because I don't consider that a reasonable one; in fact Oracle must do something like a DELETE/INSERT when we update a partition key.

So they told me that I need to code the row movement myself, which is equivalent to a DELETE operation then an INSERT. So what I am trying to do is store what I deleted in an array (the good news is the number of rows to be deleted won't be large, probably not over 5000), then use a host array to insert what I deleted, substituting the old partition key with the new one.

I am amazed, but there is nothing I can do.

Regards

Tom Kyte
March 16, 2005 - 8:29 am UTC

we do a delete + insert under the covers, that is the very REASON for the enable row movement - the rowid changes since we delete and then re-insert.


do an INSERT and then a DELETE would be my advice (short of getting a new dba team of course)
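A minimal sketch of that insert-then-delete order (the table and column names are made up; :old_key and :new_key stand for the partition-key values being changed):

insert into part_tab ( id, part_key, payload )
select id, :new_key, payload
  from part_tab
 where part_key = :old_key;

delete from part_tab
 where part_key = :old_key;

commit;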

A reader, March 17, 2005 - 4:45 am UTC

IS IT POSSIBLE TO LOAD A DATA FROM OTEHR DATABASE(sql-server,ms-aceess,ms-excel) TO ORACLE(9i)? IF YES COULD U EXPLAIN THE STEPS IN DETAIL PLZ

Tom Kyte
March 17, 2005 - 8:45 am UTC

capslock totally stuck?

PLZ? german postal codes? not sure what to make of that.

And this "U" person, I really want to meet him or her (do you have their email? I get lots of requests for them, lots and lots)

Since "U" isn't around, will I be ok answering?

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4406709207206#18830681837358
is one approach... you can query those sources using odbc.

all of them can create CSV (comma separated values) files and sqlldr or external tables can be used easily as well.

Bulk updates

Kanth, April 27, 2005 - 9:30 am UTC

Tom,
I have bulk collected all empnos & salaries from the EMP table into arrays. Now I want to check the EMP2 table: if the same empno exists, then update the salary for all such rows (bulk updates), else insert using empno and salary.
A MERGE statement will not work, I guess, as I need both bulk insert and bulk update.

Shall i use (pseudo code)
For each row
Select from EMP2 table for empno(i)
if not found then, insert
if found then, update

Or can you give another approach for this

Thank you,


Tom Kyte
April 27, 2005 - 9:35 am UTC

why bulk collect out just to put back in? why not just merge in the first place using a single sql statement.

Bulk updates

Kanth, April 27, 2005 - 9:56 am UTC

I am not bulk collecting in the same procedure; these are bulk collected into a package array (some business requirement) and used by a couple of other functions. I have to make use of the same array values and insert into a table depending on whether they exist or not.

Thank you

Tom Kyte
April 27, 2005 - 10:03 am UTC

are they SQL collections or PLSQL index by tables?

(best if they are SQL collections -- a collection of object types, we can use MERGE and a single operation again)

Bulk Updates

Kanth, April 27, 2005 - 10:10 am UTC

Tom,
They are pl/sql index-by tables. If you think changing them to sql collections would work, I can discuss that with my group and ask the other developers to use sql collections. But if you can, give me the best approach using both pl/sql tables and sql collections, just in case.

Thanks for your time

Tom Kyte
April 27, 2005 - 10:28 am UTC

ops$tkyte@ORA9IR2> create table emp as select empno, sal from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> create table emp2 as select empno, sal/2 sal from scott.emp where mod(empno,2)=1;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace type myScalarType as object ( empno number, sal number )
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2> create or replace type myArrayType as table of myScalarType
  2  /
 
Type created.
 
ops$tkyte@ORA9IR2> select count(*) from
  2  (( select * from emp minus select * from emp2 ) union all (select * from emp2 minus select * from emp ));
 
  COUNT(*)
----------
        18
 
ops$tkyte@ORA9IR2> declare
  2      l_data myArrayType;
  3  begin
  4      select myScalarType(empno, sal) bulk collect into l_data
  5        from emp;
  6
  7      merge into emp2
  8      using ( select x.empno, x.sal from table( cast(l_data as myArrayType) ) X )  emp
  9      on (emp2.empno = emp.empno)
 10      when matched then update set sal = emp.sal
 11      when not matched then insert (empno,sal) values(emp.empno, emp.sal);
 12  end;
 13  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> ( select * from emp minus select * from emp2 ) union all (select * from emp2 minus select * from emp );
 
no rows selected
 
ops$tkyte@ORA9IR2> rollback;
 
Rollback complete.
 
ops$tkyte@ORA9IR2> select count(*) from
  2  (( select * from emp minus select * from emp2 ) union all (select * from emp2 minus select * from emp ));
 
  COUNT(*)
----------
        18
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2      type numArray is table of number index by binary_integer;
  3      l_empnos  numArray;
  4      l_sals    numArray;
  5  begin
  6      select empno, sal bulk collect into l_empnos, l_sals from emp;
  7
  8      forall i in 1 .. l_empnos.count
  9          merge into emp2
 10          using ( select l_empnos(i) empno, l_sals(i) sal from dual ) emp
 11          on (emp2.empno = emp.empno)
 12          when matched then update set sal = emp.sal
 13          when not matched then insert (empno,sal) values(emp.empno, emp.sal);
 14  end;
 15  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> ( select * from emp minus select * from emp2 ) union all (select * from emp2 minus select * from emp );
 
no rows selected
 
 

quick question

Raajesh, May 02, 2005 - 12:41 pm UTC

Tom

I am trying to do a bulk insert as
insert into ---
(select ......)

My select has a start with / connect by construct as well.
Now, if I run the insert as standalone SQL it works fine, but if I put the same statement inside a stored procedure it does not work; rather, it gives the error "missing SELECT keyword". Am I doing anything wrong? Can you help me out?

Thanks
Raajesh

Tom Kyte
May 02, 2005 - 2:06 pm UTC

give example.

and versions

Examples

raajesh, May 02, 2005 - 2:28 pm UTC

Here is my insert statement
SELECT DISTINCT A.A1,A.A2,'TE',TRUNC(SYSDATE),'NEW'
FROM A, B
WHERE A.A1=B.B1 AND
A.A2=B.B2 AND
A.A3=' ' AND
A.A4 IN ('S1','S2','S3') AND
A.A5 NOT IN ('S4','S5') AND
B.B2 IN ((SELECT B2 FROM C
WHERE C.C1 = 'C1' AND C.C2 <> 'C2' AND C.C3='C3'
START WITH C1 IN (SELECT D1 FROM D WHERE D.D1 IN (SELECT E.E1 FROM E WHERE E.E2 LIKE '1%' AND E.E3='STR3') AND C2='STR4')
CONNECT BY PRIOR C1 = C4 and C5 <> 'STR6')));
This works fine if executed standalone, but not when placed inside the procedure.

Tom Kyte
May 02, 2005 - 7:54 pm UTC

that is not an insert and I don't have the create table structures to test it.

Need example.
And versions.

something I can run - table creates, plsql block.

Bulk and merge

William de Ronde, May 10, 2005 - 10:19 am UTC

Hello Tom,

Considering the following code you provided:

ops$tkyte@ORA9IR2>
declare
l_data myArrayType;
begin
select myScalarType(empno, sal) bulk collect into l_data
from emp;

merge into emp2
using ( select x.empno, x.sal from table( cast(l_data as myArrayType) ) X ) emp
on (emp2.empno = emp.empno)
when matched then update set sal = emp.sal
when not matched then insert (empno,sal) values(emp.empno, emp.sal);
end;
/



I tried to use your "table(cast(some_array))" approach to supply a collection to the merge statement; however, the performance did not meet my expectations. The process is taking too much time now.

So I 'bulk collected' the data in a global temporary table, using:

begin
insert into emp_tmp /* global temporary table */
select empno, sal
  from emp;
merge into emp2
using ( select *
from emp_tmp ) emp
on (emp2.empno = emp.empno)
when matched then update set sal = emp.sal
when not matched then insert (empno,sal) values(emp.empno, emp.sal);
end;
/


This did the job in an acceptable amount of time. Am I missing something here? Is using a global temporary table instead of a conversion from a PL/SQL array to a SQL array really a good solution?

With regards,
William de Ronde


Tom Kyte
May 10, 2005 - 10:46 am UTC

plans? tkprofs are always very useful.

A reader, June 01, 2005 - 4:10 pm UTC

Hi Tom,
Questions:

- While inserting data into the 't2' table, I want to include the primary key of the table, and its value is coming from a sequence.

- I also have to include foreign keys, user_id and time_stamp, while inserting data into a table.

How can I accomplish these with a FORALL/INSERT statement?

Here is the example:


CREATE TABLE t1 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE t2 AS SELECT * FROM t1;

CREATE OR REPLACE PROCEDURE forall_delete(iterations PLS_INTEGER) IS

TYPE NumTab IS TABLE OF t1.pnum%TYPE INDEX BY PLS_INTEGER;
TYPE NameTab IS TABLE OF t1.pname%TYPE INDEX BY PLS_INTEGER;
pnums NumTab;
pnames NameTab;

a INTEGER;
b INTEGER;
c INTEGER;

BEGIN
FOR j IN 1..iterations LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;

a := dbms_utility.get_time;

FOR i IN 1..iterations LOOP -- use FOR loop
INSERT INTO t1 VALUES (pnums(i), pnames(i));
END LOOP;

b := dbms_utility.get_time;

FORALL i IN 1 .. iterations -- use FORALL statement
INSERT INTO t2 VALUES (pnums(i), pnames(i));

c := dbms_utility.get_time;

dbms_output.put_line('Execution Time (secs)');
dbms_output.put_line('---------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR((b - a)/100));
dbms_output.put_line('FORALL: ' || TO_CHAR((c - b)/100));
COMMIT;
END;
/

set serveroutput on
exec forall_delete(500);
exec forall_delete(5000);
exec forall_delete(50000);



Tom Kyte
June 01, 2005 - 5:32 pm UTC

example doesn't seem to illustrate issue? not sure what you need.

A reader, June 03, 2005 - 8:36 am UTC

Hi Tom,

I want to do 'Bulk Inserts' into multiple tables.

For every i'th record in the master table, I have to insert 'n' records in the child table.

How is this possible?

Thanks.


Tom Kyte
June 03, 2005 - 8:45 am UTC

doesn't make sense to me.

A reader, June 03, 2005 - 9:58 am UTC

I want to do a 'bulk insert' into the table.

The table does not have any triggers. Programmatically, I have to insert the sequence value for the primary key. How can I do this using a 'FORALL' statement? I mean, for each row I insert, it should take a unique sequence value. Please help.

Thanks.

Tom Kyte
June 03, 2005 - 10:08 am UTC

ops$tkyte@ORA9IR2> create table t ( x int, y int );
 
Table created.
 
ops$tkyte@ORA9IR2> create sequence s;
 
Sequence created.
 
ops$tkyte@ORA9IR2> declare
  2          type array is table of number;
  3          l_data array := array(1,1,2,3,4);
  4  begin
  5          forall i in 1..l_data.count
  6                  insert into t (x,y) values ( s.nextval, l_data(i) );
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from t;
 
         X          Y
---------- ----------
         1          1
         2          1
         3          2
         4          3
         5          4
 
 

A reader, June 03, 2005 - 10:42 am UTC

Thanks a lot for your timely response Tom!
I have another doubt:

If table 't' has a child table 't1', how can I insert data into the child table for each value of x using forall?

I mean:

table t          table t1
------           --------
X    Y           A    B    X
-    -           -    -    -
1   10           100  a1   1
2   20           101  a1   1
3   30           102  a2   2

Please help.


Tom Kyte
June 03, 2005 - 12:14 pm UTC

no idea, insufficient data to even dream up what the logic is....

Why?

Shahrouz DARVISHZADEH, June 14, 2005 - 6:42 am UTC

Hello Tom,
I am inserting from one database into another for a data warehouse application. In one case I write 5 million rows into the remote database in 5 minutes, using every trick that I learned from you. For another table I insert 17 million rows into the remote database and it takes 2 days until it is finished. In both cases the tables are partitioned, without indexes; same machines, same users, same databases. Could you give me a clue on how to approach this problem?

Thanks
Shahrouz DARVISHZADEH

Tom Kyte
June 14, 2005 - 10:00 am UTC

trace it



Shahrouz DARVISHZADEH, June 15, 2005 - 4:16 am UTC

with
ORADEBUG EVENT # TRACE NAME CONTEXT FOREVER, LEVEL 4
??

Thank you
Darvishzadeh

Tom Kyte
June 15, 2005 - 9:42 am UTC

you don't need oradebug to simply turn on trace.

alter your session and trace it; a simple alter session does it, for example:
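A minimal sketch (both forms are stock commands; level 12 adds wait and bind information to the trace file):

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- or, to capture waits and binds as well:
alter session set events '10046 trace name context forever, level 12';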

Best way to bulk insert NOT from a cursor

Javier Morales, June 15, 2005 - 8:03 am UTC

Hi Tom,

I was trying to do something really simple: just fill a table with 1 million numbers. I tried and traced three ways to do this:

1. Using "insert into" inside a FOR loop.
2. Using "FORALL insert into" with a table of numbers.
3. Using "insert into xxxx select" from a table with more than 1 million rows (1,010,001 rows).

The results are:
SQL> exec fill_vals_1

PL/SQL procedure successfully completed.

Elapsed: 00:03:53.06
SQL> exec fill_vals_2

PL/SQL procedure successfully completed.

Elapsed: 00:02:42.00
SQL> exec fill_vals_3

PL/SQL procedure successfully completed.

Elapsed: 00:00:29.06


I understand why the third procedure takes only 20 seconds, and why the first is slow (each insert is done one row at a time).

The third way does a full scan of the table and then a "sql load", but I can't understand why the second way is so slow compared to the others. It does no physical reads for the v8 table.

Do I need to keep a table full of throwaway data around for these operations? Is there any better way to speed up these executions? Am I missing something?

Thanks in advance, Tom.
Here is the code:



-- PROCEDURE 1: LOOP FOR with INSERT
------------------------------------------------
create or replace procedure FILL_VALS_1 is
begin
  FOR v_counter IN 1..1000000 LOOP
       INSERT INTO TBL_LOOP (NUM) VALUES (v_counter);
  END LOOP;
  COMMIT;
end FILL_VALS_1;


-- PROCEDURE 2: TABLE TYPE with FORALL INSERT
------------------------------------------------
create or replace procedure FILL_VALS_2 is
   TYPE T_COUNT IS TABLE OF number INDEX BY BINARY_INTEGER;
   V_COUNT T_COUNT;
begin
  FOR X IN 1 .. 1000000
  LOOP
       V_COUNT(X):=X;
  END LOOP;
  
  FORALL I IN 1.. V_COUNT.COUNT 
         insert into TBL_LOOP (NUM) 
         VALUES (V_COUNT(I)); 
  COMMIT;
end FILL_VALS_2;


-- PROCEDURE 3: INSERT as SELECT FROM table
------------------------------------------------
create or replace procedure FILL_VALS_3 is
begin
  INSERT INTO TBL_LOOP (NUM) 
      SELECT ROWNUM FROM TEST_JAVIER;
      -- Table TEST_JAVIER has 1 million rows.
  COMMIT;
end FILL_VALS_3;


-- And here's the trace result
-----------------------------------
********************************************************************************
BEGIN fill_vals_1; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.12          0          0          0           0
Execute      1    100.95     109.01          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    100.95     109.13          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90  
********************************************************************************

INSERT INTO TBL_LOOP (NUM) 
VALUES
 (:b1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.45          1          1          0           0
Execute 1000000    119.82     124.02          1       1525    1022221     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   1000001    119.82     124.47          2       1526    1022221     1000000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90     (recursive depth: 1)
********************************************************************************

COMMIT


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.13          0          0          0           0
Execute      3      0.01       1.29          0          0          3           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.01       1.43          0          0          3           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 90     (recursive depth: 1)
********************************************************************************

BEGIN fill_vals_2; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.12          0          0          0           0
Execute      1      1.60      87.29          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.62      87.42          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90  
********************************************************************************

INSERT into TBL_LOOP (NUM) 
         VALUES (:b1)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1     12.32      54.89          0       1730      11669     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     12.34      54.89          0       1730      11669     1000000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90     (recursive depth: 1)
********************************************************************************

BEGIN fill_vals_3; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       3.17          0          0          0           0
Execute      1      0.01       0.65          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       3.83          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90  
********************************************************************************

INSERT INTO TBL_LOOP (NUM) 
      SELECT ROWNUM+:b1 FROM TEST_JAVIER

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.43          0          0          0           0
Execute      1      1.98      23.96       2220       4092      12429     1010001
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.98      24.39       2220       4092      12429     1010001

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 90     (recursive depth: 1)
******************************************************************************** 

Tom Kyte
June 15, 2005 - 10:00 am UTC

putting a million things into an array takes time.
inserting a million things in bulk (but still, one at a time) takes time


I would never put 1,000,000 things in an array. 100, maybe 1,000 and then insert, over and over.

bulk processing is a definite "too much of a good thing" case. You don't want to go overboard and save up 1,000,000 things to do, just do a little, let the database do a little, you do a little, database does a little.


But using SQL is going to be the right answer ( as usual )

Best way to bulk insert NOT from a cursor. Thank you

Javier Morales, June 15, 2005 - 11:06 am UTC

Thank you so much, Tom,

The way you suggested is the best. I tried an array of 1000 numbers and inserted it with a FORALL insert 1000 times, and the time is even better than with SQL alone.
  THE BEST: there's no need to have a table full of throwaway data.
  THE WORST: rows are numbered from 1 to 1000, 1000 times.

SQL> exec fill_vals_4

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.03

The point is that I don't get rows numbered from 1 to 1 million, but for my test that's perfect.

Thanks, Tom.


-- PROCEDURE 4: 1000 times TABLE TYPE with FORALL INSERT
------------------------------------------------
create or replace procedure FILL_VALS_4 is
   TYPE T_COUNT IS TABLE OF number INDEX BY BINARY_INTEGER;
   V_COUNT T_COUNT;
begin
  FOR X IN 1 .. 1000           --1000 values to the TABLE
  LOOP
       V_COUNT(X):=X;
  END LOOP;
  for x in 1..1000             --1000 times a FORALL INSERT
  loop
  FORALL I IN 1.. V_COUNT.COUNT 
         insert into TBL_LOOP (NUM) 
         VALUES (V_COUNT(I)); 
  end loop;
  COMMIT;
end FILL_VALS_4;



-- and here's the trace
--------------------------------------
********************************************************************************

BEGIN fill_vals_4; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.14       0.27          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.14       0.27          0          0          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 90  (JAVIER)
********************************************************************************

INSERT into TBL_LOOP (NUM)
         VALUES (:b1)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1000      1.79       2.50          0       1732      14039     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1001      1.79       2.50          0       1732      14039     1000000

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 90  (JAVIER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   GOAL: CHOOSE

 

Tom Kyte
June 15, 2005 - 3:37 pm UTC

insert as select - if it can be done -- is the best.

You had wait events in one that you did not in the other. if you are moving data from table A to table B - writing procedural code *is just wrong*

Best way to bulk insert NOT from a cursor. Thank you

Javier Morales, June 16, 2005 - 4:23 am UTC

Hi Tom,

Thank you so much for your explanation.

No, it's not a data movement. In this case I only wanted to fill a table with a million rows for testing (no real data).

You said "You had wait events in one that you did not in the other," but I can't find those waits anywhere. So, in the first case I'm reading a table with 1 million rows (which takes time to read), and in the other case I'm just increasing a counter in a v8 table (which I thought would take less time) to 1000 and doing "1000 bulk inserts" myself.

Am I "abusing" my PGA stack? Do the waits you mention exist because of freelist waits (1000 inserts)?

I tested "insert as select" vs. "1000 bulk inserts of a v8 table" with runstats. Please, can you explain these waits and what they impact (freelists? or where else)? I only see less redo, fewer physical reads, fewer latches and less time, versus an increase of about 1,000 in STAT...calls to get snapshot s, STAT...execute count and STAT...recursive calls.

Thanks in advance for your help and advice,
Javier

*****************************************************
SQL> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

SQL> exec fill_vals_3;

PL/SQL procedure successfully completed.

SQL> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

SQL> exec fill_vals_4;

PL/SQL procedure successfully completed.

SQL> exec runStats_pkg.rs_stop;
Run1 ran in 1645 hsecs
Run2 ran in 1190 hsecs
run 1 ran in 138.24% of the time

Name                                  Run1        Run2        Diff
LATCH.kwqit: protect wakeup ti           1           0          -1
LATCH.session idle bit                  15          14          -1
LATCH.session timer                      6           5          -1
STAT...CR blocks created                 1           2           1
STAT...commit cleanouts succes           1           2           1
STAT...data blocks consistent            1           2           1
STAT...index scans kdiixs1               1           2           1
STAT...opened cursors current            3           2          -1
STAT...parse time cpu                    0           1           1
STAT...user commits                      1           0          -1
STAT...table scans (long table           1           0          -1
STAT...shared hash latch upgra           1           2           1
STAT...rollbacks only - consis           1           2           1
STAT...redo synch writes                 1           0          -1
STAT...pinned buffers inspecte           9          10           1
STAT...parse time elapsed                0           1           1
STAT...parse count (total)               9          10           1
STAT...opened cursors cumulati           9          10           1
STAT...index fetch by key                1           2           1
STAT...cursor authentications            2           1          -1
STAT...cluster key scans                 1           2           1
LATCH.channel operations paren          10           8          -2
STAT...bytes received via SQL*         879         877          -2
STAT...commit txn count during           0           2           2
STAT...table fetch by rowid              2           4           2
STAT...redo log space requests           2           0          -2
STAT...cluster key scan block            2           4           2
STAT...calls to kcmgas                   9           6          -3
STAT...deferred (CURRENT) bloc           1           4           3
LATCH.FIB s.o chain latch                4           0          -4
STAT...hot buffers moved to he          10           6          -4
STAT...consistent changes              495         499           4
STAT...write clones created in           6           2          -4
LATCH.FOB s.o list latch                 4           0          -4
STAT...redo buffer allocation            2          15          13
LATCH.active checkpoint queue           26          12         -14
STAT...recursive cpu usage             165         179          14
STAT...redo log space wait tim          14           0         -14
STAT...messages sent                    88          65         -23
STAT...CPU used by this sessio         167         191          24
STAT...CPU used when call star         167         191          24
STAT...enqueue requests                340         315         -25
LATCH.Consistent RBA                    91          65         -26
LATCH.lgwr LWN SCN                      91          65         -26
LATCH.mostly latch-free SCN             91          65         -26
STAT...enqueue releases                342         313         -29
LATCH.row cache objects                 76          44         -32
STAT...change write time                26          62          36
STAT...consistent gets - exami         229         190         -39
LATCH.row cache enqueue latch           68          28         -40
STAT...cleanout - number of kt         227         185         -42
STAT...active txn count during         227         184         -43
LATCH.post/wait queue                  182         133         -49
STAT...commit cleanouts                 65           2         -63
STAT...commit cleanout failure          64           0         -64
LATCH.SQL memory manager worka         335         268         -67
LATCH.enqueues                         701         634         -67
LATCH.session allocation                96           4         -92
LATCH.redo writing                     312         215         -97
LATCH.simulator lru latch              261         117        -144
STAT...calls to kcmgcs               1,670       1,519        -151
LATCH.library cache pin alloca         237          82        -155
LATCH.dml lock allocation              195           5        -190
LATCH.simulator hash latch           1,257       1,063        -194
LATCH.messages                         580         384        -196
LATCH.undo global data                 417         199        -218
LATCH.multiblock read objects          292           0        -292
LATCH.enqueue hash chains              987         641        -346
STAT...Elapsed Time                  1,655       1,195        -460
STAT...free buffer inspected         2,046       1,510        -536
STAT...dirty buffers inspected       2,037       1,500        -537
STAT...redo entries                 11,448      10,885        -563
STAT...db block gets                12,742      12,113        -629
LATCH.cache buffer handles             670           0        -670
LATCH.redo allocation               11,785      11,040        -745
LATCH.shared pool                      360       1,158         798
STAT...db block changes             19,865      19,020        -845
STAT...calls to get snapshot s          13       1,010         997
STAT...execute count                     9       1,009       1,000
STAT...recursive calls                  28       1,045       1,017
LATCH.checkpoint queue latch         4,991       3,748      -1,243
LATCH.library cache                    612       2,263       1,651
LATCH.library cache pin                307       2,141       1,834
STAT...prefetched blocks             2,042           0      -2,042
STAT...buffer is not pinned co       2,196          16      -2,180
STAT...physical reads                2,188           7      -2,181
STAT...no work - consistent re       2,193          10      -2,183
STAT...table scan blocks gotte       2,188           0      -2,188
STAT...free buffer requested         4,174       1,835      -2,339
STAT...consistent gets               4,079       1,721      -2,358
STAT...session logical reads        16,821      13,834      -2,987
LATCH.cache buffers lru chain       10,483       4,956      -5,527
LATCH.cache buffers chains          76,395      63,809     -12,586
STAT...table scan rows gotten    1,010,001           0  -1,010,001
STAT...redo size                17,749,668  14,483,544  -3,266,124

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
111,961      93,189     -18,772    120.14%

PL/SQL procedure successfully completed.

Tom Kyte
June 16, 2005 - 9:48 am UTC

INSERT INTO TBL_LOOP (NUM)
      SELECT ROWNUM+:b1 FROM TEST_JAVIER

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.43          0          0          0           0
Execute      1      1.98      23.96       2220       4092      12429     1010001
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.98      24.39       2220       4092      12429     1010001

Misses in library cache during parse: 1

You see the big difference between elapsed and cpu, you had waits, IO would be part of it in this case.

inserting streaming data

sam, July 20, 2005 - 1:52 pm UTC

Tom,

I searched to see whether you have already answered this question; I did not find it.

The question is: we are getting a stream of data that needs to be inserted into a table (volume is 40 rows/sec, around 300 bytes/row).

Do you have any suggestions for table-level and db-level parameters we need to look at?

Tom Kyte
July 21, 2005 - 7:40 am UTC

bulk inserts (save up 100 rows, insert them -- see the sketch after this list) and this will not be too much of a big deal.

you'll want to monitor redo generation.

you'll want to minimize the number of indexes.

if you have multiple feeders of this table, you'll want to look at either ASSM (automatic segment space management) or setting of multiple freelists.

you'll want to design for the purge process that must be in there somewhere (partition)
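A minimal sketch of the first point, assuming a one-column target table t(msg varchar2(300)) and a stand-in cursor for the stream (all names here are illustrative):

declare
    type msgtab is table of varchar2(300) index by pls_integer;
    l_buf msgtab;
begin
    for r in ( select msg from incoming_feed )  -- stand-in for the stream
    loop
        l_buf( l_buf.count + 1 ) := r.msg;
        if l_buf.count >= 100 then               -- flush every 100 rows
            forall i in 1 .. l_buf.count
                insert into t ( msg ) values ( l_buf(i) );
            l_buf.delete;
        end if;
    end loop;
    forall i in 1 .. l_buf.count                 -- flush the tail
        insert into t ( msg ) values ( l_buf(i) );
    commit;
end;
/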



inserting streaming data

sam, July 21, 2005 - 12:22 pm UTC

Tom,

Thanks for the reply.

One more fact to add: it's a third-party API, doing a commit after every row (we cannot change it). Also, only one API is inserting, so I guess freelists won't help.

So whatever we can do is on the db side.

We clean up that table every night. (Anything specific we need to do when we clean up?)

With this additional info, can you suggest something?



Tom Kyte
July 21, 2005 - 4:50 pm UTC

"pray" would be good.

if you "clean up" the table every night -- whatever that means -- why even keep it?

the 3rd party API vendor would be the one responsible for telling you what to do to make this feasible and good on your system. if you are buying software from someone, they should be able to tell you how to make it work. If they cannot, you should not buy said software from them?

inserting streaming data

sam, July 22, 2005 - 11:53 am UTC

Tom,

Cleaning = archiving.

Sorry for using the wrong words.

Can we look at something at the table level, like pre-allocating space, etc., after we archive rows?

Thanks for your patience.



Tom Kyte
July 22, 2005 - 12:55 pm UTC

I'd be looking at partitioning the underlying table and hoping the 3rd party api had a timestamp column to partition on.

so you don't 'clean', you just roll data off the end.

but again, what does the 3rd party api supplier say? it is their stuff, what is their solution to this problem?

inserting streaming data

sam, July 22, 2005 - 1:01 pm UTC

Tom,

One more thing.

The s/w is 6-7 years old. The vendor does not provide any support for it, so whatever we can do is on the db side.

Now you're wondering which industry I am in?

You guessed right: banking (government).




RETURNING ROW

Yuan, August 19, 2005 - 1:53 pm UTC

I'd like to do something like this:

DECLARE
lrShipment Shipment%ROWTYPE;
BEGIN
INSERT INTO Shipment VALUES lrShipment
RETURNING ROW INTO lrShipment;
END;

The reason for this is so that if there are any fields altered by triggers, then lrShipment will reflect what was actually inserted into Shipment.

Unfortunately, I get this error:
PL/SQL: ORA-00904: "ROW": invalid identifier

I also tried RETURNING * but that didn't work either. Is there any way I can do this without listing out all the columns in the RETURNING clause?


Tom Kyte
August 20, 2005 - 4:37 pm UTC

nope, if you don't want to type the names out, you would have to select it back out. (I would opt for typing, myself. You could even generate a stored procedure that, given a table name, generates a procedure that accepts a record of that table%type, does the insert, and returns the values column by column into the record -- it would read the data dictionary to generate this procedure.)
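A hand-written example of the shape such a generated procedure would take, for a single table (assuming the classic DEPT table; a generator would read user_tab_columns to emit the same thing for any table):

create or replace procedure insert_dept( p_rec in out dept%rowtype )
as
begin
    insert into dept ( deptno, dname, loc )
    values ( p_rec.deptno, p_rec.dname, p_rec.loc )
    returning deptno, dname, loc
         into p_rec.deptno, p_rec.dname, p_rec.loc;
end;
/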



Yuan, September 02, 2005 - 9:29 am UTC

Thanks. Would be nice if it could return the whole row in the future.

Tom Kyte
September 03, 2005 - 7:15 am UTC

the only way to get your enhancement requests logged and taken into consideration is via an iTar with support!

TAR Created

Yuan, September 06, 2005 - 9:40 am UTC

Done. Thanks again.

Bulk Insert and RETURNING clause

Praveen, October 17, 2005 - 9:02 am UTC

Hi Tom,


I am trying to achieve the following :

Step1:
INSERT INTO new_table (newcol1, newcol2)
SELECT oldcol1, oldcol2
FROM old_table
WHERE <predicate>;

Step2:
SELECT oldcol1 INTO l_array
SELECT oldcol1, oldcol2
FROM old_table
WHERE <predicate>;

Step3:
FORALL i IN l_array.FIRST .. l_array.LAST
INSERT INTO another_new_table
SELECT * FROM another_old_table
WHERE col3 = l_array(i);

Is it possible to combine Step1 and Step2 into a single step using the RETURNING clause? It's giving a "SQL command not properly ended" error when I try.

Regards
Praveen

Tom Kyte
October 17, 2005 - 9:28 am UTC

why not

INSERT INTO new_table (newcol1, newcol2)
SELECT oldcol1, oldcol2
FROM old_table
WHERE <predicate>;

insert into another_new_table
SELECT * FROM another_old_table
WHERE col3 IN ( SELECT oldcol2 BULK COLLECT INTO l_array
FROM old_table
WHERE <predicate> );


?? no code.

Correction

Praveen, October 17, 2005 - 9:05 am UTC

Step2:
SELECT oldcol2 BULK COLLECT INTO l_array
FROM old_table
WHERE <predicate>;

Thanks
Praveen

Re:Bulk Insert and RETURNING clause

Praveen, October 18, 2005 - 1:07 am UTC

Thank you, Tom. The answer is most appropriate.

But two doubts:

1) Why did you use "BULK COLLECT INTO l_array" instead of the more straightforward -

INSERT INTO another_new_table
SELECT * FROM another_old_table
WHERE col3 IN ( SELECT oldcol2 FROM old_table
WHERE <predicate> );
(This came into my mind only after seeing your answer)

2) I came to know that the DBMS_SQL package allows us to RETURN col2 from old_table into l_array in dynamic SQL, and then bulk insert from l_array into another_new_table using FORALL.

Of the three methods, including yours, which one will perform best?

Tom Kyte
October 18, 2005 - 8:57 am UTC


My mantra:

if you can do it in a single sql statement do it.......

Re:Bulk Insert and RETURNING clause

Praveen, October 18, 2005 - 7:07 am UTC

Tom,

The statement -

insert into another_new_table
SELECT * FROM another_old_table
WHERE col3 IN ( SELECT oldcol2 BULK COLLECT INTO l_array
FROM old_table
WHERE <predicate> );

is throwing the compile error "ORA-01744: inappropriate INTO".

I am working on 10g.


Tom Kyte
October 18, 2005 - 9:13 am UTC

lose the bulk collect into l_array, i just cut and pasted your text.

insert into another_new_table
SELECT * FROM another_old_table
WHERE col3 IN ( SELECT oldcol2
FROM old_table
WHERE <predicate> )

Thankyou, TOM

Praveen, October 19, 2005 - 12:43 am UTC


Bulk merge using tables

A reader, October 28, 2005 - 4:55 pm UTC

Hi Tom, reading your previous example: is there a way to bulk merge if my data is in individual table variables?

I have my data this way:

FORALL i IN 1 .. nCountTransac INSERT INTO TRANSACE
VALUES( nTRC_CORR(i), cCodigoTransacI, cTRC_CUENTA(i),
nTRC_CCO_GRUPO(i), 0, -- TRC_MONTO_ORIGEN

I would like, without creating a type, to move to:

forall ....
merge

Will there be a performance issue, or do you have any comments? Is it mandatory to use a type?

Thank you


Tom Kyte
October 29, 2005 - 10:54 am UTC

I did not understand this.

A reader, October 29, 2005 - 6:50 pm UTC

Thanks Tom,
Sorry, it was my mistake I am using bulk merge and I lost performance.
There I have severa records repeated, so it has to do an insert, then an update, an update, etc. For the same record.

The reason is that I was too optimistic and expected the bulk merge, will do a group by based on the condition to avoid makeing unnecesary updates.

Instead of bulk merge I'll move to a temporary table and I'll insert from there.




select individual column from table type

Ravi, November 08, 2005 - 7:58 pm UTC

Hi Tom,
       I've got a problem: when I am trying to select a single column, I am getting an error.
SQL> CREATE OR REPLACE PROCEDURE dummy
  2  IS
  3     CURSOR emp_cur
  4     IS
  5        SELECT empno, ename
  6          FROM emp;
  7  
  8     TYPE emp_tb IS TABLE OF emp_cur%ROWTYPE;
  9  
 10     myobj   emp_tb;
 11  BEGIN
 12     OPEN emp_cur;
 13  
 14     LOOP
 15        FETCH emp_cur
 16        BULK COLLECT INTO myobj;
 17     for i in 1..myobj.count loop
 18     dbms_output.put_line(myobj.empno(i));
 19     end loop;
 20     END LOOP;
 21  END;
 22  /

Warning: Procedure created with compilation errors.

SQL> show errors
Errors for PROCEDURE DUMMY:

LINE/COL ERROR
-------- -----------------------------------------------------------------
18/4     PL/SQL: Statement ignored
18/31    PLS-00302: component 'EMPNO' must be declared
SQL> 
 
can you please help me in fixing this problem
Thank you in advance for your valuable time 

Tom Kyte
November 08, 2005 - 10:52 pm UTC

ops$tkyte@ORA10GR2> CREATE OR REPLACE PROCEDURE dummy
  2  IS
  3     CURSOR emp_cur
  4     IS
  5        SELECT empno, ename
  6          FROM emp;
  7
  8     TYPE emp_tb IS TABLE OF emp_cur%ROWTYPE;
  9
 10     myobj   emp_tb;
 11  BEGIN
 12     OPEN emp_cur;
 13     FETCH emp_cur BULK COLLECT INTO myobj;
 14     for i in 1..myobj.count loop
 15        dbms_output.put_line(myobj(i).empno);
 16     END LOOP;
        close emp_cur;
 17  END;
 18  /

Procedure created.


good thing it didn't work :) it was an infinite loop your way!! 

Thank you, you are a genius

Ravi, November 08, 2005 - 11:17 pm UTC

I nearly spent one day looking for a solution; thank you for all your time and help.

Cursors Vs PLSQL Tables(Bulk Collect)

Ravi Kumar, November 09, 2005 - 6:29 am UTC

Hi Tom..

Can you please tell me which of the two pieces of code below is better, and why?

1)
PROCEDURE rtst1 IS
CURSOR c1 IS
SELECT ename,job
FROM emp
WHERE job = 'MANAGER' AND SAL > 2500;

c1_rec c1%ROWTYPE;

BEGIN

OPEN c1;
LOOP
FETCH c1 INTO c1_rec;
EXIT WHEN c1%NOTFOUND;
----DO SOMTHING----
END LOOP;
CLOSE c1;
END;

2)
PROCEDURE rtst1 IS
TYPE rtype IS RECORD(ename emp.ename%TYPE,
job emp.job%TYPE);

TYPE ttype IS TABLE OF rtype INDEX BY PLS_INTEGER;

ttable ttype;
BEGIN
SELECT ename,job BULK COLLECT INTO ttable
FROM emp
WHERE job = 'MANAGER' AND SAL > 7681;

FOR i IN ttable.first .. ttable.last LOOP
----DO SOMETHING----
END LOOP;
END;

And if there is anything better than these, please explain that.

Thanks & Regards
Ravi Kumar




Tom Kyte
November 10, 2005 - 5:27 pm UTC

I'm very fond of option 3

begin
for x in ( select .... )
loop
process...
end loop;
end;


in 10g, that'll array fetch 100 rows at a time and process them nicely. simple code, no messy bulk collects.


In 9i and before - if the number of rows was sizable (more than a couple of dozen), I would use bulk collect with the limit clause

begin
open c;
loop
fetch c bulk collect into array LIMIT 100;
for i in 1 .. array.count
loop
process...
end loop;
exit when c%notfound;
end loop;
close c;
end;
/


so, neither 1 nor 2 :)
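
To make that second skeleton concrete - a minimal runnable sketch against SCOTT.EMP, adding only the declarations and a stand-in for "process..." that the pseudocode above leaves out:

declare
    cursor c is select ename, job from emp where job = 'MANAGER';
    type array_t is table of c%rowtype index by binary_integer;
    l_array array_t;
begin
    open c;
    loop
        fetch c bulk collect into l_array limit 100;
        for i in 1 .. l_array.count
        loop
            -- "process..." goes here; dbms_output stands in for real work
            dbms_output.put_line( l_array(i).ename || ' - ' || l_array(i).job );
        end loop;
        exit when c%notfound;
    end loop;
    close c;
end;
/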



need suggestion

Ravi, November 09, 2005 - 5:44 pm UTC

Hi Tom,
Can you please suggest which way to go for a bulk insert?
In my case I am trying to insert about 15 million records into a table consisting of about 50 columns.
Which of these two ways should I take?
1) insert into t1 select * from t2 where <some conditions>;
or
2) using BULK COLLECT with the LIMIT option, along with FORALL and the /*+ APPEND */ hint

Can you please explain how the first option works, performance wise? I got that technique from you, Tom.

Thank you
Ravi

Tom Kyte
November 11, 2005 - 10:29 am UTC

if it takes 1 to 2 days to insert such a small number of rows - there is something dead seriously wrong with your system here.


why does it take 1 to 2 days!!!!! that should take minutes.

Bulk insert failure

sid, November 10, 2005 - 1:11 am UTC

Hi Tom,

I need to know: if there is a multiple-row or bulk insert going on against a table and it fails in the middle of the loop due to some data inconsistency,

will the earlier data be inserted into the table, or will it just abort without inserting anything into the table?

Appreciate your help.

Tom Kyte
November 11, 2005 - 11:37 am UTC

individual statements either entirely succeed or entirely fail. Each insert is atomic.

You can use save exceptions to change this bulk behavior.


So, in general if you

forall i in 1 .. N insert into t values ( ..... );


they either all succeed or not.


but if you use the save exceptions clause, you can change that behaviour.
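
A minimal sketch of that (the table T with a NOT NULL ID column is assumed here purely for illustration; ORA-24381 is the error FORALL raises when SAVE EXCEPTIONS captured any failures):

declare
    type num_array is table of number index by binary_integer;
    l_ids       num_array;
    bulk_errors exception;
    pragma exception_init( bulk_errors, -24381 );
begin
    for i in 1 .. 5 loop
        l_ids(i) := i;
    end loop;
    l_ids(3) := null;  -- this row will violate the assumed NOT NULL constraint

    begin
        forall i in 1 .. l_ids.count save exceptions
            insert into t ( id ) values ( l_ids(i) );
    exception
        when bulk_errors then
            -- the good rows are inserted; the failed ones are reported here
            for j in 1 .. sql%bulk_exceptions.count
            loop
                dbms_output.put_line( 'row ' ||
                    sql%bulk_exceptions(j).error_index || ' failed: ' ||
                    sqlerrm( -sql%bulk_exceptions(j).error_code ) );
            end loop;
    end;
end;
/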

got my question wrong

Ravi, November 11, 2005 - 10:56 am UTC

I never said it is taking 1 to 2 days; somehow you might have got it wrong. I just wanted to know which option I should use, i.e. which one has optimal performance:

1) Insert into T1 select * from T2 where T2.C1 = 'constant'

or use

2) cursor with BULK COLLECT along with LIMIT option.

If I use the first option, will there be any memory problems or rollback segment problems (snapshot too old)? In my case I am trying to insert about 15 million records with 50 columns.

I really need your suggestion, Tom. I am planning to use the first option but am worried about memory and rollback segments, as I am not committing until the end of the 15 million records.
Thank you for your quick response and time, Tom

Thank you
Ravi

Tom Kyte
November 12, 2005 - 8:41 am UTC

sorry, I apparently read "two ways" as "two days" - I apologize for that.


#1 always wins if you ask me. I would not consider writing procedural code to accomplish what a single sql statement can do.


it'll take less memory.
it'll generate less overall rollback, redo. 15million records is pretty small these days.

You cannot use bulk collect/forall with APPEND - append would only work on the insert as select statement from #1

Thank you for the great piece of information

Ravi, November 12, 2005 - 1:27 pm UTC

I will consider the first method in my code.
(insert into t1 select * from t2). Thank you for the information; what you provided is priceless to me.

Thank you
Ravi

q on bulk collect versus single select

Menon, November 15, 2005 - 8:18 pm UTC

"The fastest way is:

insert into t select * from t2;
"

In our database, some of the sort and hash area size settings are quite small (read: default). This, of course, needs to be fixed. One thing I am observing is that when I do things in a single statement, it seems to take much more time than when I do things using bulk processing with an appropriate limit clause. Is it possible that our db configuration is skewed in a way that makes bulk operations faster, since we do things in small steps, consuming less memory at a time, rather than single statements, which consume more resources at once while running? In other words, does the fact that the sort and hash area sizes are set to somewhat small values explain this behaviour (bulk operations being faster than single statements) at all?

Tom Kyte
November 16, 2005 - 8:38 am UTC

compare the plans - are the plans for the bulk processing different from the single-statement plans?

bulk collect and insert select

A reader, November 17, 2005 - 2:16 pm UTC

Hi

I have some code which looks like this:

select ing_id
bulk collect
into listacodigo
from eng
order by ing_name;


FORALL i IN 1 .. listacodigo.COUNT
INSERT INTO histtel
(SELECT tahesmsm_tra_codigo, tahesmsm_tra_msisdn
FROM tahesmsm_tra_teltrab
WHERE tahesmsm_tra_codigo = listacodigo (i));
FORALL i IN 1 .. listacodigo.COUNT
DELETE tahesmsm_tra_teltrab
WHERE tahesmsm_tra_codigo = listacodigo (i);

My question is about the insert-select: will that be faster than the following? Imagine the cursor returns 500,000 rows.

for i in (select ing_id
from eng
order by ing_name)
loop
INSERT INTO histtel
(SELECT tahesmsm_tra_codigo, tahesmsm_tra_msisdn
FROM tahesmsm_tra_teltrab
WHERE tahesmsm_tra_codigo = i.ing_id);
end loop;

Tom Kyte
November 18, 2005 - 9:58 am UTC

why not just an insert and delete?

insert into histtel select ... from ... where column in (select ... );
delete from .... where column in (select ... );


If needed, you would use serializable or flashback query in the subquery to get a totally consistent read on the listacodigo table.
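
Spelled out against the tables from the question, that would look something like this (just a sketch; the serializable/flashback caveat above applies if the driving table can change between the two statements):

insert into histtel
select tahesmsm_tra_codigo, tahesmsm_tra_msisdn
  from tahesmsm_tra_teltrab
 where tahesmsm_tra_codigo in ( select ing_id from eng );

delete from tahesmsm_tra_teltrab
 where tahesmsm_tra_codigo in ( select ing_id from eng );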

UPDATE using BULK COLLECT and FORALL

Ravi, November 23, 2005 - 12:32 am UTC

Hi Tom,
      Can you please help me fix this problem? I am trying to bulk insert and update using the FORALL clause and I get the errors below. Here is my setup:

SQL> select * from v$version;
  

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
      
SQL> create table emp1 as select empno, ename, deptno from emp where 1 = 2;

Table created.

created an empty table EMP1 like EMP

SQL> select count(*) from emp1;

  COUNT(*)
----------
         0

created a sample table based on EMP, with the flag set to 'N'

SQL> create table update_emp1 as select empno, 'N' flag from emp;

Table created.

SQL> select * from update_emp1;

     EMPNO F
---------- -
      7369 N
      7499 N
      7521 N
      7566 N
      7654 N
      7698 N
      7782 N
      7788 N
      7839 N
      7844 N
      7876 N

     EMPNO F
---------- -
      7900 N
      7902 N
      7934 N

14 rows selected.


CREATE OR REPLACE PROCEDURE dummy 
IS
   CURSOR emp_cur 
   IS
      SELECT empno, ename,deptno
        FROM emp where deptno = 20;

   TYPE emp_tb IS TABLE OF emp_cur%ROWTYPE index by binary_integer;

   myobj   emp_tb;
BEGIN
   OPEN emp_cur;

   LOOP
      FETCH emp_cur
      BULK COLLECT INTO myobj;

      FOR i IN 1 .. myobj.COUNT
      LOOP
         DBMS_OUTPUT.put_line (myobj (i).ename);
      END LOOP;

      FORALL i IN 1 .. myobj.COUNT
         INSERT /*+append*/INTO emp1
              VALUES myobj (i);
      
      Forall i IN 1 .. myobj.COUNT
      UPDATE UPDATE_EMP1 
      SET FLAG = 'Y' 
      WHERE EMPNO = MYOBJ(i).EMPNO;
         
      COMMIT;
      EXIT WHEN emp_cur%NOTFOUND;
   END LOOP;

   CLOSE emp_cur;
END;
/

When I compile it, I get the errors below:
SQL> show errors
Errors for PROCEDURE DUMMY:

LINE/COL ERROR
-------- -----------------------------------------------------------------
15/7     PL/SQL: SQL Statement ignored
16/25    PLS-00597: expression 'MYOBJ' in the INTO list is of wrong type
28/7     PL/SQL: SQL Statement ignored
30/21    PLS-00436: implementation restriction: cannot reference fields of
         BULK In-BIND table of records

30/29    PL/SQL: ORA-00904: : invalid identifier

Can you please help me fix the above problems?

Thank you,
Ravi
 

Tom Kyte
November 23, 2005 - 9:35 am UTC

30/21    PLS-00436: implementation restriction: cannot reference fields of
         BULK In-BIND table of records



"implementation restriction", you cannot bulk bind that.


Now, THANKFULLY this isn't obeyed either:

         INSERT /*+append*/INTO emp1
              VALUES myobj (i);


the append hint works on insert as select only (else you would end up with 1 row per block, and never reuse any blocks!!!!!!)



You will have to modify your logic.  What you are trying to code isn't valid plsql.


depending on the volume of data here (must be small if you are bulk collecting them all into memory) you could use a serializable transaction and just


insert into t2 select * from t1 where <predicate> and flag = 'N';
update t1 set flag = 'Y' where <predicate> and flag = 'N';


Or, using flashback query to get a read consistent view of the tables (that is what I assume you are trying to achieve - to make sure you mark only rows processed that were processed - avoiding marking rows processed that were added during the run)


ops$tkyte@ORA10GR2> create table emp as select 'N' flag, emp.* from scott.emp;

Table created.

ops$tkyte@ORA10GR2> create table emp1 as select * from emp where 1=0;

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_lock.sleep(10)

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> CREATE OR REPLACE PROCEDURE dummy (p_deptno in number )
  2  IS
  3      l_scn number;
  4  BEGIN
  5      l_scn := sys.dbms_flashback.get_system_change_number;
  6
  7      insert into emp1
  8      select *
  9        from emp as of scn l_scn
 10       where deptno = p_deptno
 11         and flag = 'N';
 12
 13      update emp
 14         set flag = 'Y'
 15       where empno in ( select empno
 16                          from emp as of scn l_scn
 17                         where deptno = p_deptno
 18                           and flag = 'N' );
 19
 20  end;
 21  /

Procedure created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select empno, flag from emp where deptno = 20;

     EMPNO F
---------- -
      7369 N
      7566 N
      7788 N
      7876 N
      7902 N

ops$tkyte@ORA10GR2> select empno, flag from emp1 where deptno = 20;

no rows selected

ops$tkyte@ORA10GR2> exec dummy(20);

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select empno, flag from emp where deptno = 20;

     EMPNO F
---------- -
      7369 Y
      7566 Y
      7788 Y
      7876 Y
      7902 Y

ops$tkyte@ORA10GR2> select empno, flag from emp1 where deptno = 20;

     EMPNO F
---------- -
      7369 N
      7566 N
      7788 N
      7876 N


or, by using a "tri-valued flag", the third value of which is never seen outside the transaction:


ops$tkyte@ORA10GR2> CREATE OR REPLACE PROCEDURE dummy (p_deptno in number )
  2  IS
  3  BEGIN
  4          update emp
  5             set flag = 'x'
  6           where deptno = p_deptno
  7             and flag = 'N';
  8
  9      insert into emp1
 10      select *
 11        from emp
 12       where deptno = p_deptno
 13         and flag = 'x';
 14
 15      update emp
 16         set flag = 'Y'
 17       where deptno = p_deptno
 18             and flag = 'x';
 19  end;
 20  /

Procedure created.

 

Thank you for the quick and elaborate response

RAVI, November 23, 2005 - 10:25 am UTC

Thank you very much; I am thankful for the knowledge you provided, and thank you for your valuable time.

Thank you,
Ravi

Tom, can you please explain

Ravi, November 23, 2005 - 10:47 am UTC

Hi Tom,
Can you please provide some information on the use and workings of the calls below, or a reference to where I can find information about them?

1) exec dbms_lock.sleep(10);
2) sys.dbms_flashback.get_system_change_number;

Thank you,
Ravi


Tom Kyte
November 23, 2005 - 11:01 am UTC

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/toc.htm

(in earlier releases this was known as the supplied packages guide)

That was a real quick response

RAVI, November 23, 2005 - 11:02 am UTC

Thank you,
Ravi

dynamic bulk collect

kant, December 08, 2005 - 10:49 am UTC

Tom, I have to fetch some values from a query, bulk collect them into an array, and update using those values.
The key is, I need to select using a dynamic query.

--v_array is a sql type;

Eg: Execute immediate 'select empno,ename sal from emp bulk collect into :v_array ' using OUT v_array;

I know the syntax is wrong. Can you help me achieve this?

thanks,


Tom Kyte
December 08, 2005 - 12:00 pm UTC

ops$tkyte@ORA10GR2> declare
  2          type array is table of scott.emp%rowtype;
  3          l_data array;
  4          l_cursor sys_refcursor;
  5  begin
  6          execute immediate
  7          'select * from scott.emp' bulk collect into l_data;
  8          dbms_output.put_line( l_data.count );
  9  end;
 10  /
14

PL/SQL procedure successfully completed.

 

Whether Object type creation is only option

Rajeswari, February 01, 2006 - 5:54 am UTC

Tom,

I am not sure whether the same question has already been answered on AskTom; if so, please provide the link.

Our requirement is that we select 100,000 records from different tables using a bulk collect operation (since we have in-between processing) and insert them into another table, say table2.

Now my question is: if the table contains 20 columns and 10 of them are not inserted during the insert operation, do we have to create 10 user-defined types, or an object type for the 10 columns? I have given the example below to explain my requirement, not to point at the error, as I got the answer for that from this link: https://asktom.oracle.com/Misc/oramag/on-procedures-flushes-and-writes.html

Currently we are creating 10 user defined variables because of the implementation restriction error.

Is there any other option? Or will this be supported in future releases? Even though the original answer (from 2001) provides a solution, I am asking because I expect there may be some new features for these types.

create table t3(col1 varchar2(30),col2 number,col3 varchar2(30),col4 number,col5 number);


declare
cursor c is select t.table_name, i.index_name
from all_tables t, all_indexes i
where t.table_name = i.table_name
and t.owner = i.table_owner
and rownum < 10;
type t_tab is table of c%rowtype index by binary_integer;
l_tab t_tab;
proc number;
begin
open c;
loop
fetch c bulk collect into l_tab limit 100;
for cntr in 1..l_tab.count loop
proc := proc * 0.1; --some process
end loop;
forall cntr in 1..l_tab.count
insert into t3(col1,col2,col3) values(l_tab(cntr).table_name,proc,l_tab(cntr).owner);
exit when c%notfound;
end loop;
end;
/

scott@DEV74> /
insert into t3(col1,col2,col3) values(l_tab(cntr).table_name,proc,l_tab(cntr).owner);
*
ERROR at line 18:
ORA-06550: line 18, column 50:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records


select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE 9.2.0.6.0 Production
TNS for HPUX: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production



Tom Kyte
February 02, 2006 - 3:40 am UTC

only if you use object types and select from a collection:

ops$tkyte@ORA9IR2> create table emp as select empno,ename,job,mgr from scott.emp;

Table created.

ops$tkyte@ORA9IR2> create table emp2 as select ename, empno from emp where 1=0;

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace type myScalarType as object (
  2  EMPNO     NUMBER(4),
  3  ENAME     VARCHAR2(10),
  4  JOB       VARCHAR2(9),
  5  MGR       NUMBER(4)
  6  )
  7  /

Type created.

ops$tkyte@ORA9IR2> create or replace type myArrayType as table of myScalarType
  2  /

Type created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          l_data myArrayType;
  3  begin
  4          select myScalarType(empno,ename,job,mgr)
  5            bulk collect
  6            into l_data
  7            from emp;
  8
  9          insert into emp2 (ename,empno)
 10          select x.ename, x.empno
 11            from table( cast( l_data as myArrayType ) ) X;
 12  end;
 13  /

PL/SQL procedure successfully completed.
 

Thanks a lot

Rajeswari, February 03, 2006 - 12:36 am UTC

Tom,

I always thought that I had to use two types: one for the bulk collect and one for the insert. I had restricted my thinking to using %rowtype with a cursor template. Thanks a lot - it helps me to think in different ways. Every time, I get new ideas from you.

This thread is very useful

Murali, April 03, 2006 - 10:42 am UTC

Hi Tom,

You mentioned "insert into tab2 select * from tab1" is fastest.

I am getting different results; I am probably not doing something correctly.


-- This uses insert into select * from
SQL> exec emis_bo.load_equip_static_fast;
PL/SQL procedure successfully completed.

Elapsed: 06:58:20.01
SQL> select count(*) from emis_bo.equip_static_flat_temp;

  COUNT(*)
----------
   2044876

Elapsed: 00:00:07.02

-- This uses for loop and normal insert with APPEND hint
SQL> exec emis_bo.load_equip_static_flat;
PL/SQL procedure successfully completed.

Elapsed: 03:38:20.01
SQL> select count(*) from emis_bo.equip_static_flat;

  COUNT(*)
----------
   2044876

Elapsed: 00:00:07.02


CREATE OR REPLACE PROCEDURE load_equip_static_fast
AS

errstr   VARCHAR2 (1000);

l_start NUMBER(10);

begin

l_start := dbms_utility.get_time;
                   
                       
EXECUTE IMMEDIATE 'TRUNCATE TABLE EMIS_BO.EQUIP_STATIC_FLAT_TEMP';

EXECUTE IMMEDIATE 'TRUNCATE TABLE EMIS_BO.EQUIP_STATIC_ERRORS_TEMP';


INSERT INTO EMIS_BO.EQUIP_STATIC_FLAT_TEMP
SELECT 
        a.equip_unit_init_cd, 
        a.equip_unit_nb, 
        a.equip_group, 
        a.equipment_key,
                 a.equip_eff_date, 
                 a.equip_exp_date, 
                 a.equip_status, 
                 a.dynamic_flag,
                 a.lese, 
                 a.p001, 
                 a.tcgr, 
                 a.tcme, 
                 a.tcmr, 
                 a.tcpc, 
                 a.tcur, 
                 a.umet, 
                 a.ummd, 
                 a.add_ts,
                a.add_userid, 
                a.lst_update_ts, 
                a.lst_update_userid, 
                a.umler_tc,
                 a.umler_tcc, 
                 a.umler_owner, 
                 a.maintenance_party,
                 (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'BLDT' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v1,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'RBDT' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v2,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'RBFL' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v3,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'RCLE' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v4,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'EFDT' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v5,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'AIRD' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v6,
                (select rtrim(element_value) from 
                emis.equip_dynamic@emisprod where element_id  = 'CBSI' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v7,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'ABMD' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v8,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'SBDT' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v9,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'ARSC' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v10,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'SRDT' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v11,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'SRFL' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v12,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'EINN' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v13,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'DDNE' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v14,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'DU13' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v15,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'DU58' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v16,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'DUDL' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v17,
                (select TO_DATE (nvl(trim(element_value),'99991231'), 'YYYYMMDD') from 
                emis.equip_dynamic@emisprod where element_id  = 'DUAI' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v18,
                (select element_value from 
                emis.equip_dynamic@emisprod where element_id  = 'ARCG' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v19,
                (select rtrim(element_value) from 
                emis.equip_dynamic@emisprod where element_id  = 'PRID' and
                equip_unit_init_cd = a.equip_unit_init_cd and equip_unit_nb = a.equip_unit_nb) v20
        FROM emis.equip_static@emisprod a;
        
        
        COMMIT;


dbms_output.put_line( 'That took ' || 
                       (dbms_utility.get_time-l_start) || ' hsecs' ); 

        
EXCEPTION
   WHEN OTHERS
   THEN
      errstr := SUBSTR (SQLERRM, 1, 1000);

      INSERT INTO emis_bo.equip_static_errors_temp
           VALUES ('TEST', 1, 'OUT STATIC', NULL, errstr, SYSDATE);

      COMMIT;

END

   ;
/



Thanks,
Murali 

Tom Kyte
April 04, 2006 - 9:37 am UTC

you give me nothing to compare to

I've no clue what a "normal" insert might mean.

Here is the other procedure

Murali, April 04, 2006 - 6:15 pm UTC

This is faster than the first one I posted. Can you please advise why this works faster than the first one? I appreciate any pointers or advice on writing this procedure in a better way.


CREATE OR REPLACE PROCEDURE load_equip_static_flat
AS
errstr VARCHAR2 (1000);
cnt NUMBER;
v1 DATE;
v2 DATE;
v3 VARCHAR2 (100);
v4 VARCHAR2 (100);
v5 DATE;
v6 VARCHAR2 (100);
v7 VARCHAR2 (100);
v8 VARCHAR2 (100);
v9 DATE;
v10 VARCHAR2 (100);
v11 DATE;
v12 VARCHAR2 (100);
v13 VARCHAR2 (100);
v14 DATE;
v15 DATE;
v16 DATE;
v17 DATE;
v18 DATE;
v19 VARCHAR2 (100);
v20 VARCHAR2 (100);
-- This variable stores the element_ids that were
-- not translated by this procedure
mei VARCHAR2 (100);

CURSOR c
IS
SELECT equip_unit_init_cd, equip_unit_nb, equip_group, equipment_key,
equip_eff_date, equip_exp_date, equip_status, dynamic_flag,
lese, p001, tcgr, tcme, tcmr, tcpc, tcur, umet, ummd, add_ts,
add_userid, lst_update_ts, lst_update_userid, umler_tc,
umler_tcc, umler_owner, maintenance_party
FROM emis.equip_static@emisprod;

CURSOR str (euc IN VARCHAR2, eun IN VARCHAR2)
IS
SELECT element_id a, element_value b
FROM emis.equip_dynamic@emisprod
WHERE equip_unit_init_cd = euc AND equip_unit_nb = eun;
BEGIN
FOR c_rec IN c
LOOP
BEGIN
FOR str_rec IN str (c_rec.equip_unit_init_cd, c_rec.equip_unit_nb)
LOOP
IF str_rec.a = 'BLDT'
THEN
v1 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'RBDT'
THEN
v2 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'RBFL'
THEN
v3 := str_rec.b;
ELSIF str_rec.a = 'RCLE'
THEN
v4 := str_rec.b;
ELSIF str_rec.a = 'EFDT'
THEN
v5 := TO_DATE(nvl(trim(str_rec.b),'99991231'),'YYYYMMDD');
ELSIF str_rec.a = 'AIRD'
THEN
v6 := str_rec.b;
ELSIF str_rec.a = 'CBSI'
THEN
v7 := RTRIM (str_rec.b);
ELSIF str_rec.a = 'ABMD'
THEN
v8 := str_rec.b;
ELSIF str_rec.a = 'SBDT'
THEN
v9 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'ARSC'
THEN
v10 := str_rec.b;
ELSIF str_rec.a = 'SRDT'
THEN
v11 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'SRFL'
THEN
v12 := str_rec.b;
ELSIF str_rec.a = 'EINN'
THEN
v13 := str_rec.b;
ELSIF str_rec.a = 'DDNE'
THEN
v14 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'DU13'
THEN
v15 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'DU58'
THEN
v16 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'DUDL'
THEN
v17 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'DUAI'
THEN
v18 := TO_DATE (nvl(trim(str_rec.b),'99991231'), 'YYYYMMDD');
ELSIF str_rec.a = 'ARCG'
THEN
v19 := str_rec.b;
ELSIF str_rec.a = 'PRID'
THEN
v20 := RTRIM (str_rec.b);
ELSE
mei := str_rec.a;

INSERT INTO emis_bo.equip_static_errors
VALUES (c_rec.equip_unit_init_cd, c_rec.equip_unit_nb,
SUBSTR (mei, 1, 10), NULL, 'I WAS NOT TRANLATED',
SYSDATE);

COMMIT;
END IF;
END LOOP;

EXCEPTION
WHEN OTHERS
THEN
errstr := SUBSTR (SQLERRM, 1, 1000);

INSERT INTO emis_bo.equip_static_errors
VALUES (c_rec.equip_unit_init_cd, c_rec.equip_unit_nb,
'str_rec.a', NULL, errstr, SYSDATE);

COMMIT;

END;

BEGIN
INSERT /*+ APPEND */INTO emis_bo.equip_static_flat
(equip_unit_init_cd, equip_unit_nb,
equip_group, equipment_key,
equip_eff_date, equip_exp_date,
equip_status, dynamic_flag, lessee,
pool_id, "SYSTEM_GENERATED_ROUTING INSTR",
mechanical_restriction, mechanical_restriction_reason,
pool_control, user_reported_routing_instr,
equipment_type_code, mechanical_designation, add_ts,
add_userid, lst_update_ts,
lst_update_userid, umler_tc,
umler_tcc, umler_owner,
maintenance_party, built_date, rebuilt_date,
rebuilt_flag, rcl_equipped_flag,
ADVANCED_INSTALLATION_DATE,
air_dryer_equipped_flag, cab_signal_configuration,
air_brake_model, superstructure_built_date,
autorack_inspection_score,
superstructure_rebuilt_date,
superstructure_rebuilt_flag, equipment_id_number,
fra_drop_dead_date, abt_13_month_due_date,
abt_5_8_year_due_date, door_lube_due_date,
autorack_inspection_due_date,
autorack_inspection_category, prior_equipment_id
)
VALUES (c_rec.equip_unit_init_cd, c_rec.equip_unit_nb,
c_rec.equip_group, c_rec.equipment_key,
c_rec.equip_eff_date, c_rec.equip_exp_date,
c_rec.equip_status, c_rec.dynamic_flag, c_rec.lese,
c_rec.p001, c_rec.tcgr,
c_rec.tcme, c_rec.tcmr,
c_rec.tcpc, c_rec.tcur,
c_rec.umet, c_rec.ummd, c_rec.add_ts,
c_rec.add_userid, c_rec.lst_update_ts,
c_rec.lst_update_userid, c_rec.umler_tc,
c_rec.umler_tcc, c_rec.umler_owner,
c_rec.maintenance_party, v1, v2,
v3, v4,
v5,
v6, v7,
v8, v9,
v10,
v11,
v12, v13,
v14, v15,
v16, v17,
v18,
v19, v20
);
EXCEPTION
WHEN OTHERS
THEN
errstr := SUBSTR (SQLERRM, 1, 1000);

INSERT INTO emis_bo.equip_static_errors
VALUES (c_rec.equip_unit_init_cd, c_rec.equip_unit_nb,
'FROMSTATIC', NULL, errstr, SYSDATE);

COMMIT;


END;

cnt := cnt + 1;

IF cnt > 49999
THEN
COMMIT;
cnt := 0;
END IF;

v1 := NULL;
v2 := NULL;
v3 := NULL;
v4 := NULL;
v5 := NULL;
v6 := NULL;
v7 := NULL;
v8 := NULL;
v9 := NULL;
v10 := NULL;
v11 := NULL;
v12 := NULL;
v13 := NULL;
v14 := NULL;
v15 := NULL;
v16 := NULL;
v17 := NULL;
v18 := NULL;
v19 := NULL;
v20 := NULL;
END LOOP;

COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errstr := SUBSTR (SQLERRM, 1, 1000);

INSERT INTO emis_bo.equip_static_errors
VALUES ('TEST', 1, 'OUT STATIC', NULL, errstr, SYSDATE);

COMMIT;

END

;
/


Tom Kyte
April 05, 2006 - 5:28 pm UTC

I don't know about you - but we appear to be looking at two routines that don't do the same things at all.


you do know though that

insert /*+ APPEND */ into t ( .... ) VALUES (.....);
           ^^^^^^                    ^^^^^^

means you might not understand when and where the append hint can be applied. It (thankfully) will never work with VALUES.




Thank you for your response!

Murali, April 06, 2006 - 6:19 pm UTC

But I did not fully understand what you were trying to say. Do you mean that I should not be using the APPEND hint when I am using the VALUES clause? I have seen examples on the web using the APPEND hint with the VALUES clause.

I was also not sure how you concluded that the two procedures I posted are functionally different. The function of both procedures is to get values from tab2 (for the records matching tab1) and decode them, to insert them along with the tab1 values into a new table.

I know you have tonnes of threads to answer and I am thankful that you have been very patient in answering my earlier questions. I request you to elaborate your answer one more time.

Thanks.

Tom Kyte
April 08, 2006 - 8:30 am UTC

I mean if you use the /*+ APPEND */ hint like this:


insert /*+ APPEND */ into t (c1,c2) VALUES (x,y);
           ^^^^^^                   ^^^^^^

it only shows that you don't understand when and where APPEND can be used. APPEND works ONLY with:

insert /*+ APPEND */ into t(c1,c2) SELECT .....;
           ^^^^^^                  ^^^^^^

it will not work with VALUES.



I see no truncates or anything in the other procedure, hence - therefore - they are very different.

BULK INSERT

Rajesh, April 11, 2006 - 6:16 am UTC

I have a package which contains procedures to
1. Open and read the file
2. Parse each record in the file and form insert string dynamically.
3. Do Data type validations on the data read.
4. Insert the data into the table.

I have 10,000 records in the file, and it is taking 20 minutes to upload the data into the table. I am using a normal SQL insert statement to insert the data into the table.

The table into which I am inserting the records contains 100 fields, and the insert string will use around 50 to 60 of them. These fields change for each file, so I was not able to use a bulk insert, as I don't know beforehand which fields will be used in the insert string.

Now I have to improve the performance. What is the solution?


Tom Kyte
April 11, 2006 - 2:30 pm UTC

the solution is to.....

figure out where the time is being spent....

so you can concentrate on that.


tools you can use:

a) sql-trace + tkprof (is it slow sql)
b) dbms_profiler (where am I spending cycles in plsql)


make sure that the dynamic insert is using BIND VARIABLES, or 90% (or more) of your insert time is probably being spent parsing
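
For example - just a sketch, with made-up table and column names - build the statement text once with placeholders and bind the values on each execution, rather than concatenating them into the string:

declare
    l_sql  varchar2(200);
    l_code number       := 42;
    l_name varchar2(30) := 'example';
begin
    -- one distinct statement text = one parse, no matter how many rows
    l_sql := 'insert into some_table( code, name ) values ( :1, :2 )';
    execute immediate l_sql using l_code, l_name;
end;
/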

BULK INSERT

Rajesh, April 11, 2006 - 6:58 am UTC

There is no database interaction after opening the file other than the insert statement. The static data used to parse the records is stored in a PL/SQL table.

For the problem mentioned above

A reader, May 17, 2006 - 10:02 am UTC

Tom,

You said:
insert /*+ APPEND */ into t ( .... ) VALUES (.....);
           ^^^^^^                    ^^^^^^

means you might not understand when and where the append hint can be applied.
It (thankfully) will never work with VALUES.


I just had a conversation with another DBA, over a package that's been running here since probably before electricity, and that uses just such a statement. I pointed out that the APPEND hint is useless in such a statement, and that the statement should be rewritten as INSERT AS SELECT for the hint to work.

The other DBA agreed that as it is written now, it would still generate redo logs, but that perhaps the APPEND hint in this context might still save the INSERT from having to do a full table scan to find free blocks, and instead insert above the HWM. Does his theory make sense? Would there be any significant performance gain from this, as he suggested?

The statement can't be rewritten as an INSERT AS SELECT statement, so it needs to stay the way it is. But is there another way to force nologging?

Tom Kyte
May 17, 2006 - 11:42 pm UTC

false, append is only used to write above the high water mark with insert as select.

with values, it is "not meaningful" at all.


you DON'T WANT it to be an append, not with a single row, that would just be "silly" (you would get ONE ROW PER BLOCK!!! what a waste)

not only that, but you would SERIALIZE.


And - by the way - it does not bypass redo, it bypasses undo, and then only for the table (never the indexes).... NOLOGGING mode tables (or noarchivelog mode databases) could bypass redo for the TABLE (but never the indexes)

if you are doing single row inserts, just be hugely grateful this 'hint' is utterly ignored.

Del/Ins vs Merge & Bulk Merge

Ashutosh, October 21, 2006 - 9:06 am UTC

Tom,
I have a particular scenario where I delete from one table the records which were modified today in a second table. While doing this, I insert newly created records into the first table. I would like to know which is the faster way of doing this: DELETE/INSERT or MERGE?

When I insert into this table, I use the FORALL-INSERT mechanism (I insert processed records from a PL/SQL table). Could the same be done for merge too, i.e. FORALL-MERGE?


Tom Kyte
October 21, 2006 - 11:09 am UTC

I'm not understanding the flow at all here.


"delete records in T1 which are modified in T2"
"insert newly created records into T1" (assume that T1 is the first table)

where do the newly created records come from exactly?


but it sounds like two sql statements:

a) delete
b) insert

I don't see the use of forall?

Explained little more

Ashutosh, October 29, 2006 - 9:10 am UTC

I have a customer database which is getting updated in an OLTP/online environment.

I have another application which caters to reporting and has a DW kind of setup.

I pick up all the changed records for the day from the customer database and put them into the DW application. I do the following:

1. Pick all changed records from the customer table with the condition trunc(maker_dt_stamp) = TODAY. I do not have a function-based index on this table.
Q. If I create one, it will certainly help my DW application; but will it have any impact on the online application?
2. I delete from the DW application all the related records which I have found to be changed in the customer database.
3. While doing some processing in the DW application, I store all the records in a PL/SQL table, and at the end I bulk-insert all those records into the DW customer table.
Q. Here comes my question: first I delete all the changed records from the DW customer table, and then I insert again. The data includes some new records and some updated records. Surely a merge would help here, instead of DELETE-INSERT?
Q. Can I use merge in the same fashion as I am using FORALL and a PL/SQL table to bulk insert records into the table? I am not sure if merge supports this.

Please clarify the same.

Tom Kyte
October 29, 2006 - 9:44 am UTC

1) if you have the date indexed, just

where maker_dt_stamp between trunc(sysdate) and trunc(sysdate)+1-1/24/60/60


you need no function based index.

but yes, if you ADD an index, that will obviously impact the transactional system.


3) merge would likely be better, yes. but you would NOT fetch and forall merge, you use MERGE INSTEAD OF SLOW BY SLOW PROCEDURAL CODE.

that is, erase your code, just use merge
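
A sketch of what that could look like - the table and column names here are invented, and the USING subquery picks up the day's changes per point 1 above:

merge into dw_customer d
using ( select customer_id, cust_name
          from customer
         where maker_dt_stamp >= trunc(sysdate)
           and maker_dt_stamp <  trunc(sysdate)+1 ) s
   on ( d.customer_id = s.customer_id )
 when matched then
    update set d.cust_name = s.cust_name
 when not matched then
    insert ( customer_id, cust_name )
    values ( s.customer_id, s.cust_name );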

Bulk insert Millions of rows

Maverick, December 01, 2006 - 3:46 pm UTC

I have to insert around 23 million+ rows [cumulative] every quarter into a table. What would be my best option to do this? [My source table is an external file]

This table is partitioned by Year.

1)Bulk collect Limit to 1000 [pl/sql]
2)Direct load like Insert..Select * from Source.. [SQL]
3)MERGE in a pl/sql

Since it's cumulative, the 2nd option could work, but I have to truncate the table every time I do this. Will there be any issues with that?

Thanks,


Tom Kyte
December 01, 2006 - 9:55 pm UTC

1) would not make sense, you don't need code
3) would not make sense, you want to insert - not merge

therefore 2)

else, you need to define "cumulative"


could be that the answer is:

a) create table as select in parallel, nologging
b) index it
c) drop old
d) rename new
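
In SQL, those four steps might look like this (a sketch - the names and degree of parallelism are invented):

create table big_t_new parallel 4 nologging
as
select * from external_stage_table;        -- a) load the quarterly data

create index big_t_new_idx on big_t_new(id)
  parallel 4 nologging;                    -- b) index it

drop table big_t;                          -- c) drop old
alter table big_t_new rename to big_t;     -- d) rename new (rename any
                                           --    indexes/constraints too)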

Bulk Insert

Maverick, December 02, 2006 - 10:21 am UTC

I think the steps you mentioned make sense. I will follow them. But just thinking: in case the data needs to be checked - if it exists, update, else insert [considering it is not cumulative] - what is the best approach? Our users are currently sending us files with that much data [20 million+ rows], and currently they made it cumulative [all the data every time], but that might change in the future: they might send only incremental data, and might want us to check it before inserting.

Thanks for your help

Tom Kyte
December 02, 2006 - 1:15 pm UTC

...
if exists update else Insert
[considering not cumulative] what is the best approach....

that doesn't make total sense as stated, but I believe you might have said "I want to merge" - not sure what cumulative means to you, not sure what "data needs to be checked" means to you.




bulk collect into twice

oracleo, December 08, 2006 - 11:17 am UTC

Can we do a bulk collect into the same collection twice?

For instance,

select name bulk collect into arr_name from people;

..
...
...
some code
..
..

now again,
select name bulk collect into arr_name from people1;

-- people1 is another table... the records should be appended to the previous collection.

How can I achieve this?

Tom Kyte
December 09, 2006 - 12:31 pm UTC

no, you would use "UNION ALL" in one query - or two separate collections.
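
Using the tables from the question, the one-query version would look something like this (a sketch, assuming NAME is a varchar2):

declare
    type name_array is table of varchar2(100) index by binary_integer;
    arr_name name_array;
begin
    select name
      bulk collect into arr_name
      from ( select name from people
             union all
             select name from people1 );
    dbms_output.put_line( arr_name.count || ' names fetched' );
end;
/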

Which is faster?

Shaji, December 08, 2006 - 2:50 pm UTC

Hi Tom,
I have had a hard time convincing my friend that a direct insert is faster than a bulk insert. Here is the scenario: we need to insert some thousands of records based on some query condition, so I am inserting directly into the table like this:

insert into tableX
(col1, col2......)
select col1,col2,........
from t1, t2
where ----
---

This insert takes some 9 seconds to insert those thousands of records. My colleague says that instead of this, we need to put the rows into a PL/SQL table and then do a bulk insert into the database table, and he assumes that his method will be faster. How do I convince him that a direct insert is faster than any other method?

Tom Kyte
December 09, 2006 - 2:38 pm UTC

code it up.

and also - compare the code and say

"dude, which is easier to code, maintain, debug"

it'll be NO CONTEST.


by the way, just to avoid confusion - you are not doing a "direct PATH insert", you are just following a really good mantra

o if you can do it in a single SQL statement - do it.




Is there a better method?

A reader, December 14, 2006 - 2:31 pm UTC

Hi,

I recognise that "INSERT ... SELECT" is (usually) the best method, but I want to change one of the values I am inserting from the original value. I want to avoid having a list of columns in my "INSERT ... SELECT" by doing a "SELECT *" - so if in the future the table has columns added or dropped my stored procedure will just need re-compiling rather than amending.

At the moment I am having to use an intermediate PL/SQL table.

Eg:

<snip>

TYPE t_mytab IS TABLE OF mytab%ROWTYPE
   INDEX BY BINARY_INTEGER;

v_mytab t_mytab;   -- was "v_smytab"; the name must match its later uses

BEGIN

   SELECT *
     BULK COLLECT INTO v_mytab
     FROM mytab
    WHERE ...

   FOR l_index IN 1 .. v_mytab.COUNT
   LOOP
      v_mytab(l_index).code := i_code;
   END LOOP;

   FORALL l_index IN 1 .. v_mytab.COUNT
      INSERT INTO mytab
      VALUES v_mytab(l_index);



The above example works fine - but I am stuck with having to SELECT the data into the intermediate PL/SQL table - just so I can change the value of the code column before I INSERT the rows back into the database.

Is there any way that I can avoid this (Eg: in a SELECT ... INSERT construct) without having to name all the columns explicitly?

We are currently using 9.2.0.5. I think when we migrate to 10g I could use the MODEL clause to alter the result set? At the moment that is not an option though.

By the way - if I omit the "INDEX BY BINARY_INTEGER" clause everything compiles and still works the same. I presume that if I do this I would be using a NESTED table rather than using an "INDEX BY" table? Is there any difference / benefit / overhead in which definition I use?

Thanks.


Tom Kyte
December 15, 2006 - 8:30 am UTC

just do it correctly please - code the column names. sometimes life is just what life is.

Is there a better method? Follow up

A reader, December 16, 2006 - 11:54 am UTC

OK, fair enough - "sometimes life is just what life is" - but it never hurts to ask.

By the way, you didn't comment on my last paragraph. When you do have a (valid) need to use a table in PL/SQL does omitting "INDEX BY BINARY_INTEGER" make any difference?

Thanks.

Tom Kyte
December 17, 2006 - 10:01 am UTC

in this particular case - since you bulk collect and then process without adding any new entries - they are functionally equivalent.

the index by binary integer table is "sparse" (you can have array(1) and array(10000000) filled in without the intervening elements) whereas the collections are "dense" and you typically have to use "extend" on them to make them grow

hence, I tend to typically use "index by" plsql tables...
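
A tiny sketch of that difference:

declare
    type assoc_t  is table of number index by binary_integer; -- "index by": sparse
    type nested_t is table of number;                         -- nested: dense
    a assoc_t;
    n nested_t := nested_t();
begin
    a(1)        := 10;   -- any index can be assigned directly
    a(10000000) := 20;   -- only two elements actually exist
    n.extend;            -- a nested table must be extended first
    n(1) := 10;
    dbms_output.put_line( a.count || ' / ' || n.count );  -- prints: 2 / 1
end;
/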

Sparse / Dense arrays

A reader, December 17, 2006 - 12:08 pm UTC

Thanks for reminding me - doh!

Select Into Schema level object type

Larry Smith, January 17, 2007 - 3:54 pm UTC

Hi Tom,

At the start of this thread you wrote the code below, inserting records into a schema-defined object type and explicitly fetching into that object with "OPEN...FETCH...CLOSE". My question is: can you do the same using an implicit "SELECT myScalarType(...) INTO l_data" cursor? If so, how? I tried it and got the error "inconsistent datatypes: expected myScalarType got myArrayType".

declare
2 cursor c is select myScalarType( ename, hiredate, sal ) from emp;
3 l_data myArrayType;
4 begin
5
6 open c;
7 loop
8 l_data := myArrayType();
9 fetch c bulk collect into l_data limit 6;
10 dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
11
12 for i in 1 .. l_data.count
13 loop
14 l_data(i).ename := 'X' || l_data(i).ename;
15 end loop;
16
17 insert into emp2 ( ename, hiredate, sal )
18 select * from TABLE( cast (l_data as myArrayType) );
19
20 exit when c%notfound;
21 end loop;
22 close c;
23 end;
24 /

Select Into Schema level object type

Larry, January 19, 2007 - 9:48 pm UTC

Actually, what I meant to ask is can you SELECT INTO the object variable without performing a BULK COLLECT into it using the surrounding loop?

i.e.,

LOOP
SELECT ...
INTO ...
FROM SOME_OBJECT_TYPE
END LOOP;

Resource problem

Eddy Mak, March 06, 2007 - 8:54 pm UTC

Hi Tom,

I want to insert millions of records into a table. I would like to know the pros and cons of using the INSERT INTO ... SELECT ... and bulk collect methods. Which one is better? What about the resources needed, such as the TEMP and UNDO tablespaces, or RAM? Many thanks.
Tom Kyte
March 07, 2007 - 10:09 am UTC

well, a single sql statement will be faster than procedural code, generate less redo, generate less undo, it will impact any existing indexes in the least.

bulk collect means you have to write, debug, maintain code that will run slower, generate more redo, generate more undo, and will cause the most work to be performed on the indexes.



Wrong No of Records Using Package

AkS, May 21, 2007 - 11:14 am UTC

Hi Tom,

Sorry, I don't know if this should be a new question or not.

I have got unique problem.

I am running on
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for Linux: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production

A piece of code
insert into Table ( select C1, C2 from T1, T2 where ..);
inserts more records when run from a package procedure than when the same insert code is run from SQL*Plus.

Can you please suggest where I should look?

Thanks
Anup

Tom Kyte
May 21, 2007 - 12:27 pm UTC

are t1, t2, .... dictionary views by any chance?


Forall issue

yoav ben moha, May 01, 2008 - 5:15 pm UTC

Hi Tom,
version 10.2.0.3
I have 10 tables that I need to copy to another 10 tables named xxx_hist.
After the data moves to the xxx_hist tables, the records that were copied should be deleted from the source tables.
One of these tables is the parent table (containing 10 million rows); the rest are the child tables.
Here is an example of copying just 3 tables.
Does this approach look OK, or could you suggest a better way?

Declare
type array is table of Varchar(20) index by binary_integer ;
data array;

Cursor c IS
SELECT INST_PROD_ID
FROM source
WHERE Status = 1
and rownum < 100001 ;

Begin
Open c;
Loop
Fetch c Bulk Collect Into Data Limit 10000;

ForAll I In 1..Data.Count
Insert Into test_HIST
Select * from test
Where Inst_prod_id=data(i);

ForAll I In 1..Data.Count
Insert Into test2_HIST
Select * from test2
Where Inst_prod_id=data(i);

ForAll I In 1..Data.Count
Insert Into test3_HIST
Select * from test3
Where Inst_prod_id=data(i);

exit When c%notFound;
End Loop;
Close c;
commit;
End;

Tom Kyte
May 01, 2008 - 9:46 pm UTC

you are sort of getting 100,000 random records there - what is the logic behind that?

Forall Issue

yoav ben moha, May 02, 2008 - 4:28 am UTC

Hi Tom,
you are sort of getting 100,000 random records there - what is the logic behind that? ==>
I added this only for testing purposes.
The parent table contains 10,000,000 rows.
In order to check the functionality I limited it to 100,000 rows.
My question is whether this approach looks OK, or could you kindly suggest a better way?
Thanks.


Tom Kyte
May 02, 2008 - 7:20 am UTC

why are you writing procedural code

why not just

Insert Into test_HIST
Select * from test
 Where Inst_prod_id in (    SELECT INST_PROD_ID
  FROM source WHERE  Status = 1 )

Insert Into test_HIST2
Select * from test2
 Where Inst_prod_id in (    SELECT INST_PROD_ID
  FROM source WHERE  Status = 1 )

Insert Into test_HIST3
Select * from test3
 Where Inst_prod_id in (    SELECT INST_PROD_ID
  FROM source WHERE  Status = 1 )



Insert All is better for single table insert too

Nikhilesh, May 14, 2008 - 1:08 am UTC

Dear Tom,
 I was just trying out "insert all" and found that it gives better performance for single-table inserts too.

My findings:
1) for 100K+ records, the plain insert worked better
2) for 200K+ records, "insert all" worked very well
3) for 400K+ records, "insert all" was better

Could you please explain whether there is really any performance benefit to using "insert all" over "insert"?

Thanks in advance.

I hate uploading lots of code and wasting your time, but this time I couldn't avoid it.

Here are my test cases:

SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 14 10:01:20 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> set time on
10:01:25 SQL> set timing on
10:01:30 SQL> create table big_table /*+ parallel(degree 4) */ as (select * from all_objects);

Table created.

Elapsed: 00:00:32.23
10:02:42 SQL> insert all into big_table (select * from all_objects);
insert all into big_table (select * from all_objects)
                           *
ERROR at line 1:
ORA-01747: invalid user.table.column, table.column, or column specification


Elapsed: 00:00:00.03
10:03:13 SQL> ed
Wrote file afiedt.buf

  1* insert all into big_table select * from all_objects
10:04:03 SQL> /

40701 rows created.

Elapsed: 00:00:09.62
10:04:13 SQL> insert into big_table (select * from all_objects);

40701 rows created.

Elapsed: 00:00:07.46
10:04:55 SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
10:05:15 SQL> create table t1 as (select * from all_objects where 1=2 and rownum=1);

Table created.

Elapsed: 00:00:00.18
10:05:44 SQL> ed
Wrote file afiedt.buf

  1* create table t2 as (select * from all_objects where 1=2 and rownum=1)
10:06:02 SQL> /

Table created.

Elapsed: 00:00:00.39
10:06:04 SQL> insert into t1 (select * from big_table);

122105 rows created.

Elapsed: 00:00:10.68
10:06:35 SQL> insert all into t2 select * from big_table;

122105 rows created.

Elapsed: 00:00:12.64
10:07:03 SQL> truncate table t1;

Table truncated.

Elapsed: 00:00:00.87
10:09:45 SQL> truncate table t2;

Table truncated.

Elapsed: 00:00:00.51
10:09:51 SQL> insert /*+ append */ into big_table (select * from big_table);

122105 rows created.

Elapsed: 00:00:00.68
10:10:19 SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
10:10:24 SQL> select count(*) from big_table;

  COUNT(*)
----------
    244210

Elapsed: 00:00:00.56
10:10:42 SQL> insert all into t1 select * from big_table;

244210 rows created.

Elapsed: 00:00:06.92
10:11:06 SQL> insert into t2 (select * from big_table);

244210 rows created.

Elapsed: 00:00:13.07
10:12:43 SQL> truncate table t1;

Table truncated.

Elapsed: 00:00:00.06
10:14:47 SQL> truncate table t2;

Table truncated.

Elapsed: 00:00:00.64
10:14:53 SQL> insert /*+ append */ into big table select * from big_table;
insert /*+ append */ into big table select * from big_table
                              *
ERROR at line 1:
ORA-00926: missing VALUES keyword


Elapsed: 00:00:00.01
10:15:21 SQL> ed
Wrote file afiedt.buf

  1* insert /*+ append */ into big_table (select * from big_table)
10:15:33 SQL> /

244210 rows created.

Elapsed: 00:00:00.90
10:15:35 SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
10:15:46 SQL> select count(*) from big_table;

  COUNT(*)
----------
    488420

Elapsed: 00:00:00.90
10:15:54 SQL> insert all into t1 select * from big_table;

488420 rows created.

Elapsed: 00:00:12.56
10:16:29 SQL> insert into t2 (select * from big_table);

488420 rows created.

Elapsed: 00:00:18.99
10:17:12 SQL> 

Tom Kyte
May 14, 2008 - 11:09 am UTC

I would call that test rather inconclusive

you are comparing elapsed times from sqlplus - for a single run - it is more than likely the first insert all filled many blocks in the cache and the second one has to wait for dbwr to flush them, or we had a checkpoint not complete, cannot allocate new log, or something.


use tkprof, so you can analyze what happened, what was waited on, cpu times, work performed:

insert into t1 select * from big_table.big_table where rownum <= 500000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      2.35       4.75         32      20681      68766      500000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.36       4.76         32      20681      68766      500000

Rows     Row Source Operation
-------  ---------------------------------------------------
 500000  COUNT STOPKEY (cr=7230 pr=0 pw=0 time=1500150 us)
 500000   TABLE ACCESS FULL BIG_TABLE (cr=7230 pr=0 pw=0 time=507283 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        32        0.01          0.01
  rdbms ipc reply                               255        0.43          1.09
  log file switch completion                      5        0.97          1.17
  log file switch (checkpoint incomplete)         2        0.07          0.13
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************
insert all into t2 select * from big_table.big_table where rownum <= 500000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      2.25       8.87         33      20704      69217      500000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.26       8.88         33      20704      69217      500000

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  MULTI-TABLE INSERT  (cr=474753 pr=80 pw=0 time=11137835 us)
 500000   VIEW  (cr=7225 pr=0 pw=0 time=2500089 us)
 500000    COUNT STOPKEY (cr=7225 pr=0 pw=0 time=1500083 us)
 500000     TABLE ACCESS FULL BIG_TABLE (cr=7225 pr=0 pw=0 time=500074 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        33        0.02          0.06
  rdbms ipc reply                               281        0.85          6.17
  log file switch completion                      3        0.23          0.41
  log file sync                                   1        0.02          0.02
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00





Now they look pretty much the same - run them again, and maybe insert all will best insert elapsed time wise (depends entirely on the WAITS) and vice versa

Selecting 50 columns into a new table

G.BADARIVISHAL, May 14, 2008 - 12:21 pm UTC

Dear Mr. Sam,

In xBase we can create the new table with the required fields, whose names are the same as in the original table (which has many more columns), and the following command will bring the data for the required columns into the new table. In fact, we need not create all the new fields by hand: we can copy the entire structure to the new table and delete the unwanted fields in the new table:

code:

use <table> name of the new table
appe from <table> name of the original table

The new table is populated with data in the required 50 fields.

Why is it so cumbersome in PL/SQL, or as we move up from xBase?

Thanks,

14-05-2008


Tom Kyte
May 14, 2008 - 3:58 pm UTC

well, I'm pretty sure there are one or two things you can do in Oracle that you cannot even imagine doing in xbase, so I'm not really overly concerned that xbase has one feature you like that we don't - not every product has every other product's feature set.

The copying of 50 columns out of 50+N columns from one table to another - that sets off SO MANY RED FLAGS. It is not the way you really process data, you don't copy stuff like that - what is the real world use case here - what is your goal - we'll tell you how to do that correctly.

Who is Mr. Sam ?

A reader, May 15, 2008 - 11:07 am UTC


CAST'ing is very slow

A reader, October 06, 2008 - 2:24 pm UTC

I am accepting an array of values from .NET. I take that array and do an insert..select..from cast. I am pulling the data from a remote DB via db link.

When I run the query as an IN statement without the CAST there is no problem using the exact same value. When I cast the array as a table it takes over 1 minute to insert 1 row. I've attached my statement although I don't think it'll help you much. I can't decipher why this would take so much longer casting the array as a table vs. an IN statement with a hard-coded value.

Insight would be appreciated.


INSERT INTO ta_edi_provider_temp
SELECT prpr_name "ProviderName", prad_addr1 "Address",
prad_addr2 "Address2", prad_city "City",
prad_state "State", prad_zip "Zipcode",
prad_ctry_cd "Country", mctn_id "EINNumber",
prpr_id "ProviderID"
FROM facetsdb.cmc_prpr_prov@testauto_devh_fsupp1_link pr,
facetsdb.cmc_prad_address@testauto_devh_fsupp1_link prad
WHERE prpr_id IN (SELECT provider_id
FROM TABLE (CAST (edi_prv AS edi_prv_type)))
AND prad.prad_id = pr.prad_id
AND prad.prad_type = pr.prad_type_check;

A reader, October 15, 2008 - 4:12 am UTC

Hi Tom,

I have a table which has 90 million records in it. I want to migrate this table from one platform to another. I tried using a direct insert, i.e. insert /*+ APPEND */ into t select * from t, but the query takes way too long (almost 24 hrs) to insert the data. Can you please provide me the code for the following:

how to break this table of 90 million records into small chunks (10 million at a time) and then migrate it using pl/sql code (cursors etc.)?

Thanks for your help


Tom Kyte
October 15, 2008 - 5:30 pm UTC

why not cross platform transport it? Almost instantaneous.



http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:10498431232211
shows how to get rowid ranges for a table.


and remember - only a single session can insert /*+ APPEND */ at a time, so if you do your "do it yourself parallel" approach - you will not use append.


I'd recommend a cross platform transport.
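A minimal sketch of the "do it yourself parallel" chunking, if you go that route anyway - split the table into N rowid ranges with ntile() and let each session copy one range (t and new_t are placeholder names):

select grp,
       min(rid) lo_rid,
       max(rid) hi_rid
  from (select rowid rid, ntile(10) over (order by rowid) grp
          from t)
 group by grp;

-- each session then copies its own slice (conventional path, per the above):
-- insert into new_t select * from t where rowid between :lo_rid and :hi_rid;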

Bulk Collect Question

Chris Seymour, October 21, 2008 - 5:37 pm UTC

Tom,
Wanted advice on something for bulk collect. Hopefully not a stupid question.

Since the "older" versions of Oracle have the implementation restriction on referencing elements of a collection, I typically use a few separate scalar arrays instead of one collection with %rowtype (especially if I know I'll be updating and have to reference a specific element).

Let's say you have your main procedure where you do the bulk collect, but want to pass the arrays to another procedure to do the processing. Would you pass all the individual arrays as parameters, or would you condense them into one array for passing, and back out when you do a forall?

For example...
declare
  v_emp  number_table; --assume this already is a type
  v_dept number_table;
  v_sal  number_table;

  cursor c_emps is
    select empno,
           deptno,
           sal
      from emp;
begin
  open c_emps;
  loop
  fetch c_emps bulk collect into
  v_emp, v_dept, v_sal
  limit 100;
  
  --...do some processing here before updating
    forall i in 1..v_emp.count 
     update emp
        set sal = v_sal(i),
            deptno = v_dept(i)
      where empno = v_emp(i);
  exit when c_emps%notfound;
  end loop;
  close c_emps;
end;


In the code above, where it gets to the part of the processing that I'm not showing here, would you send in all 3 arrays separately like:
process_records(v_emp, v_dept, v_sal);


Or would you first condense those 3 into a single collection using the same index and pass in just the one collection?

The reason I ask is that all the examples I see are never "real world"-like, where you need to work with 8 or 9 columns and actually need to write the code procedurally. So I'm curious about the best way to pass these arrays to other procedures.

Any advice? And yes you can assume that this is too cumbersome to do with pure sql and need to use procedural code.
Tom Kyte
October 22, 2008 - 8:21 am UTC

I'd almost certainly pass the three individual arrays - I don't see any need to copy the data out into a new structure (double the data, extra move) and then copy it back out again.

I've never been afraid of formal named parameters - as many as it takes. Note that you can also use the NOCOPY hint on these guys if appropriate (and if you understand the implications - which are very much implications...)
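A minimal sketch of what that could look like - emp_proc_pkg and process_records are placeholder names, and NOCOPY is only legal on OUT / IN OUT parameters:

create or replace package emp_proc_pkg
as
    type number_table is table of number index by binary_integer;

    procedure process_records(
        p_emp  in out nocopy number_table,
        p_dept in out nocopy number_table,
        p_sal  in out nocopy number_table );
end;
/
create or replace package body emp_proc_pkg
as
    procedure process_records(
        p_emp  in out nocopy number_table,
        p_dept in out nocopy number_table,
        p_sal  in out nocopy number_table )
    as
    begin
        for i in 1 .. p_emp.count
        loop
            -- the "...do some processing here..." step from the example
            p_sal(i) := p_sal(i) * 1.1;
        end loop;
    end;
end;
/

NOCOPY asks PL/SQL to pass the collections by reference rather than copying them in and out; the implication is that if the procedure raises an unhandled exception, the caller's arrays may be left partially modified.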

Passing table type object as parameter to stored procedure

rk, March 19, 2009 - 1:47 pm UTC

I have created a stored procedure that accepts a table type object as an input variable. How can I pass a table type object to the stored procedure?


declare
type tabtype is table of number;
v_num tabtype:=tabtype(1,2,3,4);
begin
tablestoredProcedure(v_num);--????
end;

Tom Kyte
March 23, 2009 - 10:10 am UTC

you would NOT create a new type. You say "i have a table TYPE as input", to me that means SQL type:

ops$tkyte%ORA10GR2> create or replace type tabType as table of number
  2  /

Type created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p( p_input in tabType )
  2  as
  3  begin
  4          for i in 1 .. p_input.count
  5          loop
  6                  dbms_output.put_line( 'array('||i||') = ' || p_input(i) );
  7          end loop;
  8  end;
  9  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_data tabType := tabType(1,2,3,42,55);
  3  begin
  4          p(l_data);
  5  end;
  6  /
array(1) = 1
array(2) = 2
array(3) = 3
array(4) = 42
array(5) = 55

PL/SQL procedure successfully completed.


but if you really mean a plsql type defined in a package, you would use package.typename:
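(The package specification is not shown below; for the body and the anonymous block to compile, it would have to declare the type and the procedure publicly - presumably along these lines:)

create or replace package my_pkg
as
    -- public, so callers can reference MY_PKG.PRIVtabType
    type privTabType is table of number;

    procedure p( p_input in privTabType );
end;
/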

ops$tkyte%ORA10GR2> create or replace package body my_pkg
  2  as
  3  procedure p( p_input in privTabType )
  4  as
  5  begin
  6          for i in 1 .. p_input.count
  7          loop
  8                  dbms_output.put_line( 'array('||i||') = ' || p_input(i) );
  9          end loop;
 10  end;
 11  end;
 12  /

Package body created.

ops$tkyte%ORA10GR2> declare
  2          l_data MY_PKG.PRIVtabType := MY_PKG.PRIVtabType(1,2,3,42,55);
  3  begin
  4          my_pkg.p(l_data);
  5  end;
  6  /
array(1) = 1
array(2) = 2
array(3) = 3
array(4) = 42
array(5) = 55

PL/SQL procedure successfully completed.


A reader, March 30, 2009 - 6:12 am UTC

I have a master-detail insertion screen. Take for example DEPT and EMP of the scott schema. Many employees can be entered against one department. How do I use arrays to process it? A procedure will be created with Department and Location as input arrays, and then all the columns of emp will also be passed as input arrays. How do we link which employee belongs to which department? Should the department array repeat for each emp? It would be a great help if you provide an example.


Tom Kyte
March 30, 2009 - 4:29 pm UTC

typically in master detail you have

ONE master record on screen
MANY detail records on screen


so, I do not know where this array of departments fits into the equation.

A reader, March 31, 2009 - 6:03 am UTC

You are right that there is one Master and multiple Detail records. But in one screen we have a selection where you can select multiple Master records and multiple Detail records for each. Once the user completes this selection, a single call goes to the DB to persist these records.
Tom Kyte
March 31, 2009 - 8:17 am UTC

then, in your programming language of choice, you will have some complex data structure and you will likely call a stored procedure multiple times - passing it a dept + array of emp data.

And then after you have iterated over that, your program written in your language of choice will call "commit"
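A minimal sketch of such a procedure, assuming scott's DEPT/EMP tables and hypothetical emp_loader/save_dept names - one master per call, plus the arrays of its detail rows:

create or replace package emp_loader
as
    type numtab  is table of number       index by binary_integer;
    type nametab is table of varchar2(10) index by binary_integer;

    procedure save_dept( p_deptno in number,
                         p_dname  in varchar2,
                         p_loc    in varchar2,
                         p_empnos in numtab,
                         p_enames in nametab );
end;
/
create or replace package body emp_loader
as
    procedure save_dept( p_deptno in number,
                         p_dname  in varchar2,
                         p_loc    in varchar2,
                         p_empnos in numtab,
                         p_enames in nametab )
    as
    begin
        -- one master row ...
        insert into dept( deptno, dname, loc )
        values ( p_deptno, p_dname, p_loc );

        -- ... then all of its detail rows in one bulk statement
        forall i in 1 .. p_empnos.count
            insert into emp( empno, ename, deptno )
            values ( p_empnos(i), p_enames(i), p_deptno );
    end;
end;
/

The client iterates over its data structure calling save_dept once per master, then commits at the end.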


Reader, June 16, 2009 - 4:04 pm UTC

Tom,
I inserted records into the tst table using the APPEND hint and queried the table for the count before committing. I got the below error. Can you please explain the cause of this error?

 1  INSERT  /*+ APPEND */ INTO tst
  2  select *
  3*  from tmp partition (TMP2004) 
SQL> /

SQL> select count(*) from tst;
select count(*) from tst
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

Tom Kyte
June 17, 2009 - 11:13 am UTC

append is a direct path operation (this 'error' proves it was direct pathed). The error message is a bit misleading, but basically - you cannot query something in your session after you direct path (bypass buffer cache, do not generate undo) load it - until you commit.

Modify = a "bulk" (direct path) load of records - even just one row loaded that way, as opposed to a conventional-path load of one record.

Duke Ganote, June 17, 2009 - 12:41 pm UTC

SQL> create table t ( t number );

Table created.

SQL> insert /*+ APPEND */ into t select 1 from dual;

1 row created.

SQL> select count(*) from t;
select count(*) from t
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL> commit;

Commit complete.

SQL> select count(*) from t;

            COUNT(*)
--------------------
                   1

SQL> insert /*+ APPEND */ into t values (  2 ); -- HINT ignored!

1 row created.

SQL> select count(*) from t;

            COUNT(*)
--------------------
                   2


Tom Kyte
June 17, 2009 - 3:22 pm UTC

unless you are in 11gR1, in which case a VALUES insert does append too - until 11gR2, when they backed that out and made it a separate hint to have VALUES inserts use append.
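For reference, that separate hint in 11gR2 is APPEND_VALUES - a minimal sketch:

insert /*+ APPEND_VALUES */ into t values ( 2 );

Like APPEND, it direct-paths the load, so the table becomes unreadable in your session (ORA-12838) until you commit.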

insert into table using forall

A reader, August 03, 2009 - 12:10 am UTC

Hi Tom,

I have written following procedure to insert data into a table but getting following error while i am using FORALL,
please can you explain where i am getting wrong?

DECLARE

TYPE EMP_RECORD IS RECORD
(
EMP_P_ID EMP.P_ID%TYPE,
EMP_FIRSTNAME EMP.FIRSTNAME%TYPE,
EMP_ADDRESS EMP.ADDRESS%TYPE
);


TYPE TB_ARRAY IS TABLE OF EMP_RECORD
INDEX BY BINARY_INTEGER;

ARRAY TB_ARRAY;

CURSOR CUR_EMP IS
SELECT P_ID, FIRSTNAME, ADDRESS
FROM EMP;

BEGIN

OPEN CUR_EMP;

LOOP

FETCH CUR_EMP BULK COLLECT INTO ARRAY LIMIT 10;

FORALL I IN 1..ARRAY.COUNT
INSERT INTO DEPT (P_ID, FIRSTNAME, ADDRESS)
VALUES (ARRAY(I.EMP_P_ID), ARRAY(I.EMP_FIRSTNAME),
ARRAY(I.EMP_ADDRESS));

--DBMS_OUTPUT.PUT_LINE(ARRAY(I.EMP_P_ID));

EXIT WHEN CUR_EMP%NOTFOUND;

END LOOP;

CLOSE CUR_EMP;

END;


Tom Kyte
August 04, 2009 - 1:37 pm UTC

can anyone else see the error? They clearly wrote:

...
I have written following procedure to insert data into a table but getting
following error
while i am using FORALL,
please can you explain where i am getting wrong?

...

It is not showing up on my screen and when I run their code, I get errors about missing objects in the database - which cannot be the case for them....



(hint to the poster - you sort of need to give us a way to reproduce your findings or at least give us a clue as to the error you are actually receiving and where in the code it is being hit)




I just scanned your code and it is very strange - please explain to us what you think it should be doing.

... ARRAY(I.EMP_P_ID) ...

that is strange looking to me, not sure what you are trying to do with that.

array is the array
array(i) is the i'th record in the array
array(i).emp_p_id would be the i'th emp_p_id in it



and in short, this should have NO PROCEDURAL CODE, this is a simple insert as select - period.
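For instance, a minimal sketch reusing the poster's table and column names:

insert into dept( p_id, firstname, address )
select p_id, firstname, address
  from emp;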


did you mean to do this?

ops$tkyte%ORA10GR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte%ORA10GR2> create table new_emp as select * from scott.emp where 1=0;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2      cursor c is select empno, ename, hiredate from emp;
  3
  4      type tableType is table of c%rowtype index by binary_integer;
  5      l_data tableType;
  6  begin
  7      open c;
  8      loop
  9          fetch c bulk collect into l_data limit 10;
 10          forall i in 1 .. l_data.count
 11              insert into (select empno, ename, hiredate from new_emp) values l_data(i);
 12          exit when c%notfound;
 13      end loop;
 14      close c;
 15  end;
 16  /

PL/SQL procedure successfully completed.

if condition on array

Kishore, August 21, 2009 - 4:16 pm UTC

Hi Tom,

Once you have your pl/sql table populated, if you want to see whether a specific value exists or not, is it possible to seek to that value?

For example, in the below code I said IF v_rec(2).na = 'KISHORE' because I know the value of v_rec(2).na. But I want to know whether the value 'KISHORE' exists anywhere in that column - how is that possible?

DECLARE
TYPE cust_type IS RECORD (
sno NUMBER (10),
na VARCHAR2 (50)
);

TYPE rec_type IS TABLE OF cust_type
INDEX BY BINARY_INTEGER;

v_rec rec_type;
na2 VARCHAR2 (50);
BEGIN
v_rec (1).sno := 1;
v_rec (1).na := 'KRISHNA';
v_rec (2).sno := 2;
v_rec (2).na := 'KISHORE';
DBMS_OUTPUT.put_line (v_rec (1).sno || v_rec (1).na);
DBMS_OUTPUT.put_line (v_rec (2).sno || v_rec (2).na);

IF v_rec (2).na = 'KISHORE'
THEN
DBMS_OUTPUT.put_line ('caught');
END IF;
END;
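One way, keeping the same declarations, is simply to scan the collection - a minimal sketch (index-by tables of records have no built-in value lookup, so a linear scan is the usual approach):

DECLARE
   TYPE cust_type IS RECORD (
      sno NUMBER (10),
      na  VARCHAR2 (50)
   );

   TYPE rec_type IS TABLE OF cust_type
      INDEX BY BINARY_INTEGER;

   v_rec   rec_type;
   v_found BOOLEAN := FALSE;
BEGIN
   v_rec (1).sno := 1;
   v_rec (1).na := 'KRISHNA';
   v_rec (2).sno := 2;
   v_rec (2).na := 'KISHORE';

   -- linear scan; stop at the first match
   FOR i IN 1 .. v_rec.COUNT
   LOOP
      IF v_rec (i).na = 'KISHORE'
      THEN
         v_found := TRUE;
         EXIT;
      END IF;
   END LOOP;

   IF v_found
   THEN
      DBMS_OUTPUT.put_line ('caught');
   END IF;
END;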


RE: Forall Issue May 2, 2008 @yoav ben moha

Scott, November 12, 2009 - 11:25 am UTC


Suppose there is a requirement to schedule this insert via dbms_job / dbms_scheduler....What are the options to store the code in the database and for it to pass the "Do it in SQL" test?  

Please consider:

SQL> CREATE PROCEDURE
  2      my_scheduled_insert
  3          is
  4        begin
  5
  6                INSERT INTO test_hist
  7                SELECT * FROM test;
  8
  9         end my_scheduled_insert;
 10    /

Is this SQL or Procedural?  

-----------------------------------------------
Followup   May 2, 2008 - 7am Central time zone:

why are you writing procedural code 

why not just 


Insert Into test_HIST
Select * from test
 Where Inst_prod_id in (    SELECT INST_PROD_ID
  FROM source WHERE  Status = 1 )
-------------------------------------------------------

Tom Kyte
November 15, 2009 - 2:15 pm UTC

I would say:


SQL> CREATE PROCEDURE
  2      my_scheduled_insert
  3          is
  4        begin
  5
  6                INSERT INTO test_hist
  7                SELECT * FROM test;
  8
  9         end my_scheduled_insert;
 10    /


is not really procedural, it is set based.

It is a stored procedure
Sans procedural code.


When I write "why is there any procedural code", I'm talking about looping, if/then/else, etc.

Bulk collect

reader, November 13, 2009 - 4:55 am UTC

Hi Tom,

I have recently compressed some tables and we have an existing process that inserts into these tables using FORALL. The inserts are around the 15 million mark. As you know, to keep the data compressed we need to insert /*+ append */, and that can't be done in a FORALL statement.

My SQL is as follows (simplified) ...

declare

cursor c1 is
select key1,
key2
from t1;

type a is table of x.a%type;
type b is table of y.b%type;

g_a a;
g_b b;

begin
open c1;
loop
fetch c1 bulk collect into g_a, g_b limit 1000;

if g_a.count = 0 then exit; end if;

forall element in g_a.first .. g_a.last
insert into compressed_table
(a,b,c,d,e...z)
select * from
(select a,b,c,d,e ...
from another_table b
where b.a = g_a(element)
and b.b = g_b(element))
where rownum = 1;
end loop;
close c1;

commit;
end;

So using the above method the data would be loaded in uncompressed format. I tried re-writing this using a single SQL statement and analytic function and row_number() but I blew the TEMP tablespace ...

e.g. insert /*+ APPEND */ into compressed_table
(a,b...z)
SELECT a,b,c..z
FROM (select p.*,
row_number() over
(partition by a,b
order by p desc ,r asc ,s asc ,t asc ) rn
from t1 a,
another_table b
where key1 = b.a
and key2 = b.b)
where rn = 1

Do I have any other options I can try? Thanks



Tom Kyte
November 15, 2009 - 2:41 pm UTC

how big was temp - many times, that is the problem (make temp larger is always an option)

additionally - what are your pga settings?

and without knowing what columns come from where - it would not be possible to suggest a rewrite.

Bulk Collect

reader, November 13, 2009 - 6:36 pm UTC

Hi Tom,

I forgot to add my ORDER BY clause to my bulk collect PL/SQL - I meant

begin

...
forall element in g_a.first .. g_a.last
insert into compressed_table
(a,b,c,d,e...z)
select * from
(select a,b,c,d,e,f ...
from another_table b
where b.a = g_a(element)
and b.b = g_b(element)
order by d desc, e asc, f desc)
where rownum = 1;
...
end;

When I look at this piece of SQL: contrary to what people might think, it doesn't just add one record overall (rownum = 1) in the FORALL; rather, for each element of the 1000-row window (defined by the limit) it orders the matching rows by those columns and picks out one record (based on the order by).

Is there a way of accomplishing this functionality using an Analytic Function and INSERT /*+ APPEND */ INTO compressed_table SELECT ...?

Bulk Collect

reader, November 16, 2009 - 2:59 am UTC

Hi Tom - thanks for your prompt response.

pga_aggregate_target = 134MB
workarea_size_policy = auto
TEMP tablespace size = 10.7 GB

DBA's don't give us access to v$sort_usage, v$tempfile or v$sort_segment. I had access to dba_temp_files, so I found the location of the TEMP datafile on the UNIX box and took the size from there.

Tom Kyte
November 23, 2009 - 9:17 am UTC

that pga aggregate target is tiny, what is the reasoning behind that.

If you have 10.7 gb temp tablespace, one would think "you do lots of big sorts", in order to make big sorts more efficient - one would typically have a lot more memory allocated to that.

You (your session) gets to use about 5% of the pga aggregate target, but you won't use it all at once - so you'll use less than 5% of the pga aggregate target to sort - and then page to disk. You might be using about 5mb or less to sort - ask the DBA's "why is that so very very very low - you do realize that number is an aggregate total for EVERY SESSION - not per session, you have limited EVERY SESSION to about 130mb of sort space in memory at most...."
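Worked out with these numbers: 5% of a 128MB pga_aggregate_target is 128 * 0.05 ≈ 6.4MB, and since a session never gets all of that at once for a single sort, that is roughly 5MB or less of in-memory sort space before the workarea spills to TEMP.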

PGA Aggregate Target

a reader, December 08, 2009 - 7:49 am UTC

Hi Tom - Our DBA's have gone to Oracle EM 10g, PGA Target Advice and advised that 128MB for PGA aggregate target is fine.

I can't paste the graph in here but the cache hit percentage is around 94% before it levels off at around 130MB PGA Target ...
Tom Kyte
December 10, 2009 - 12:43 pm UTC

2mb isn't going to make much of a difference for anything.

PGA Aggregate Target

a reader, December 11, 2009 - 4:58 am UTC

So Tom, does that mean that we should disregard whatever is being suggested in PGA Aggregate Target advice? The thing is I need to have a good reason for suggesting an increase in PGA Aggregate Target. Why does 10g EM give us such a low optimal target?
Tom Kyte
December 11, 2009 - 7:42 am UTC

I fail completely to understand why you say "does that mean that we should disregard whatever is being suggested in
PGA Aggregate Target advice?"

can you tell me why you said that? What in the conversation led you to that thought?

What I saw was:


Hi Tom - Our DBA's have gone to Oracle EM 10g, PGA Target Advice and advised that 128MB for PGA
aggregate target is fine.

I can't paste the graph in here but the cache hit percentage is around 94% before it levels off at
around 130MB PGA Target ...



and I basically said "128mb, 130mb - six one way, half dozen the other". 128mb is what you need given your workload.

Now, where did you get the idea that "this should be ignored"?

PGA Aggregate Target

a reader, December 13, 2009 - 10:48 am UTC

You are mentioning that the PGA aggregate target is too small (128MB) given that we have 10.7Gb of TEMP tablespace. The DBA's have set the PGA Aggregate target to 128MB because that is what is being suggested in Enterprise Manager - PGA Aggregate target advice. My question was - why is 128MB being suggested as the optimal target in EM? How does it come to that conclusion - based on current workloads? What arguments should I use to persuade the DBA's to increase the PGA Agg value - seeing as what is being suggested by EM is too low?
Tom Kyte
December 14, 2009 - 8:26 am UTC

Ok, now that we have the advice from EM, let me turn the question around.

If your current workload, as observed, only needs a 128mb pga aggregate target, why do you have a 10.7gb temp area - it wouldn't seem you are using it?

PGA Aggregate Target

A reader, December 14, 2009 - 5:22 pm UTC

The question of why we have a 10.7 GB Temp tablespace - I am not sure. Presumably because some DBA thought we would be doing a lot of sorting. What I do know is that when we use analytic functions (on largish datasets, ~20 million records) we will consume all the TEMP tablespace. I don't know how we should estimate what the PGA Aggregate setting should be.
Tom Kyte
December 15, 2009 - 8:32 am UTC

If you have a query that consumes 10.7 gb of temp (or even half of that), you aren't going to be doing it in memory.

The advisor looks at what you've been doing - how much you've needed (memory/disk wise)

and then, based on that and the possible settings for pga_aggregate_target, suggests "this would be the most reasonable size"

If you do lots of queries that use 1mb of memory and lots of queries that use 5gb of memory, it knows the 5gb aren't going to happen in ram so it just sort of considers the stuff that can.

And 1mb queries - would lead to a relatively small pga_aggregate_target.


Get into the statspack/AWR reports, in the advisor section - they not only give you the advice but all of the supporting data used to give such advice.

PGA Aggregate Target

anon, December 15, 2009 - 10:59 am UTC

Thanks Tom - that's a great starting point I'll follow through on that!

Please clarify

Pranav, January 11, 2010 - 8:36 am UTC

Hi Tom,
Do PL/SQL records have attributes like COUNT/FIRST/LAST
so that we can loop over them?
I tried these attributes but I am getting errors.
Any workaround for this?



Tom Kyte
January 18, 2010 - 12:11 pm UTC

no, a record is a record, you cannot dynamically iterate over the components like that.
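For contrast, a minimal sketch - collections do carry those attributes, records do not:

declare
    type rec_t is record ( a number, b number );
    type tab_t is table of number index by binary_integer;

    r rec_t;
    t tab_t;
begin
    t(1) := 10;
    t(2) := 20;
    dbms_output.put_line( t.count );  -- collections: COUNT/FIRST/LAST all work

    r.a := 1;                         -- records: you reference fields by name only;
    r.b := 2;                         -- r.count / r.first would not compile
end;
/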

Parse a text file and load data in table

SV, January 28, 2011 - 12:56 pm UTC

Tom

I need to parse an input file that has multiple master-detail sections and insert the data into a table, from which I create the required csv reports.

Below is a snippet of the input file
.......
.......
.......
492990*TRADE DATE BALANCES - REGION 017 PAGE: 13371
492990*
492990*REPORT: 009 CASH ACCOUNTS REQUIRING MARGIN
492990*ACCOUNT NO. DEBIT CREDIT
492990*--------------- ---------- -----------
492990*225-30213-C$-11 25.50
492990*225-30282-C$-17 27.99
492990*225-30286-C$-13 17.10
492990*225-30474-C$-15 122.05
492990*225-30531-C$-16 42.00
492990*225-30549-C$-16 26.89
492990*225-30712-C$-17 27.15
492990*225-30821-C$-15 53.45
492990*225-31422-C$-16 86.76
492990*225-31447-C$-17 79.68
492990*225-31700-C$-19 10.78
492990*225-31733-C$-10 128.39
492990* U$-10
492990* 128.39
492990*225-31794-C$-16 162.35
492990*225-31988-C$-12 17.98
492990*225-90311-C$-16 20,826.00-
492990*
492990*
492990*TRADE DATE BALANCES - REGION 017 PAGE: 13372
492990*TRAC: TSC4D
492990*REPORT: 010 FINANCIAL INSTITUTIONS - AI
492990*ACCOUNT NO. DEBIT CREDIT
492990*--------------- ---------- -----------
492990*214-60843-C$-10 5,104.70-
492990*215-57270-C$-15 67,392.78-
.......
.......

I have created a shell script to parse the file line by line, insert 1 record at a time into a Staging Table, and commit before leaving that sqlplus session to parse the next record.

The script ran for more than 4 hours (after which I broke execution).

I then tried the following.

1. Load the complete file into a temporary 2-column (line, data) table.

2. Move the Unix parsing code into a DB package and parse the data row by row, inserting into the Staging Table.

When I ran this, it too took 2.5 hours to complete.

Below is my parsing code which is part of a DB Package.

------------------------------
PROCEDURE parsedata
IS
CURSOR c1 IS
SELECT line, data FROM svs_sched4 WHERE line > 1 ORDER BY 1;

CURSOR c_region (v_reg_line NUMBER) IS
SELECT line, DATA FROM svs_sched4 WHERE DATA LIKE '%' || 'PGM' || '%' AND line > v_reg_line ORDER BY 1;

CURSOR c_region_det (v_det_line NUMBER) IS
SELECT line, DATA FROM svs_sched4 WHERE line > v_det_line ORDER BY 1;

v_im_region VARCHAR2 (10);
v_im_date VARCHAR2 (10);
v_im_acct_cat_grp VARCHAR2 (8);
v_im_acct_cat VARCHAR2 (8);
v_im_clcode VARCHAR2 (8);
v_prev_im_clcode VARCHAR2 (8);
v_im_accfnd VARCHAR2 (10);
v_im_acctyp VARCHAR2 (10);
v_im_chkdig VARCHAR2 (10);
v_im_debit_amt VARCHAR2 (30);
v_im_credit_amt VARCHAR2 (30);
v_im_is_acctlink VARCHAR2 (50);
v_im_is_local VARCHAR2 (10);
v_line NUMBER;
v_rec_no NUMBER;

BEGIN
-- Get IM_DATE
SELECT TRIM (TO_CHAR (TO_DATE (SUBSTR (DATA, 110, 8), 'MM/DD/YY'), 'YYYY-MM-DD')) im_date
INTO v_im_date
FROM svs_sched4
WHERE DATA LIKE '%' || 'PAGE: 1' || '%';

v_line := 1;

FOR x IN c_region (v_line)
LOOP
-- Get the im_region, im_acct_cat_grp and im_acct_cat
IF (x.DATA LIKE '%' || 'REGION 010' || '%' OR x.DATA LIKE '%' || 'REGION 012' || '%' OR x.DATA LIKE '%' || 'REGION 017' || '%') THEN
v_im_region := SUBSTR (x.DATA, 99, 3);
v_line := x.line + 2;

SELECT SUBSTR (DATA, 10, 2)
INTO v_im_acct_cat
FROM svs_sched4
WHERE line = v_line;

v_im_acct_cat_grp := get_im_acct_cat_grp (v_im_acct_cat);
END IF;

v_line := x.line + 4;
v_rec_no := 1;

FOR y IN c_region_det (v_line)
LOOP
EXIT WHEN (y.DATA LIKE '%' || 'REGION 010' || '%' OR y.DATA LIKE '%' || 'REGION 012' || '%' OR y.DATA LIKE '%' || 'REGION 017' || '%');
v_line := y.line;
--Get IM_CLCODE
v_im_clcode := SUBSTR (y.DATA, 1, 3) || SUBSTR (y.DATA, 5, 5);

IF v_im_clcode IS NULL THEN  -- '' is NULL in Oracle, so "= ''" would never be true
v_im_clcode := v_prev_im_clcode;
END IF;

-- Get im_accfnd, im_acctyp, im_chkdig, im_debit_amt, im_credit_amt
v_im_accfnd := SUBSTR (y.DATA, 11, 2);
v_im_acctyp := SUBSTR (y.DATA, 14, 1);
v_im_chkdig := SUBSTR (y.DATA, 15, 1);
v_im_debit_amt := TRIM (SUBSTR (y.DATA, 79, 23));
v_im_credit_amt := TRIM (SUBSTR (y.DATA, 108, 22));

--Remove commas from Debit and Credit Amounts
IF v_im_debit_amt IS NOT NULL THEN
SELECT REPLACE (v_im_debit_amt, ',', '') INTO v_im_debit_amt FROM DUAL;
v_im_debit_amt := '-' || v_im_debit_amt;
END IF;

IF v_im_credit_amt IS NOT NULL THEN
SELECT REPLACE (v_im_credit_amt, ',', '') INTO v_im_credit_amt FROM DUAL;
END IF;

SELECT DECODE (COUNT (1), 0, 'N', 'Y') INTO v_im_is_local FROM client WHERE client.cl_clcode = v_im_clcode;

--Get IM_IS_ACCTLINK
IF v_im_is_local = 'N' THEN
v_im_is_acctlink := '';
ELSE
v_im_is_acctlink := get_im_acct_link (v_im_clcode, v_im_accfnd);

IF v_im_is_acctlink = 'AcctLink' THEN
v_im_is_acctlink := 'Y';
ELSE
v_im_is_acctlink := 'N';
END IF;
END IF;

--For each detail record insert a row in il_mtsc4d table
BEGIN
INSERT INTO il_mtsc4d
(im_date, im_region, im_acct_cat_grp, im_acct_cat, im_clcode, im_accfnd, im_acctyp, im_chkdig, im_debit_amt, im_credit_amt, im_is_local, im_is_acctlink
)
VALUES (v_im_date, v_im_region, v_im_acct_cat_grp, v_im_acct_cat, v_im_clcode, v_im_accfnd, v_im_acctyp, v_im_chkdig, v_im_debit_amt, v_im_credit_amt, v_im_is_local, v_im_is_acctlink
);
v_rec_no := v_rec_no + 1;
END;

v_prev_im_clcode := v_im_clcode;
END LOOP;
IF v_rec_no > 1000 THEN
COMMIT;
END IF;
END LOOP;
COMMIT;
END parsedata;
------------------------------

I want to use Bulk Inserts but am not sure how to do it in my case.

Any ideas on how to go about improving the performance or fine-tuning the above procedure?


Parse a text file and load data in table

SV, January 28, 2011 - 1:01 pm UTC

Below is the actual snippet of the input file. Sorry about the line numbers before :)
.......
.......
.......
492990*PGM: CAS20A 068 BMO NB CAPITAL EXPOSURE, SCHEDULE 4 - TRADE DATE BALANCES - REGION 017 DATE: 12/30/10 PAGE: 13371
492991*TRAC: TSC4D
492992*REPORT: 009 CASH ACCOUNTS REQUIRING MARGIN
492993*ACCOUNT NO. RR# ACCOUNT NAME MC CL DEBIT BALANCES CREDIT BALANCES
492994*--------------- --- ------------------------------ -- -- ----------------------- -----------------------
492995*225-30213-C$-11 6D4 MR MICHAEL CHOY A1 40 25.50
492996*225-30282-C$-17 6D4 MS PATTY WEBB A1 40 27.99
492997*225-30286-C$-13 6D4 MS JILLIAN DUPONT A1 40 17.10
492998*225-30474-C$-15 6D4 MR JOSEFF NAUBERT A1 40 122.05
492999*225-30531-C$-16 6D4 MR WINSTON LEUNG A1 40 42.00
493000*225-30549-C$-16 6D4 MRS DOLORES PODLUBNY A1 40 26.89
493001*225-30712-C$-17 6D4 DR JACKIE FELBERG A1 40 27.15
493002*225-30821-C$-15 6D4 MR DOUGLAS J NICKEL A1 40 53.45
493003*225-31422-C$-16 6D4 MR ALAN YU A1 40 86.76
493004*225-31447-C$-17 6D4 MR HARJIT DEOL A1 40 79.68
493005*225-31700-C$-19 6D4 MR TERENCE D RYSZ A1 40 10.78
493006*225-31733-C$-10 6D4 MR WAI KONG WONG A1 40 128.39
493007* U$-10
493008* *** TOTAL FOR ACCOUNT 128.39
493009*225-31794-C$-16 6D4 MR MALCOLM D LEANGEN A1 40 162.35
493010*225-31988-C$-12 6D4 MRS JIANMIN HE A1 40 17.98
493011*225-90311-C$-16 6MX MR GUENTHER MOELLER NC 55 20,826.00-
493012* *** TOTAL FOR REPORT 281,801.25 367,853.71-.......
.......

Tom Kyte
February 01, 2011 - 3:25 pm UTC

sorry, I cannot (will not actually) reverse engineer code - especially when it is so large.

Maybe if you had a specification (set of rules) that told us what needed to be done to the file we could look at it. I don't know why you need the multiple passes - you read the table inside the loop that reads the table. It seems to me that parsing a master/details would involve a single pass - you wouldn't need to read and re-read the table over and over again.
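For what it's worth, a minimal single-pass sketch of that idea against the poster's svs_sched4 / il_mtsc4d tables (only two columns shown; the real substr parsing rules, and the skipping of header/total lines, would replace the comments):

declare
    v_im_region varchar2(10);
begin
    for r in ( select line, data from svs_sched4 order by line )
    loop
        if r.data like '%REGION 010%'
           or r.data like '%REGION 012%'
           or r.data like '%REGION 017%'
        then
            -- a master (header) line: remember its attributes
            v_im_region := substr( r.data, 99, 3 );
        elsif v_im_region is not null then
            -- a detail line: parse it and insert, carrying the current
            -- master's attributes along
            insert into il_mtsc4d ( im_region, im_clcode )
            values ( v_im_region,
                     substr( r.data, 1, 3 ) || substr( r.data, 5, 5 ) );
        end if;
    end loop;

    commit;  -- one commit at the end, not one per batch
end;
/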

BULK COLLECT into plsql collection

Rajeshwaran, Jeyabal, May 26, 2011 - 11:57 am UTC

Tom:

Can you help me see what I am doing wrong here?

<quote>
When operating on multiple rows, a DML statement with the returning_clause stores values from expressions, rowids, and REFs involving the affected rows in bind arrays
</quote>

http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm#BABJFHEF

rajesh@ORA11GR2> create table emp_temp as select * from emp where 1 = 0;

Table created.

Elapsed: 00:00:00.03
rajesh@ORA11GR2>
rajesh@ORA11GR2> declare
  2     type num_type is table of number;
  3     l_empno num_type;
  4  begin
  5     insert into emp_temp(empno,ename)
  6     select empno,ename
  7     from emp
  8     returning empno
  9     bulk collect into l_empno;
 10  end;
 11  /
        returning empno
                  *
ERROR at line 8:
ORA-06550: line 8, column 12:
PL/SQL: ORA-00933: SQL command not properly ended
ORA-06550: line 5, column 2:
PL/SQL: SQL Statement ignored


Elapsed: 00:00:00.00
rajesh@ORA11GR2>
rajesh@ORA11GR2> declare
  2     type num_type is table of number;
  3     l_empno num_type := num_type();
  4  begin
  5     insert into emp_temp(empno,ename)
  6     select empno,ename
  7     from emp
  8     returning empno
  9     bulk collect into l_empno;
 10  end;
 11  /
        returning empno
                  *
ERROR at line 8:
ORA-06550: line 8, column 12:
PL/SQL: ORA-00933: SQL command not properly ended
ORA-06550: line 5, column 2:
PL/SQL: SQL Statement ignored


Elapsed: 00:00:00.01
rajesh@ORA11GR2>

Tom Kyte
May 26, 2011 - 1:33 pm UTC


working link:

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm#SQLRF55083


insert as select is currently not supported with the returning clause. Only insert with the values clause is.

You currently have to two-step this by:

a) selecting (bulk collecting) the data
b) forall i bulk inserting the data.
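A minimal sketch of that two-step, against the same emp/emp_temp tables:

declare
    type num_type  is table of number;
    type name_type is table of varchar2(10);

    l_empno num_type;
    l_ename name_type;
begin
    -- step (a): bulk collect what you were going to insert-as-select;
    -- l_empno now also serves as the "returned" set of keys
    select empno, ename
      bulk collect into l_empno, l_ename
      from emp;

    -- step (b): bulk insert it
    forall i in 1 .. l_empno.count
        insert into emp_temp( empno, ename )
        values ( l_empno(i), l_ename(i) );
end;
/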

bulk collect error

A reader, May 30, 2011 - 2:33 am UTC

DECLARE
TYPE NumList IS TABLE OF employees.EMPNO%TYPE;
enums NumList;
TYPE NameList IS TABLE OF employees.last_name%TYPE;
names NameList;
BEGIN
DELETE FROM emp_temp
RETURNING EMPNO, last_name
BULK COLLECT INTO enums, names;

DBMS_OUTPUT.PUT_LINE ('Deleted ' || SQL%ROWCOUNT || ' rows:');
FOR i IN enums.FIRST .. enums.LAST
LOOP
DBMS_OUTPUT.PUT_LINE ('Employee #' || enums(i) || ': ' || names(i));
END LOOP;
END;
/

When no record is deleted, the following error is returned - why?

ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 12
Tom Kyte
May 31, 2011 - 10:18 am UTC

because enums.first would be NULL. And that raises that error.

you should just use

for i in 1 .. enums.count
loop


you know the array is "dense" and the index starts with 1.

ops$tkyte%ORA11GR2> begin
  2          for i in null .. null
  3          loop
  4                  dbms_output.put_line( 'looping' );
  5          end loop;
  6  end;
  7  /
begin
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 2


Elapsed: 00:00:00.01
ops$tkyte%ORA11GR2> begin
  2          for i in 1 .. 0
  3          loop
  4                  dbms_output.put_line( 'looping' );
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.


bulk collect error

Rajeshwaran, Jeyabal, May 30, 2011 - 12:55 pm UTC

When no record is deleted, then the collections enums & names will be empty.

- enums.FIRST & enums.LAST will return NULL. Hence you get this error ORA-06502: PL/SQL: numeric or value error

The workaround is to check whether the collection has any data to process; if yes, go ahead with the FOR LOOP.


rajesh@ORA11GR2>
rajesh@ORA11GR2> DECLARE
  2    TYPE NumList IS TABLE OF emp.EMPNO%TYPE;
  3    enums  NumList;
  4    TYPE NameList IS TABLE OF emp.ename%TYPE;
  5    names  NameList;
  6  BEGIN
  7    DELETE FROM emp_temp
  8    RETURNING EMPNO, ename
  9    BULK COLLECT INTO enums, names;
 10
 11    DBMS_OUTPUT.PUT_LINE ('Deleted ' || SQL%ROWCOUNT || ' rows:');
 12    dbms_output.put_line ( 'enums.first = '||enums.first);
 13    dbms_output.put_line ( 'enums.last = '||enums.last);
 14
 15    if enums.count > 0 then
 16       FOR i IN enums.FIRST .. enums.LAST
 17       LOOP
 18             DBMS_OUTPUT.PUT_LINE ('Employee #' || enums(i) || ': ' || names(i));
 19       END LOOP;
 20    end if;
 21  END;
 22  /
Deleted 0 rows:
enums.first =
enums.last =

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
rajesh@ORA11GR2>


Tom Kyte
May 31, 2011 - 10:58 am UTC

or just use


for i in 1 .. enums.count


the logic is easier that way - that would be the more "standard" approach.

direct insert vs bulk collect

Rae, July 12, 2011 - 1:36 am UTC

Hi Tom,

I've been a fan for many years now. You always have great Oracle tips.

As I was reading through the messages on this thread, I thought "so when do I use Insert-Select vs Bulk Collect and vice-versa"? Perhaps you can shed light on this?

Cheers



Tom Kyte
July 13, 2011 - 1:36 pm UTC

you do insert as select whenever you can and you use bulk insert only when insert as select is not feasible.

Do I use bulk collect for a long time of subset SQL?

A reader, October 05, 2011 - 4:00 am UTC

Dear Sir!
I've got a problem whose code name is ORA-04030 (out of process memory). My server is AIX with 8GB of physical memory, and we've got an external program - BILLING, written in Delphi - that loads a lot of data into the database. It's a complex process: when the load into a partitioned table succeeds, it calls a function in Oracle that creates a table structure, creates indexes, etc., and inserts into a target table via create table as select (CTAS).

I've tried several things, such as increasing the PGA, decreasing the PGA, dumping the process by ospid, tracing whatever it ran, etc.; however, the process (called from Delphi) still fails with ORA-04030 at the CTAS step. It only appears when the data is big (count(*) about 40 million rows). I'm wondering, worried, and do not know how to continue.

I have thought about bulk collect; however, I'd rather not write code that complex.

Can you guide me?

Thank you!
Tom Kyte
October 05, 2011 - 10:45 am UTC

increasing the pga aggregate target will have zero effect.

decreasing it will only help IF there are a TON of other sessions using memory at the same time or your pga target was set WAY above your real memory. decreasing MIGHT leave free some memory that is currently allocated. I am doubtful.


You need to get to the root cause. 40,000,000 records is pretty tiny, small potatoes these days.

do you have a ulimit set on the process that does the create table as select (whatever spawned the dedicated server).

are you really running out of memory - if you monitor this, do you see the process grabbing gigabyte after gigabyte of memory?

I've solved this problem

tuan anh tran, October 05, 2011 - 9:59 pm UTC

Dear Sir!
Many thanks to you.
I solved this problem yesterday.

After the solutions my team and I came up with turned out not to be the best, I decided to embed code in the Delphi program to limit the array size - inserting 10,000 rows at a time, forcing a commit, adding the append hint, etc. But the longer the inserts took, the harder it was to wait.

After 2 hours I found that only 3,000,000 rows had been inserted, and I thought this was not the best solution for me. Looking again at the old dump file from when we trapped the 4030 (errorstack level 3), it showed:

ksedmp: internal or fatal error
ORA-04030: out of process memory when trying to allocate 258072 bytes (QERGH hash-agg,kllcqas:kllsltba)
Current SQL statement for this session:
CREATE TABLE CCS_HCM.hoadon_092011 AS   SELECT kh.ma_kh, ps.ma_tb,nvl(kh.sodaidien,ps.ma_tb) sodaidien,1 is_group,Nvl(ps.cuoc
ps_dichvu,0) cuocps_dichvu,Nvl(ps.cuoccv_thukhac,0) cuoccv_thukhac,
                         ps.cuoc_km,decode(ps.cuoc_cackhoan_thue,1,0,-1,0,ps.cuoc_cackhoan_thue) cuoc_cackhoan_thue,
                         decode(ps.cuoc_congdv,1,0,-1,0,ps.cuoc_congdv) cuoc_congdv,
                         decode(ps.thue,1,0,-1,0,ps.thue) thue,decode(ps.cuoc_tt,1,0,-1,0,ps.cuoc_tt) cuoc_tt,
                          ps.TIEN_EZPAY+ps.TIEN_GIAMTRU+ps.TIEN_TRATRUOC+ps.TIEN_DATCOC+ps.TIEN_SDTK cuoc_giamtru,
                         decode(ps.cuoc_tt,1,0,-1,0,ps.cuoc_tt) + ps.TIEN_EZPAY+ ,

.....

SO: 70000008af17dd0, type: 53, owner: 70000008c9d8120, flag: INIT/-/-/0x00
      LIBRARY OBJECT LOCK: lock=70000008af17dd0 handle=70000008e9e2ae8 mode=N
      call pin=0 session pin=0 hpc=0000 hlc=0000
      htl=70000008af17e50[700000081fd81b8,70000008af19328] htb=700000081fd81b8 ssga=700000081fd77a0
      user=70000008c9d8120 session=70000008ca00350 count=1 flags=[0000] savepoint=0x4e8c444e

....

============
Plan Table
============
--------------------------------------------------------+-----------------------------------+-------------------------+
| Id  | Operation                    | Name             | Rows  | Bytes | Cost  | Time      |  TQ  |IN-OUT|PQ Distrib |
--------------------------------------------------------+-----------------------------------+-------------------------+
| 0   | CREATE TABLE STATEMENT       |                  |       |       |  192K |           |      |      |           |
| 1   |  TABLE ACCESS BY INDEX ROWID | CHUKY_TCS        |     1 |    12 |     3 |  00:00:01 |      |      |           |
| 2   |   INDEX SKIP SCAN            | CKTC_PK          |     1 |       |     2 |  00:00:01 |      |      |           |
| 3   |  HASH UNIQUE                 |                  |     1 |    22 |     3 |  00:00:01 |      |      |           |
| 4   |   REMOTE                     | NHANVIEN_TCS     |     1 |    22 |     2 |  00:00:01 |DBL_2_MAIN.VINAPHONE.VN| R->S | 
          |
| 5   |  LOAD AS SELECT              |                  |       |       |       |           |      |      |           |
| 6   |   HASH JOIN RIGHT OUTER      |                  | 1348K | 2355M |  118K |  00:24:07 |      |      |           |
| 7   |    VIEW                      |                  |     1 |  1060 |     5 |  00:00:01 |      |      |           |
| 8   |     REMOTE                   |                  |       |       |       |           |DBL_2_MAIN.VINAPHONE.VN| R->S | 
          |
| 9   |    HASH JOIN                 |                  | 1348K |  960M |  118K |  00:24:07 |      |      |           |
| 10  |     TABLE ACCESS FULL        | KHACHHANGS_092011|  682K |  134M |  9419 |  00:02:54 |      |      |           |
| 11  |     HASH JOIN RIGHT OUTER    |                  | 1348K |  694M |   67K |  00:14:46 |      |      |           |
| 12  |      TABLE ACCESS FULL       | DANHBA_DDS_092011|  528K |   15M |  5659 |  00:01:08 |      |      |           |
| 13  |      VIEW                    |                  | 1348K |  654M |   28K |  00:06:46 |      |      |           |
| 14  |       HASH GROUP BY          |                  | 1348K |  637M |  7198 |  00:01:27 |      |      |           |
| 15  |        VIEW                  |                  | 1348K |  637M |  7198 |  00:01:27 |      |      |           |
| 16  |         HASH GROUP BY        |                  | 1348K |  141M |  2616 |  00:00:32 |      |      |           |
| 17  |          TABLE ACCESS FULL   | NOPS_092011      | 1348K |  141M |  2491 |  00:00:30 |      |      |           |
--------------------------------------------------------+-----------------------------------+-------------------------+
 
Content of other_xml column
===========================
  db_version     : 10.2.0.4
  parse_schema   : CCS_COMMON
  dynamic_sampling: yes
  plan_hash      : 529276534
  Outline Data:

...

FIVE LARGEST SUB HEAPS for heap name="QERHJ hash-joi"   desc=11095c9e0
******************************************************
SUBHEAP 2: desc=11c6c7b20
******************************************************
HEAP DUMP heap name="QERGH hash-agg"  desc=11c6c7b20
 extent sz=0xff00 alt=32767 het=32767 rec=0 flg=2 opc=2
 parent=1109136d8 owner=0 nex=0 xsz=0x0
EXTENT 0 addr=11fd3a6c8
  Chunk        11fd3a6d8 sz=   258072    freeable  "kllcqas:kllsltb"
EXTENT 707 addr=11b079748
  Chunk        11b079758 sz=    65304    freeable  "QERHJ hash-joi "  ds=11095c9e0

...

Permanent space    =  1389888
FIVE LARGEST SUB HEAPS for heap name="pga heap"   desc=110009f90
  Subheap ds=110192d20  heap name=   KSFD vec heap  size=          180376
   owner=0  latch=0
  Subheap ds=110386130  heap name=  Alloc environm  size=          171624
   owner=0  latch=0
  Subheap ds=11040e540  heap name=  PLS non-lib hp  size=           63288
   owner=0  latch=0
  Subheap ds=11075cef8  heap name=  koh-kghu call   size=           48536
   owner=0  latch=0
  Subheap ds=11040e3c8  heap name=        qmtmInit  size=           20920
   owner=0  latch=0

...

Total free space   =        0
UNPINNED RECREATABLE CHUNKS (lru first):
PERMANENT CHUNKS:
  Chunk        11046cdb0 sz=      184    perm      "perm           "  alo=184
Permanent space    =      184
FIVE LARGEST SUB HEAPS for heap name="Alloc server h"   desc=11046c578


I found something here (I'm sorry - I could understand only a little of the dump file):

- QERGH hash-agg: hash aggregation -> when hash joining, many buffers cached in memory must be pinned and sent from the server to the client. In my case the statement was called from the client, ran recursively (and called the other database over a db link), and was supposed to return to the client once cached successfully. But the client received nothing except the 4030 error.
- A savepoint appeared in the dump file. While the statement was running I queried the PGA statistics and found nothing exceeding the target. Trying the CTAS directly on the server, it completed with no error.
- Of the 5 big sub-heaps in the pga heap, the biggest is 20,920 bytes, while the PGA target I configured is 5GB. So no, I thought, the PGA is not the problem.

Could it be a problem with what the Oracle database sends to the client once it has finished the statement? I went back to searching Metalink and found Note ID 753439.1. The note described my problem exactly, and I tried everything in it, such as:

- Set ulimit to unlimited for the oracle user (AIX, configured = -1)
- Confirmed it fails when executed from a client, but succeeds when run locally on the server
- Restarted CRS and the database twice via srvctl

The simple solution: stop the listener and restart it on the nodes from the lsnrctl command line.

Explanation:
1.  The problem is identified as the listener has started up with crs_start or srvctl.

Srvctl or crs_start (which is not supported) uses lsnrctl start command to start the listener.
Srvctl is a wrapper script written to start the listener.  Consider it like a shell script. 
From 11.2, if you install Grid for ASM and use srvctl to administer the TNS listener
then you might hit this issue.

Please note that this has nothing to do with RAC or Clusterware.
This is not an Oracle memory resource issue since connecting via SQL*Net is using less PGA for the session than a local connection which is successful.


2.    If the listener is being started automatically, then we will inherit the limits of the user who started the process (root).   If the root user is given the same ulimits as the oracle user, the problem should go away as when the listener autostarts on boot (root user privs), it will start with the correct ulimits.


Have a nice day, Sir!

Thank you again for taking the time to reply to my question so quickly.

Getting Not enough values error while bulk insert

A reader, November 11, 2011 - 5:32 am UTC

Hi Tom,

Bear with me if my question is stupid and I'm really missing something basic. The scenario is as below:

SQL>  create table t2 as select  owner,object_id,object_name from all_objects  where 1=0;

Table created.

SQL>  create or replace procedure table_way(p_n in number default 100)
  2    as      
  3            cursor c11 is select owner,object_id,object_name from all_objects where rownum <= 50;

  4      
  5            type array is table of c11%rowtype;
  6      
  7            l_data array;
  8  
  9            --cursor c is select * from all_objects where rownum <= 50;
 10    begin
 11            open c11;
 12            loop
 13                    fetch c11 bulk collect into l_data LIMIT p_n;
 14  
 15                    forall i in 1 .. l_data.count
 16                       insert into t2 values l_data(i);
 17   
 18                   exit when c11%notfound;
 19            end loop;
 20            close c11;
 21    end;
 22  /

Procedure created.

SQL> alter table t2 add  col1 date default null;

SQL>  create or replace procedure table_way(p_n in number default 100)
  2    as      
  3            cursor c11 is select owner,object_id,object_name from all_objects where rownum <= 50;

  4      
  5            type array is table of c11%rowtype;
  6      
  7            l_data array;
  8  
  9            --cursor c is select * from all_objects where rownum <= 50;
 10    begin
 11            open c11;
 12            loop
 13                    fetch c11 bulk collect into l_data LIMIT p_n;
 14  
 15                    forall i in 1 .. l_data.count
 16                       insert into t2(owner,object_id,object_name) values l_data(i);
 17   
 18                   exit when c11%notfound;
 19            end loop;
 20            close c11;
 21    end;
 22  /

Warning: Procedure created with compilation errors.

SQL> show errors;
Errors for PROCEDURE TABLE_WAY:

LINE/COL ERROR
-------- -----------------------------------------------------------------
16/22    PL/SQL: SQL Statement ignored
16/66    PL/SQL: ORA-00947: not enough values

1) Not able to figure out why I am getting the "not enough values" error, though I am inserting into three specific columns of the table (i.e. insert into t2(owner,object_id,object_name)) and the values come from an array declared as the cursor rowtype, where I am selecting only those three columns in the cursor.

Please clarify

2) I have declared the newly added column with a default of null. If I am not inserting any value into it, it should be null, so I should not get the error.
Please correct me if I am wrong.


Thanks a lot.






Tom Kyte
November 11, 2011 - 8:27 am UTC

it would look like this:

ops$tkyte%ORA11GR2>  create or replace procedure table_way(p_n in number default 100)
  2    as
  3            cursor c11 is select owner,object_id,object_name from all_objects where rownum <= 50;
  4  
  5            type array is table of c11%rowtype;
  6  
  7            l_data array;
  8  
  9            --cursor c is select * from all_objects where rownum <= 50;
 10    begin
 11            open c11;
 12            loop
 13                    fetch c11 bulk collect into l_data LIMIT p_n;
 14  
 15                    forall i in 1 .. l_data.count
 16                       insert into (select owner, object_id, object_name from t2 ) values l_data(i);
 17  
 18                   exit when c11%notfound;
 19            end loop;
 20            close c11;
 21    end;
 22  /

Procedure created.


there really isn't a column list available with this capability.

See:

http://www.oracle.com/technetwork/issue-archive/o43asktom-094944.html

for how this all works, and the potential downsides to it.

bulk insert and sequence

jianhui, June 25, 2012 - 1:30 pm UTC

What if a column is populated by a sequence - what is the efficient way to bulk insert sequence nextval(s)?

For example, if the bulk size is 100, I definitely don't want to select seq.nextval from dual 100 times.
Tom Kyte
June 25, 2012 - 9:05 pm UTC

I don't know what you mean - where would dual come into the equation in a bulk insert???

forall i in 1 .. array.count
insert into t ( .. ) values ( seq.nextval, .... );


why would dual come into the picture with a BULK insert???
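A minimal runnable sketch of that, assuming a table t( id number, x varchar2(30) ) and a sequence seq:

declare
    type vc_tab is table of varchar2(30) index by binary_integer;
    l_names vc_tab;
begin
    l_names(1) := 'a';
    l_names(2) := 'b';
    l_names(3) := 'c';

    -- seq.nextval is evaluated once per row the forall inserts -
    -- no trips to dual required
    forall i in 1 .. l_names.count
        insert into t ( id, x )
        values ( seq.nextval, l_names(i) );
end;
/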

useful

A reader, June 27, 2012 - 9:20 am UTC

session uga memory 55836
Is session uga memory reported in megabytes or bytes?
Tom Kyte
June 27, 2012 - 9:51 am UTC

bytes

can we open cursor directly using forall like for loop

selva, June 30, 2012 - 8:58 am UTC

Dear Tom ,

can we open the cursor in FORALL and process DML operation .
Like we are using FOR LOOP .

Please give us your valuable inputs.
Tom Kyte
July 02, 2012 - 7:02 am UTC

er? I don't even have a clue what you might mean by this???


please explain?
